The ability to develop a custom barcode scanning application, compatible with a specific mobile, tablet or other device type, would traditionally fall within the realm of a skilled developer. The key benefit this brings is the ability to develop a highly tailored business application, with almost limitless potential. The major downside is, of course, the requirement to have this specific expertise within the business, along with sufficient time to plan, develop and test the application.

When it comes to using some of the newly introduced weapons in the Microsoft Online Services arsenal (such as Microsoft Flow, a topic which has been a recent focus on the blog), these traditional assumptions around application development can be discarded; empowering tools are provided to developers and non-developers alike, enabling them to create highly customised business applications without writing a single line of code. A perfect example of what can be achieved has already been hinted at. With PowerApps, you can create a highly functional barcode scanning application that performs a variety of functions – register event attendees’ details into a SQL database, look up a Product stored in Dynamics 365 Customer Engagement and much, much more. The seemingly complex becomes almost baffling in its simplicity.

I’ve been lucky to (finally!) have a chance to play around with PowerApps and the barcode scanning control recently. What I wanted to do as part of this week’s blog post was provide a few tips on using this control, with the aim of easing anyone else’s journey when using it for the first time.

Make sure your barcode type is supported – and factor in multiple barcode types, if required

The one thing you have to get your head around fairly early on is that the barcode control can only support 1 barcode type at any given time. At the time of writing this post, the following barcode types are supported:

  • Codabar
  • Code 128
  • Code 39
  • EAN
  • Interleaved 2 of 5
  • UPC

Note in particular the lack of current support for QR codes, which are essentially a barcode on steroids. There are, however, plans to introduce this as an option in the very near future, so we won’t have to wait too long for this.

Be sure to RTFM

It can be easy to forget this rather forceful piece of advice, but familiarising yourself with the barcode control, its properties and some of its acknowledged limitations can go a long way towards streamlining the deployment of a workable barcode scanner app. I’ve collated a list of the most useful articles below, which I highly encourage you to read through in their entirety:

Whilst PowerApps is generally quite a good tool to jump into without prerequisite knowledge, working with specific controls – such as this one – warrants much more careful study to ensure you fully understand what is and isn’t supported. What’s also quite helpful about the articles (jumping back to barcode types for a second) is that an example is provided of how to create a drop-down control to change the type of barcode that can be scanned at runtime. Typically, for an internal app where a defined barcode type is used consistently, this requirement may not be present; but it is nice to know the option is there.

For the best success rate when scanning, ensure the barcode control occupies as much space as possible

In one of the example posts above, a barcode control is shown sitting atop a Gallery control with a search box. Whilst this may look appealing from a usability standpoint, your main issue will be ensuring that devices using the app can reliably scan any potential barcode. For example, when testing this scenario on iOS devices, I found it impossible to achieve a consistent scan rate. I’ve not tested with other mobile devices to confirm whether the same issue occurs, but given that the problem on iOS appears to come down to a “memory issue”, I can imagine the same thing happening on lower-spec Android devices as well. Microsoft’s guidance for optimal barcode scanning provides a workaround that can be adapted:

To delay running out of memory on devices that are running iOS, set the Height property of the Barcode control to 700 (or lower) and the Scanrate property to 30.
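
In the designer, that guidance boils down to setting two properties on the barcode control itself – a quick sketch, assuming the control is named Barcode1:

    Barcode1.Height = 700
    Barcode1.Scanrate = 30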

With this in mind, you can create a dedicated screen in your PowerApp for barcode scanning that occupies sufficient space, whilst also providing an easy-to-operate experience for your end users. An optimal layout example is provided below:

Then, with a bit of expression/formula trickery, functionality for this screen can be nicely rounded out as follows:

  • Set the Default property value of the Text Input control to the name of the scanner control. This places the value of any scanned barcode within the input control, allowing the user to modify it if necessary.
  • For the Scan button, define the Back() expression for the OnSelect action. Then, on whichever control needs to receive the scanned value (e.g. the search box, if working with a gallery/search box), set its Default property to the Text Input control on the screen above.
  • Finally, the Cancel button needs to do the same as the Scan button, in addition to safely discarding whatever has been scanned. This can be achieved by specifying the Clear command for the control that is being passed the scanned value when the Scan button is selected. Rather handily, we can specify multiple expressions/formulas for a single event via a semi-colon delimiter. Therefore, to go back to the previous screen and clear the value of the Text Input control called MySearchBox, use the following expression (the sketch after this list pulls these property values together):
    • Back();MySearchBox.Clear
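
Pulling those steps together, the key property values end up looking something like this (a minimal sketch – the control names are purely illustrative, and it assumes the barcode control exposes its last scanned value via its Text property):

    TextInput_Scanned.Default = BarcodeScanner1.Text
    Button_Scan.OnSelect      = Back()
    MySearchBox.Default       = TextInput_Scanned.Text
    Button_Cancel.OnSelect    = Back();MySearchBox.Clear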

By having a barcode control that mimics the size indicated above, scan rates for iOS devices are improved by a significant margin and the control becomes a dream to use 🙂

And the rest…

To round things off, there are a couple of other concepts surrounding barcode controls that I thought may be useful to point out:

  • The Camera property is not documented at all for this control, but my understanding is that it provides a mechanism for specifying the device camera number/ID to use. So, for example, if you have a Windows 10 tablet device and want to ensure that the front camera is used to scan barcodes for an entry registration system, you can use this option to force this. I would imagine that hard-coding this setting limits the app’s portability across multiple device types, so I would advise caution in its use. Let me know in the comments below if you have any joy using this property at all.
  • By default, the Barcode detection setting is enabled. This displays a yellow box when a possible scan area is detected and also a red line to indicate when a barcode value is read successfully. This setting is designed to assist when scanning, but I have noticed it can be erratic from time to time. It can be disabled safely without reducing the scanning functionality too much; you just won’t know straight away when you have scanned something successfully.
  • As the control is reliant on individual device camera privacy settings, you will need to ensure that any corporate device policies do not block the usage of the camera whilst using PowerApps and, in addition, you will need to give explicit camera access permission when launching the app for the first time. I did encounter a strange issue on iOS where the camera did not load after granting this permission. A device reboot seemed to resolve the issue.

Conclusions or Wot I Think

I am a strong believer in providing fast and effective resolution to any potential business requirement or concern. Typically, those working within an IT department will not be directly responsible for generating revenue or value for the business beyond IT support. This is what makes an attack-focused, proactive and value-generating outlook an essential requirement, in my view, for anyone working in an IT-focused role. In tandem with this approach, tools that help you adhere to these tenets should be adopted readily and, after prudent assessment (i.e. balanced and not time-costly), without fear. PowerApps, and the use cases hinted at within this post, scores very highly on my list of essential tools for fostering this approach. It ensures that typical “Power” users within an organisation can develop quick and easy solutions, with a lack of previous experience not necessarily being a barrier. In my own case, having a firm background in Microsoft stack solutions meant that my initial journey with PowerApps was that much easier. Ultimately, I think, PowerApps aims to save time and reduce the business concerns that arise from bloated software deployments, putting a much more business-focused onus on individuals to develop valuable solutions within a particular department or organisation.

This may be somewhat obvious from some of my more recent blog posts, but I am a huge fan of Application Insights at the moment. The best analogy I can use is that it’s a web developer’s Swiss Army knife, one that really puts you in the driving seat of understanding and maintaining your web application for the long term. The analytic tools available on the “knife” – from error reporting and user trends to server performance – provide a treasure trove of information that can help to shape successful software deployments and ensure that developers properly understand the impact of their work within a production environment. If you are developing web applications on the Azure stack, there really is no good reason not to use it; and, even if you are not, you can still utilise the tool wherever your web application is hosted.

Recently, whilst fiddling with the tool, I was surprised to find another feature which I had overlooked – Availability testing. In simple terms, this enables you to configure proactive monitoring of your web application to make sure it is responding to requests. This can be particularly useful if you have availability targets or SLAs that your web application needs to adhere to. The feature is also tightly integrated with the other performance metrics gathered by the service, making it an ideal choice for satisfying any data analytic requirements. I wasn’t aware of the feature until I implemented it recently, so what I wanted to do in this week’s blog post is demonstrate how straightforward it is to set up Availability testing within Application Insights.

Setting up your first Availability Test

What ultimately aids the journey towards configuring your first test is the straightforward nature of the process; however, there are a couple of settings that may require further explanation. The remainder of this section will go through the steps involved from start to finish. Be sure to have provisioned an Application Insights resource before starting and confirm that your target website is readily accessible over the internet.

Once your Application Insights resource is provisioned, navigate to the Availability tab on the left-hand side of the resource blade:

This will then expand the Availability test summary window, which should be blank as indicated below. Click on the Add Test button to begin creating the test:

There are a couple of basic information fields that need populating at this juncture:

  • Test name: This shouldn’t need explaining 🙂 Keep in mind the importance of having a unique name for the test, particularly if you are planning on utilising the reporting capabilities discussed later on in this post.
  • Test type: You have two options at your disposal here – a simple URL ping test, which contacts the URL you specify and captures the response code from the server, or a Multi-step web test, a more powerful test that carries out a set of predefined steps when crawling through your website. Visual Studio Enterprise is required to put these tests together and there are pricing implications when using them. You can find out more about this option via the Microsoft Docs website. For this example, we are going to use the URL ping test.
  • URL: This is the exact URL that will be contacted as part of the test. This will either be the root azurewebsites.net domain or your assigned custom domain.

Your Create test window should resemble the below after configuring the above:

The next step is to define the location where your test will be triggered from. If you are hosting your application in an existing Azure region, then it is recommended to select the next nearest geographical region for testing. Multiple regions can also be selected. In the example below, UK West is selected due to the website in question existing in the UK South region:

Next is to define the success/fail conditions for the test. These will vary depending on the type of test you are conducting but, for URL ping tests, you can define a timeout period and also determine whether a specific HTTP response code is returned each time the test is executed. A 200 response code is the standard for indicating that a web page has loaded successfully, so this is the one that will be used for this example. You can also refine this further by checking for specific keywords in the response text:

Finally, you can designate the alert settings for when a failure is detected. This can range from an email to a specified list of contacts (more on this shortly) through to a webhook notification to an external HTTP/HTTPS endpoint. Example settings are utilised in the below screenshot that should be suitable for most scenarios:

With everything configured, your Create test window should resemble the below if done correctly:

Press OK to create your test, which will begin executing immediately. You will need to wait some time for results to start appearing. For this example, we are attempting to query a website that has a specific deny rule for any non-allowed host, including the Availability test server. This is reflected in the results below, which indicate a 100% failure rate for the selected period:

Failed Test Notifications

As part of the example above, the test is configured to send email notifications out to Azure subscription owners when a fault with the application is detected. An example of how this looks can be seen below:

Whilst the above is useful for application developers/administrators who want to receive proactive notifications relating to their web applications, it doesn’t provide much from a reporting standpoint. Fortunately, this is where one of the feature’s major benefits comes into the equation.

Analysing Previous Availability Tests

Similar to other statistical information gathered by Application Insights, the results of each individual test are logged within Query Analytics as part of the availabilityResults schema. The following query can be used within Query Analytics to return key information for failed tests within a 24 hour period:

availabilityResults
| project timestamp, id, name, location, success, message, duration, appId, appName
| where success == 0 and timestamp > ago(24h)
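
Building on the same success comparison, a variation along the following lines can be used to get a rough availability percentage per test and location over the same period (a sketch – adjust the time window and grouping to suit):

availabilityResults
| where timestamp > ago(24h)
| summarize total = count(), failures = countif(success == 0) by name, location
| extend availabilityPercentage = 100.0 * (total - failures) / total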

If you are also wanting to utilise the Continuous Export feature, as discussed previously on the blog, then the great news is that this information is also fully exportable to your destination of choice. A SQL database, Power BI Dashboard, Azure Event Hub…your options are quite limitless 🙂 Below is the query that you can use to extract the key information from your Continuous Export stream:

SELECT 
    availabilityFlat.ArrayValue.testTimestamp AS TestTimestamp,
    availabilityFlat.ArrayValue.testRunId AS TestRunID,
    availabilityFlat.ArrayValue.testName AS TestName,
    availabilityFlat.ArrayValue.runLocation AS RunLocation,
    availabilityFlat.ArrayValue.result AS TestResult,
    availabilityFlat.ArrayValue.message AS Message,
    availabilityFlat.ArrayValue.durationMetric.value AS Duration,
    AR.EventProcessedUTCTime AS AvailabilityResultEventProcessedUTCTime
INTO AvailabilityResultsOutput
FROM AvailabilityResults AS AR
CROSS APPLY GetElements(AR.[availability]) AS availabilityFlat

Conclusions or Wot I Think

The best things in life are those that just keep on giving. My journey with Application Insights to date very much mirrors this. The number of business challenges and scenarios that I have been able to chuck towards it, and walk away with an effective and easy-to-implement solution for, is growing every month. For me, it is slowly becoming the de facto tool to have deployed alongside any new web application. The ability to extend the tool further, so that it provides not just ongoing benefit but proactive capabilities via the Availability feature, is the area where I feel the tool thrives the most, both now and in the future. Anything that can take the headache out of diagnosing enterprise-grade web applications, whilst simultaneously driving insight into how a website experience can be improved for end users, wins in my book, and Application Insights continually proves itself in this regard.

If you are looking for an easy-to-use and highly expandable mail relay service, SendGrid represents the most developer-friendly solution on the market today. What’s even better is that it’s available on Azure, making it an ideal choice if you are developing an existing solution on the Azure stack. The thing I like best about the service is the extensive documentation covering every aspect of its Web API, structured to provide a clear explanation of endpoint methods, required properties and example outputs – exactly the way all technical documentation should be laid out.

I recently had a requirement to integrate with the SendGrid API to extract email statistic information into a SQL database. My initial thought was that I would need to resort to a bespoke C# solution to achieve these requirements. However, keenly remembering my commitment this year to find opportunities to utilise the service more, I decided to investigate whether Microsoft Flow could streamline this process. Suffice to say, I was pleasantly surprised, and what I wanted to do as part of this week’s blog post is demonstrate how I was able to take advantage of Microsoft Flow to deliver my requirements. In the process, I hope to get you thinking about how you approach integration requirements in the future, challenging some of the preconceptions around this.

Before we get into creating the Flow itself…

…you will need to create a table within your SQL database to store the requisite data. This script should do the trick:

CREATE TABLE [dbo].[SendGridStatistic]
(
	[SendGridStatisticUID] [uniqueidentifier] NULL DEFAULT NEWID(),
	[Date] DATE NOT NULL,
	[CategoryName] VARCHAR(50) NULL,
	[Blocks] FLOAT NOT NULL,
	[BounceDrops] FLOAT NOT NULL,
	[Bounces] FLOAT NOT NULL,
	[Clicks] FLOAT NOT NULL,
	[Deferred] FLOAT NOT NULL,
	[Delivered] FLOAT NOT NULL,
	[InvalidEmail] FLOAT NOT NULL,
	[Opens] FLOAT NOT NULL,
	[Processed] FLOAT NOT NULL,
	[SpamReportDrops] FLOAT NOT NULL,
	[SpamReports] FLOAT NOT NULL,
	[UniqueClicks] FLOAT NOT NULL,
	[UniqueOpens] FLOAT NOT NULL,
	[UnsubscribeDrops] FLOAT NOT NULL,
	[Unsubscribes] FLOAT NOT NULL
)

A few things to point out with the above:

  • The CategoryName field is only required if you wish to return statistic information grouped by category from the API. The example that follows primarily covers this scenario, but I will also demonstrate how to return consolidated statistic information if you want to exclude this column.
  • Microsoft Flow will only map the individual statistic count values to FLOAT fields. If you attempt to use an INT, BIGINT etc. data type, the option to map these fields will not appear. Kind of annoying, given that FLOATs are effectively “dirty”, imprecise numbers, but since we are not working with decimal values, this shouldn’t cause any real problems.
  • The SendGridStatisticUID is technically optional and could be replaced by an INT/IDENTITY seed instead, or removed entirely. Remember, though, that it is always good practice to have a unique column value on each table, to aid individual record operations.

In addition, you will also need to ensure you have generated an API key for SendGrid that has sufficient privileges to access the Statistic information for your account.

With everything ready, we can now “flow” quite nicely into building things out. The screenshot below demonstrates how the completed Flow should look from start to finish. The sections that follow will discuss what is required for each constituent element.

Recurrence

The major boon when working with Flows is the diverse range of options you have for triggering them – either based on certain conditions within an application or simply on a recurring schedule. For this example, as we will be extracting statistic information for an entire 24-hour period, you should ensure that the Flow executes at least once daily. The precise timing of this is up to you but, for this example, I have suggested 2 AM local time each day. The configured recurrence settings should resemble the below if done correctly:

You should be aware that when your Flow is first activated, it will execute straightaway, regardless of what settings you have configured above.

HTTP

As the SendGrid Web API is an HTTP endpoint, we can utilise the built-in HTTP connector to retrieve the information we need. This is done via a GET operation, with authentication achieved via a raw header value containing the API key generated earlier. The tricky bit comes when building the URI and defining how we want the Flow to retrieve our information – namely, all statistic information covering the previous day. There is also the (optional) requirement of ensuring that statistic information is grouped by category when retrieved. Fortunately, we can get around this problem by using a bit of expression trickery to build a dynamic URI value each time the Flow is executed. The expression to use will depend on whether or not you require category grouping. I have provided both examples below, so simply choose the one that meets your specific requirement:

Retrieve Consolidated Statistics
concat('https://api.sendgrid.com/v3/stats?start_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&end_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')))
Retrieve Statistics Grouped By Category
concat('https://api.sendgrid.com/v3/categories/stats?start_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&end_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&categories=cat1&categories=cat2')

Note: For this example, statistic information would be returned only for the categories cat1 and cat2. These should be updated to suit your requirements, and you can add additional categories by extending the URI value like so: &categories=cat3&categories=cat4 etc.

Your completed HTTP component should resemble the below if done correctly. Note in particular the requirement to have Bearer and a space before specifying your API key:
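
For reference, the raw request that the connector ends up issuing should look something like the below (the URI is simply the output of the expression above, and the key value shown is a placeholder):

GET https://api.sendgrid.com/v3/categories/stats?start_date=2018-04-01&end_date=2018-04-01&categories=cat1&categories=cat2
Authorization: Bearer SG.XXXXXXXXXXXXXXXX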

Parse JSON

A successful 200 response from the Web API endpoint will return a JSON object, listing all statistic information grouped by date (and category, if used). I always struggle when it comes to working with JSON – a symptom of working too long with relational databases, I think – and it is always a challenge for me to deserialise result sets. Once again, Flow comes to the rescue by providing a Parse JSON component. This was introduced with what appears to be little fanfare last year, but it really proves its capabilities in this scenario. The only bit you need to worry about is providing a sample schema so that the service can properly interpret your data. The Use sample payload to generate schema option is the surest way of achieving this, and you can use the example payloads provided on the SendGrid website to facilitate this:

Retrieve Consolidated Statistics: https://sendgrid.com/docs/API_Reference/Web_API_v3/Stats/global.html

Retrieve Statistics Grouped By Category: https://sendgrid.com/docs/API_Reference/Web_API_v3/Stats/categories.html
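
For the category-grouped variant, the response takes roughly the following shape – this is an abridged illustration only, so defer to the linked documentation for the authoritative payload to paste into the schema generator:

[
  {
    "date": "2018-04-01",
    "stats": [
      {
        "type": "category",
        "name": "cat1",
        "metrics": {
          "blocks": 0,
          "bounce_drops": 0,
          "bounces": 0,
          "clicks": 0,
          "deferred": 0,
          "delivered": 0,
          "invalid_emails": 0,
          "opens": 0,
          "processed": 0,
          "spam_report_drops": 0,
          "spam_reports": 0,
          "unique_clicks": 0,
          "unique_opens": 0,
          "unsubscribe_drops": 0,
          "unsubscribes": 0
        }
      }
    ]
  }
]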

An example screenshot is provided below in case you get stuck with this:

Getting the records into the database

Here’s where things get confusing…at least for me when I was building out this Flow for the first time. When you attempt to add an Insert row step to the Flow and specify your input from the Parse JSON step, Microsoft Flow will automatically add two Apply to each steps to properly handle the input. I can understand why this is the case, given that we are working with a nested JSON response, but it does provide an ample opportunity to revisit an internet meme of old…

Just pretend Xzibit is Microsoft Flow…

With the above ready and primed, you can begin to populate your Insert row step. Your first step here will more than likely be to configure your database connection settings using the + Add New Connection option:

The nicest thing about this is that you can utilise the on-premises gateway service to connect to a non-cloud database if required. The usual rules apply, regardless of where your database is located – use a minimum-privileged account, configure any required IP whitelisting etc.

With your connection configured, all that’s left is to provide the name of your table and then perform a field mapping exercise from the JSON response. If you are utilising the SendGridStatisticUID field, then this should be left blank to ensure that the default constraint kicks in correctly on the database side:

The Proof is in the Pudding: Testing your Flow

All that’s left now is to test your Flow. As highlighted earlier in the post, your Flow will automatically execute after being enabled, meaning that you will be in a position to determine very quickly if things are working or not. Assuming everything executes OK, you can verify that your database table resembles the below example:

This example output utilises the CategoryName value, which will result in multiple data rows for each date, depending on the number of categories you are working with. This is why the SendGridStatisticUID is so handy for this scenario 🙂
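
If you prefer to verify things from SQL Server Management Studio instead, a quick query along these lines against the table created earlier will confirm that rows are landing as expected:

SELECT TOP (10) [Date], [CategoryName], [Processed], [Delivered], [Opens], [Clicks]
FROM [dbo].[SendGridStatistic]
ORDER BY [Date] DESC;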

Conclusions or Wot I Think

When it came to delivering the requirements set out in this post’s introduction, I cannot overemphasise how much Microsoft Flow saved my bacon. My initial scoping exercise strongly led me towards having to develop a fully bespoke solution in code, with additional work required to deploy it out to a dedicated environment for continuous execution. This would have surely led to:

  • Increased deployment time
  • Additional cost for the provisioning of a dedicated execution environment
  • Wasted time/effort due to bug-fixing or unforeseen errors
  • Long-term problems resulting from maintaining a custom code base and ensuring that other colleagues within the business could properly interpret the code.

Instead, I was able to design, build and fully test the above solution in less than 2 hours, utilising a platform that has a wide array of documentation and online support and which, for our particular scenario, did not result in any additional cost. And this, I feel, best summarises the value that Microsoft Flow can bring to the table. It overturns many of the assumptions that you generally have to make when implementing complex integration requirements, allowing you to instead focus on delivering an effective solution quickly and neatly. And, for those who need a bit more punch due to very highly specific business requirements, then Azure Logic Apps act as the perfect meeting ground for both sides of the spectrum. The next time you find yourself staring down the abyss of a seemingly impossible integration requirement, take a look at what Microsoft Flow can offer. You just might surprise yourself.

Very much like a stopped clock telling the correct time twice a day, you can guarantee there will be two Dynamics 365 Customer Engagement releases each year. The first such occasion this year has come around quickly, with Microsoft setting out its stall for the Spring 2018 release earlier this week. The headline messages for this release are all about providing reassurance that the application is GDPR-ready and emphasising the maturity of PowerApps & Microsoft Flow as products, both in their own right and in conjunction with Dynamics 365. I’ve been studying the release notes in greater detail and, as part of this week’s blog post, I wanted to delve underneath the headlines and pick out some of the less touted, but potentially most impactful, new features that I am most looking forward to.

Answer Tags for Voice of the Customer Surveys

I made a commitment earlier this year to utilise the Voice of the Customer solution more. When used correctly, and if you are already heavily invested in Dynamics 365, the solution can present a straightforward and cost-effective way of starting to understand what customers are thinking, both with respect to specific experiences they have with your business and towards the organisation overall. One new feature to be introduced with Voice of the Customer, which I am looking forward to getting my hands on, is the ability to use Answer Tags to dynamically structure any subsequent questions within the survey. A good example of how this works in practice can be seen below, as shown in the release notes:

The key driver behind the automation of customer feedback tools should be to ensure that customers receive tailored and relevant surveys relating to services they have received, whilst also taking away any administrative headache when distributing and collating feedback answers. The feature above helps to solidify the benefits that Voice of the Customer can deliver when utilised in tandem with Dynamics 365 Customer Engagement, as well as allowing for more powerful and broadly applicable surveys to be structured at design time.

The rise of the Unified Interface

The rebrand of the entire Dynamics 365 Customer Engagement application has been much promised and touted over the past year. With this release, it becomes a reality. Pretty much every key application module – Customer Service, Sales, Field Service & Project Service Automation – has been updated to utilise the new Unified Interface. The following applications/solutions will also be Unified Interface ready as part of the release:

  • Dynamics 365 App for Outlook
  • LinkedIn Sales Navigator
  • Gamification

The Unified Interface is very much an offshoot of the Interactive Service Hub, which it now replaces fully as part of this release (Interactive Service Hub users should read this article carefully, as there are some important points to consider if you plan to upgrade in the near future). I saw the new unified interface in action when attending the CRMUG Meeting in Reading last year, and its introduction represents one of the ways Microsoft is investing heavily within the Dynamics 365 product moving forward. Its key benefits in comparison to the current experience can be summarised as follows:

  • A consistent end-user experience when accessing the application from desktop, mobile or tablet operating systems.
  • A fully mobile-responsive template that adjusts to your specific device to provide the optimal experience.
  • Better utilisation of empty space across entity views, forms etc.

With this release, administrators and developers need to start actively considering the impact the Unified Interface has on their systems and plan accordingly. Whilst I imagine there to be some pain involved as part of this, the end result – a much crisper and effective end-user interface – is worth the trade-off.

PowerShell Management for PowerApps

Up until now, your options for the automation of administrative tasks for PowerApps were limited. This issue was addressed to a certain extent for Dynamics 365 Customer Engagement Online very recently, via the introduction of PowerShell modules to facilitate organisation backups, instance resets and/or administrative mode toggling. These types of tools can go a long way if you have implemented automated release management tools for your various environments, taking human error out of the equation and streamlining deployments.

PowerApps looks to be going in the right direction in this regard, as the Spring Wave release will introduce a set of cmdlets covering the following areas:

  • Environments and environment permissions
  • PowerApps and app permission
  • Flows and flow permissions
  • Export and import of resource packages across environments
  • PowerApps and Flow licenses report (of active users)

Whilst definitely more administrative than deployment-focused, their introduction is no doubt a welcome step in the right direction.

Future of the Common Data Service

Microsoft released the Common Data Service (CDS) in late 2016, around the same time as Microsoft Flow and the Dynamics CRM rebrand. The premise was simple and admirable: a common framework for you to develop the data you need for your business, instantly re-usable across multiple applications. My chief concern when this was first announced was where it left the traditional customisation experience for Dynamics CRM/365 Customer Engagement, commonly referred to as xRM. Having to countenance potential redevelopment of “legacy” xRM systems, just to make them compatible with the CDS, could prove to be a costly and unnecessary exercise; this can perhaps be summed up best by the old saying “If it ain’t broke, don’t fix it!”.

There seems to have been a recognition of this dilemma as part of this release, with the following announcement regarding the Common Data Service and PowerApps specifically:

This release also includes major advancements to the Common Data Service for Apps (the data platform that comes with PowerApps) and client UX creation tools. These new capabilities are backward-compatible with the Dynamics 365 platform (frequently called the xRM platform), which means that Dynamics 365 customizers and partners can use already-acquired skills to create apps with PowerApps.

What I think this means, in simple terms, is that the customisation experience between Dynamics 365 Customer Engagement and PowerApps will, in time, become virtually indistinguishable. And this is great for a number of reasons – it removes any excuse that individuals/organisations may raise for not exploring PowerApps further, gives us the ability to quickly develop custom mobile applications for a particular Dynamics 365 solution and provides an easy framework to unify business data across multiple applications. This very much parallels the intended experience that Power BI offers traditional Excel users – namely, providing a familiar toolbox that can be leveraged to quickly deploy solutions with reduced technical debt. As with a lot of these announcements, we’re not going to know exactly how things operate until they are in our hands, but the immediate upshot appears to be the nullification of any new learning requirements for the CDS.

If you are looking for further detail regarding this change, then the ever so excellent Jukka Niiranen has published a blog post which really breaks down the detail behind this better than I ever could 🙂

Yes, XRM Is The New Common Data Service

Email Notifications for Microsoft Flow Failures

Similar to Voice of the Customer, I also promised myself to use Microsoft Flow more this year. After some uneventful early testing, the tool has become (for me) an indispensable means of achieving integration requirements that would traditionally require custom code and a dedicated server environment to execute. Microsoft Flow gets some much-deserved love and attention as part of this release, and the new feature which I think is going to be of the biggest help is email notifications for flow failures. The announced feature details are as follows:

Enable email notifications to detect flow failures. To enable this feature, go to the Flow details page, and then, on the contextual menu (…), subscribe to receiving emails about flow failures. These useful email notifications provide:

  • Information about why your flow failed.

  • Meaningful remediation steps.

  • Additional resources to help you build robust flows that never fail.

There’s so much more about this release that you could talk for days about…

…but I would be unsure whether anyone would still be listening by the end! You can dive into the detail behind each of the above highlights and what else to expect in the next release by downloading the release notes yourself. Let me know in the comments below what you are looking forward to the most as part of the next release.

This is an accompanying blog post to my YouTube video Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Custom Workflow Assembly. The video is part of my tutorial series on how to accomplish developer focused tasks within Dynamics 365 Customer Engagement. You can watch the video in full below:

Below you will find links to access some of the resources discussed as part of the video and to further reading topics:

PowerPoint Presentation (click here to download)

Full Code Sample

using System;
using System.Activities;

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Workflow;
using Microsoft.Xrm.Sdk.Query;

namespace D365.SampleCWA
{
    public class CWA_CopyQuote : CodeActivity
    {
        protected override void Execute(CodeActivityContext context)
        {
            // Obtain the workflow execution context, which exposes details about the record the workflow is running against.
            IWorkflowContext c = context.GetExtension<IWorkflowContext>();

            // Create an Organization Service instance that executes as the user who triggered the workflow.
            IOrganizationServiceFactory serviceFactory = context.GetExtension<IOrganizationServiceFactory>();
            IOrganizationService service = serviceFactory.CreateOrganizationService(c.UserId);

            ITracingService tracing = context.GetExtension<ITracingService>();

            tracing.Trace("Tracing implemented successfully!", new Object());

            // The quote to copy is the record the workflow executed against; retrieve the columns to carry over.
            Guid quoteID = c.PrimaryEntityId;

            Entity quote = service.Retrieve("quote", quoteID, new ColumnSet("freightamount", "discountamount", "discountpercentage", "name", "pricelevelid", "customerid", "description"));

            // Blank out the unique identifier so that a new quote record is created, rather than the original being modified.
            quote.Id = Guid.Empty;
            quote.Attributes.Remove("quoteid");

            quote.Attributes["name"] = "Copy of " + quote.GetAttributeValue<string>("name");
            Guid newQuoteID = service.Create(quote);

            // Retrieve the related quote products and notes so they can be re-created against the new quote.
            EntityCollection quoteProducts = RetrieveRelatedQuoteProducts(service, quoteID);
            EntityCollection notes = RetrieveRelatedNotes(service, quoteID);

            tracing.Trace(quoteProducts.TotalRecordCount.ToString() + " Quote Product records returned.", new Object());

            foreach (Entity product in quoteProducts.Entities)
            {
                product.Id = Guid.Empty;
                product.Attributes.Remove("quotedetailid");
                product.Attributes["quoteid"] = new EntityReference("quote", newQuoteID);
                service.Create(product);
            }
            foreach (Entity note in notes.Entities)
            {
                note.Id = Guid.Empty;
                note.Attributes.Remove("annotationid");
                note.Attributes["objectid"] = new EntityReference("quote", newQuoteID);
                service.Create(note);
            }
        }

        // Input argument surfaced in the workflow designer, allowing a quote record to be passed in.
        [Input("Quote Record to Copy")]
        [ReferenceTarget("quote")]
        public InArgument<EntityReference> QuoteReference { get; set; }

        private static EntityCollection RetrieveRelatedQuoteProducts(IOrganizationService service, Guid quoteID)
        {
            QueryExpression query = new QueryExpression("quotedetail");
            query.ColumnSet.AllColumns = true;
            query.Criteria.AddCondition("quoteid", ConditionOperator.Equal, quoteID);
            query.PageInfo.ReturnTotalRecordCount = true;

            return service.RetrieveMultiple(query);
        }
        private static EntityCollection RetrieveRelatedNotes(IOrganizationService service, Guid objectID)
        {
            QueryExpression query = new QueryExpression("annotation");
            query.ColumnSet.AllColumns = true;
            query.Criteria.AddCondition("objectid", ConditionOperator.Equal, objectID);
            query.PageInfo.ReturnTotalRecordCount = true;

            return service.RetrieveMultiple(query);
        }
    }
}

Download/Resource Links

Visual Studio 2017 Community Edition

Setup a free 30 day trial of Dynamics 365 Customer Engagement

C# Guide (Microsoft Docs)

Source Code Management Solutions

Further Reading

Microsoft Docs – Create a custom workflow activity

MSDN – Register and use a custom workflow activity assembly

MSDN – Update a custom workflow activity using assembly versioning (This topic wasn’t covered as part of the video, but I would recommend reading this article if you are developing an ISV solution involving custom workflow assemblies)

MSDN – Sample: Create a custom workflow activity

You can also check out some of my previous blog posts relating to Workflows:

  • Implementing Tracing in your CRM Plug-ins – We saw as part of the video how to utilise tracing, but this post goes into more detail about the subject, as well as providing instructions on how to enable the feature within the application (in case you are wondering why nothing is being written to the trace log 🙂 ). All code examples are for Plug-ins, but they can easily be repurposed to work with a custom workflow assembly instead.
  • Obtaining the User who executed a Workflow in Dynamics 365 for Customer Engagement (C# Workflow Activity) – You may have a requirement to trigger certain actions within the application, based on the user who executed a Workflow. This post walks through how to achieve this utilising a custom workflow assembly.

If you have found the above video useful and are itching to learn more about Dynamics 365 Customer Engagement development, then be sure to take a look at my previous videos/blog posts using the links below:

Have a question or an issue when working through the code samples? Be sure to leave a comment below or contact me directly, and I will do my best to help. Thanks for reading and watching!