As part of developing Dynamics CRM/Dynamics 365 Customer Engagement (CRM/D365CE) plug-ins day in, day out, you can often forget about the Execution Mode setting, as evidenced by the fact that I make no mention of it in my recent tutorial video on plug-in development. In a nutshell, this setting enables you to customise whether your plug-in executes in Synchronous or Asynchronous mode. Now, you may be asking: just what the hell does that mean?! The best way of understanding is by rephrasing the terminology; it basically tells the system when you want your code to be executed.

Synchronous plug-ins execute all of your business logic whilst the record is being saved by the user, with this action not considered complete, and committed to the backend database, until the plug-in finishes. By comparison, Asynchronous plug-ins are queued for execution after the record has been saved; a System Job record is created and queued alongside other jobs in the system via the Asynchronous Service.

Another way of remembering the difference is to think back to the options available as part of a Workflow, which can be executed either in real time (synchronously) or in the background (asynchronously). Plug-ins are no different, giving you the flexibility to ensure your business logic is applied immediately or, if especially complex, queued so that the system has sufficient time to process it in the background.
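As a quick aside, the mode a step has been registered with can also be inspected at runtime via the Mode property on the execution context (0 = Synchronous, 1 = Asynchronous). A minimal sketch, assuming context and tracingService variables obtained from the service provider in the usual way:

//Mode is exposed on IPluginExecutionContext; 0 indicates Synchronous, 1 indicates Asynchronous execution
if (context.Mode == 1)
{
    //Executing asynchronously - the triggering record has already been committed at this point
    tracingService.Trace("Plug-in step is registered in Asynchronous mode.");
}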

I came across a strange issue with an arguably even stranger Synchronous plug-in the other day, which started failing after taking an inordinately long time saving the record:

Unexpected exception from plug-in (Execute): MyPlugin.MyPluginClass: System.AggregateException: One or more errors occurred.

The “strange” plug-in was designed so that, on the Create action of an Entity record, it goes out and creates various related records within the application, based on a set of conditions. We originally had issues with the plug-in a few months back, when it errored due to the execution time exceeding the 2 minute limit for sandbox plug-ins. A rather ingenious and much more accomplished developer colleague got around the issue by implementing a degree of asynchronous processing within the plug-in, achieved like so:

await Task.Factory.StartNew(() =>
{
    //Serialise access to the organisation service across worker threads
    lock (service)
    {
        Stopwatch stopwatch = Stopwatch.StartNew();
        Guid record = service.Create(newRecord);
        tracing.Trace("Record with ID " + record.ToString() + " created successfully after: {0}ms.", stopwatch.ElapsedMilliseconds);
    }
});

I still don’t fully understand exactly what this is doing, but I put that down to my novice-level C# knowledge 🙂 The important thing was that the code worked…until some additional processing was added to the plug-in, leading to the error message above.

At this juncture, our only choice was to look at forcing the plug-in to execute in Asynchronous mode by modifying the appropriate setting on the plug-in step within the Plugin Registration Tool:

After making this change and attempting to create the record again in the application, everything worked as expected. However, this did create a new problem for us to overcome: end users of the application were previously used to seeing the related records created by the plug-in within sub-grids on the Primary Entity form, which would then be accessed and worked through accordingly. As the very act of creating these records now took place in the background and took some time to complete, we needed to display an informative message advising the user to refresh the form after a few minutes. You do have the ability within plug-ins to display a custom message back to the user, but only in situations where you are throwing an error message, and it didn’t seem a particularly nice solution for this scenario.
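For reference, the custom-message mechanism just mentioned amounts to throwing an InvalidPluginExecutionException; a quick sketch showing why it didn’t fit, given that it also cancels the underlying operation:

//Surfaces the supplied text to the user in a dialog, but aborts the save entirely
throw new InvalidPluginExecutionException("Related records are still being created. Please refresh the form in a few minutes.");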

In the end, the best way of achieving this requirement was to implement a JScript function on the form, triggering whenever the form is saved and displaying a message box that the user has to click OK on before the save action is carried out:

function displaySaveMessage(context) {

    var eventArgs = context.getEventArgs();
    var saveMode = eventArgs.getSaveMode();

    //Save mode values: 1 = Save, 2 = Save & Close, 59 = Save & New, 70 = Autosave
    if (saveMode == 70 || saveMode == 2 || saveMode == 1 || saveMode == 59) {
        var message = "Records will be populated in the background and you will need to refresh the form after a few minutes to see them on the Sub-Grid. Press OK to save the record.";
        Xrm.Utility.alertDialog(message, function () {
            //No further action is required once the user clicks OK; the save then proceeds as normal
        });
    }
}

By feeding through the execution context parameter, you are able to determine the type of save action that the alert will trigger on; in this case, Save, Save & Close, Save & New and Autosave. Just make sure you configure your script with the correct properties on the form, which are listed below (a code-based registration alternative is sketched after the list):

  • Using the OnSave event handler
  • With the Pass execution context as first parameter setting enabled
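Alternatively, the handler can be attached in code from an OnLoad function instead of via the form properties dialog; handlers registered through addOnSave receive the execution context as their first parameter automatically. A short sketch, with onFormLoad being a purely illustrative name:

function onFormLoad() {
    //Attach displaySaveMessage to the form's OnSave event; the execution context is passed automatically
    Xrm.Page.data.entity.addOnSave(displaySaveMessage);
}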

From the end user’s perspective, they will see something similar to the below when the record is saved:

It’s a pity that we don’t have a similar kind of functionality exposed via Business Rules, enabling us to display OnSave alerts that are more in keeping with the application’s look and feel. Nevertheless, the versatility of JScript functions should be evident here; they can often achieve these types of bespoke actions with just a few lines of code.

When it comes to plug-in development, understanding the impact and processing time that your code has within the application is important for two reasons: first, in ensuring that end users are not frustrated by long loading times and, secondly, in informing the choice of Execution Mode when it comes to deploying out a plug-in. Whilst Asynchronous plug-ins can help to mitigate any user woes and present a natural choice when working with bulk operations within the application, make sure you fully understand the impact that these have on the Asynchronous Service and avoid a scenario where the System Job entity is queued with more jobs than it can handle.

If you are looking at automating the execution of SQL Server Integration Services .dtsx packages, then there are a few options at your disposal. The recommended and most streamlined route is to utilise the SSISDB catalog and deploy your packages to it from within Visual Studio; when working with SQL Server 2016 or greater, this also gives you the flexibility to deploy out single packages or multiple packages together. An alternative approach is to deploy your packages to the file system and then configure an Agent Job on SQL Server to execute them on a schedule and with runtime settings specified. This is as simple as selecting the appropriate dropdown option on the Agent Job Step configuration screen and setting the Package source value to File system:

Deploying out in this manner is useful if you are restricted from setting up the SSISDB catalog on your target SQL Server instance or if you are attempting to deploy packages to a separate Active Directory domain (I have encountered this problem previously, much to my chagrin). You also have the benefit of utilising other features available via the SQL Server Agent, such as e-mail notifications on fail/success of a job or conditional processing logic for job step(s). The in-built scheduling tool is also pretty powerful, enabling you to fine-tune your automated package execution to any itinerary you could possibly conjure up.
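Under the bonnet, a file system package step boils down to a dtexec invocation. A rough sketch of the equivalent manual command, with purely illustrative path, variable and server names:

dtexec /FILE "C:\SSISPackages\MyPackage.dtsx"

Runtime settings can be supplied in much the same way the Agent Job Step does, by overriding package properties:

dtexec /FILE "C:\SSISPackages\MyPackage.dtsx" /SET \Package.Variables[User::TargetServer].Properties[Value];PRODSQL01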

I encountered a strange issue recently with a DTSX package configured via the SQL Agent. Quite randomly, the package suddenly started failing each time it was scheduled for execution, with the following error generated in the log files:

Failed to decrypt an encrypted XML node because the password was not specified or not correct. Package load will attempt to continue without the encrypted information.

The issue was a bit of a head-scratcher, with myself and a colleague trying the following steps in an attempt to fix it:

  • Forcing the package to execute manually generated the same error – this one was a bit of a longshot but worth trying anyway 🙂
  • When executing the package from within Visual Studio, no error was encountered and the package executed successfully.
  • After replacing the package on the target server with the package just executed in Visual Studio (same version) and manually executing it, once again the same error was thrown.

In the end, the issue was resolved by deleting the Agent Job and creating it again from scratch. Now, if you are diagnosing the same issue and are looking to perform these same steps, it may be best to use the Script Job as option within SQL Server Management Studio to save yourself from any potential headache when re-building your Job’s profile:

Then, for good measure, perform a test execution of the Job via the Start Job at Step… option to verify everything works.
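If you prefer to stay at the command line, the same test execution can be triggered through sqlcmd (the job name below is illustrative):

sqlcmd -Q "EXEC msdb.dbo.sp_start_job @job_name = N'Execute MyPackage'"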

I am still stumped at just what exactly went wrong here, but it is good to know that an adapted version of the ancient IT advice of yore can be referred back to…

This is an accompanying blog post to my YouTube video Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Plug-in, the second in a series aiming to provide tutorials on how to accomplish developer focused tasks within Dynamics 365 Customer Engagement. You can watch the video in full below:

Below you will find links to access some of the resources discussed as part of the video and to further reading topics:

PowerPoint Presentation (click here to download)

Full Code Sample

using System;
using System.Globalization;

using Microsoft.Xrm.Sdk;

namespace D365.SamplePlugin
{
    public class PreContactCreate_FormatNameValues : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            //Obtain the execution context from the service provider.
            IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

            //Extract the tracing service for use in debugging sandboxed plug-ins
            ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

            tracingService.Trace("Tracing implemented successfully!");

            if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
            {
                Entity contact = (Entity)context.InputParameters["Target"];

                string firstName = contact.GetAttributeValue<string>("firstname");
                string lastName = contact.GetAttributeValue<string>("lastname");

                TextInfo culture = new CultureInfo("en-GB", false).TextInfo;

                if (firstName != null)
                {
                    tracingService.Trace("First Name Before Value = " + firstName);
                    contact["firstname"] = culture.ToTitleCase(firstName.ToLower());
                    tracingService.Trace("First Name After Value = " + contact.GetAttributeValue<string>("firstname"));
                }
                else
                {
                    tracingService.Trace("No value was provided for First Name field, skipping...");
                }

                if (lastName != null)
                {
                    tracingService.Trace("Last Name Before Value = " + lastName);
                    contact["lastname"] = culture.ToTitleCase(lastName.ToLower());
                    tracingService.Trace("Last Name After Value = " + contact.GetAttributeValue<string>("lastname"));
                }
                else
                {
                    tracingService.Trace("No value was provided for Last Name field, skipping...");
                }

                tracingService.Trace("PreContactCreate_FormatNameValues plugin execution complete.");
            }
        }
    }
}

Download/Resource Links

Visual Studio 2017 Community Edition

Setup a free 30 day trial of Dynamics 365 Customer Engagement

C# Guide (Microsoft Docs)

Source Code Management Solutions

Further Reading

MSDN – Plug-in development

MSDN – Supported messages and entities for plug-ins

MSDN – Sample: Create a basic plug-in

MSDN – Debug a plug-in

I’ve written a number of blog posts around plug-ins previously, so here’s the obligatory plug section 🙂 :

Interested in learning more about JScript Form function development in Dynamics 365 Customer Engagement? Then check out my previous post for my video and notes on the subject. I hope you find these videos useful and do let me know if you have any comments or suggestions for future video content.

When working with Virtual Machines (VMs) on Azure that have been deployed using a Linux Operating System (Ubuntu, Debian, Red Hat etc.), you would be forgiven for assuming that the experience would be rocky compared with working with Windows VMs. Far from it: you can expect an equivalent level of support for features such as full disk encryption, backup and administrator credentials management directly within the Azure portal, to name but a few. Granted, there may be some difference in the availability of certain extensions when compared with a Windows VM, but the experience on balance is comparable and makes the management of Linux VMs a less onerous journey if your background is firmly rooted in Windows operating systems.

To ensure that the Azure platform can effectively do some of the tasks listed above, the Azure Linux Agent is deployed to all newly created VMs. Written in Python and supporting a wide variety of common Linux distributions, I would recommend never removing this from your Linux VM, no matter how tempting that may be; the tradeoff could be potentially debilitating consequences within Production environments. A good example of this would be if you ever needed to add an additional disk to your VM, a task that becomes impossible without the Agent present. As well as having the proper reverence for the Agent, it’s important to keep an eye on how the service is performing, even more so if you are using Recovery Services vaults to take snapshots of your VM and back it up to another location. Otherwise, you may start to see errors like the one below being generated whenever a backup job attempts to complete:

It’s highly possible that many administrators are seeing this error at the time of writing this post (January 2018). The recent disclosures around Intel CPU vulnerabilities have prompted many cloud vendors, including Microsoft, to roll out emergency patches across their entire data centres to address the underlying vulnerability. Whilst it is commendable that cloud vendors have acted promptly to address this flaw, the pace of the work and the dizzying array of resources affected has doubtless led to some issues that could not have been foreseen from the outset. I believe, based on some of the available evidence (more on this later), that one of the unintended consequences of this work is the rise of the above error message with the Azure Linux Agent.

When attempting to deal with this error myself, there were a few different steps I had to try before the backup started working correctly. I have collected all of these below and, if you find yourself in the same boat, one or all of them should resolve the issue for you.

First, ensure that you are on the latest version of the Agent

This one probably goes without saying as it’s generally the most common answer to any error 🙂 However, the steps for deploying updates out onto a Linux machine can be more complex, particularly if your primary method of accessing the machine is via an SSH terminal. The process for updating your Agent depends on your Linux version – this article provides instructions for the most common variants. In addition, it is highly recommended that the auto-update feature is enabled, as described in the article.
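As a sketch for Ubuntu specifically (package names and commands differ between distributions), the update amounts to:

sudo apt-get update
sudo apt-get install --only-upgrade walinuxagent

Auto-update is then a case of ensuring AutoUpdate.Enabled=y is set within /etc/waagent.conf and restarting the service.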

Verify that the Agent service starts successfully.

It’s possible that the service is not running on your Linux VM. This can be verified by attempting to start it with the following command:

service walinuxagent start

Be sure to have your administrator credentials handy, as you will need to authenticate to successfully start the service. You can then check that the service is running by executing the ps -e command.
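For example, to narrow the process list down to the Agent:

ps -e | grep waagent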

Clear out the Agent cache files

The Agent stores a lot of XML cache files within the /var/lib/waagent/ folder, which can sometimes cause issues with the Agent. Microsoft has specifically recommended that the following command is executed on your machine after the 4th January if you are experiencing issues:

sudo rm -f /var/lib/waagent/*.[0-9]*.xml

The command will delete all “old” XML files within the above folder; a restart of the service should not be required. The date mentioned above links back to the theory suggested earlier in this post that the Intel chipset issues and this error message are connected in some way, as the dates seem to tie in with when news first broke regarding the vulnerability.

If you are still having problems

Read through the entirety of this article, and try all of the steps suggested – including digging around in the log files, if required. If all else fails, open a support request directly with Microsoft.

Hopefully, by following the above steps, your backups are now working again without issue 🙂

This is an accompanying blog post to my YouTube video Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Jscript Form Function, the first in a series that aims to provide tutorials on how to accomplish developer focused tasks within Dynamics 365 Customer Engagement. You can watch the video in full below:

Below you will find links to access some of the resources discussed as part of the video and to further reading topics.

PowerPoint Presentation (click here to download)

Full Code Sample

function changeAddressLabels() {

    //Get the control for the composite address field and then set the label to the correct, Anglicised form. Each line requires the current control name for 'getControl' and then the updated label name for 'setLabel'

    Xrm.Page.getControl("address1_composite_compositionLinkControl_address1_line1").setLabel("Address 1");
    Xrm.Page.getControl("address1_composite_compositionLinkControl_address1_line2").setLabel("Address 2");
    Xrm.Page.getControl("address1_composite_compositionLinkControl_address1_line3").setLabel("Address 3");
    Xrm.Page.getControl("address1_composite_compositionLinkControl_address1_postalcode").setLabel("Postal Code");

    //The Address 2 controls are not always present on the form, so check for each before setting its label
    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line1"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line1").setLabel("Address 1");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line2"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line2").setLabel("Address 2");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line3"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line3").setLabel("Address 3");

    //Assumed Anglicised label values for the remaining controls; adjust to suit
    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_city"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_city").setLabel("Town/City");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_stateorprovince"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_stateorprovince").setLabel("County");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_postalcode"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_postalcode").setLabel("Postal Code");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_country"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_country").setLabel("Country");
}
Download/Resource Links

Visual Studio 2017 Community Edition

Setup a free 30 day trial of Dynamics 365 Customer Engagement

W3 Schools JavaScript Tutorials

Source Code Management Solutions

Further Reading

MSDN – Use JavaScript with Microsoft Dynamics 365

MSDN – Use the Xrm.Page object model

MSDN – Xrm.Page.ui control object

MSDN – Overview of Web Resources

Debugging custom JavaScript code in CRM using browser developer tools (steps are for Dynamics CRM 2016, but still apply for Dynamics 365 Customer Engagement)

Have any thoughts or comments on the video? I would love to hear from you! I’m also keen to hear any ideas for future video content as well. Let me know by leaving a comment below or in the video above.