Slight change of pace with this week's blog post, which will be a fairly condensed and self-indulgent affair – due to personal circumstances, I have been waylaid somewhat when it comes to producing content for the blog, and I have also been unable to make any further progress with my new YouTube video series. I'm hoping that normal service will resume shortly, meaning additional videos and more content-rich blog posts, so stay tuned.

I've been running the CRM Chap blog for just over 2 years now. Over this time, I have been humbled and proud to receive numerous visitors to the site, some of whom have been kind enough to provide feedback or to share some of their Dynamics CRM/365 predicaments with me. Having reached this landmark, now seems as good a time as any to look back on the posts that have received the most attention and to give those who missed them the first time around an opportunity to read them. In descending order, here is the list of the most viewed posts to date on the crmchap.co.uk website:

  1. Utilising SQL Server Stored Procedures with Power BI
  2. Installing Dynamics CRM 2016 SP1 On-Premise
  3. Power BI Deep Dive: Using the Web API to Query Dynamics CRM/365 for Enterprise
  4. Utilising Pre/Post Entity Images in a Dynamics CRM Plugin
  5. Modifying System/Custom Views FetchXML Query in Dynamics CRM
  6. Grant Send on Behalf Permissions for Shared Mailbox (Exchange Online)
  7. Getting Started with Portal Theming (ADXStudio/CRM Portals)
  8. Microsoft Dynamics 365 Data Export Service Review
  9. What’s New in the Dynamics 365 Developer Toolkit
  10. Implementing Tracing in your CRM Plug-ins

I suppose it is a testament to the blog's stated purpose that posts covering areas not exclusive to Dynamics CRM/365 rank so highly on the list; indeed, it demonstrates just how deeply intertwined the application is with other technology areas within the Microsoft "stack".

To all new and long-standing followers of the blog, thank you for your continued support and appreciation for the content 🙂

When developing Dynamics CRM/Dynamics 365 Customer Engagement (CRM/D365CE) plug-ins day in, day out, it can be easy to forget about the Execution Mode setting; this is evidenced by the fact that I make no mention of it in my recent tutorial video on plug-in development. In a nutshell, this setting enables you to specify whether your plug-in executes in Synchronous or Asynchronous mode. Now, you may be asking – just what the hell does that mean?!? The best way of understanding it is by rephrasing the terminology: it tells the system when you want your code to be executed. Synchronous plug-ins execute all of your business logic whilst the record is being saved by the user, and the save is not considered complete and committed to the backend database until the plug-in finishes. By comparison, Asynchronous plug-ins are queued for execution after the record has been saved; a System Job record is created and queued alongside other jobs in the system via the Asynchronous Service.

Another way of remembering the difference is to think back to the options available to you as part of a Workflow: they can be executed either in real time (synchronously) or in the background (asynchronously). Plug-ins are no different, giving you the flexibility to ensure your business logic is applied immediately or, if especially complex, queued so that the system has sufficient time to process it in the background.
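For the more programmatically minded, the Execution Mode ultimately boils down to a single attribute on the SDK Message Processing Step record that the Plugin Registration Tool creates on your behalf. As a rough sketch only (the helper name and the lookup IDs below are placeholders, and I am assuming the standard sdkmessageprocessingstep attribute names), registering a step in Asynchronous mode via the SDK would look something like this:

using System;
using Microsoft.Xrm.Sdk;

public static Guid RegisterAsynchronousStep(IOrganizationService service, Guid pluginTypeId, Guid sdkMessageId, Guid sdkMessageFilterId)
{
    // Placeholder values throughout - the step name and the lookup IDs would come from your own solution.
    Entity step = new Entity("sdkmessageprocessingstep");
    step["name"] = "MyPlugin.MyPluginClass: Create of Entity (asynchronous)";
    step["plugintypeid"] = new EntityReference("plugintype", pluginTypeId);
    step["sdkmessageid"] = new EntityReference("sdkmessage", sdkMessageId);
    step["sdkmessagefilterid"] = new EntityReference("sdkmessagefilter", sdkMessageFilterId);
    step["stage"] = new OptionSetValue(40);  // 40 = Post-operation
    step["mode"] = new OptionSetValue(1);    // 0 = Synchronous, 1 = Asynchronous
    step["rank"] = 1;                        // Execution order within the stage
    return service.Create(step);
}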

I came across a strange issue with an arguably even stranger Synchronous plug-in the other day, which started failing after taking an inordinately long time to save the record:

Unexpected exception from plug-in (Execute): MyPlugin.MyPluginClass: System.AggregateException: One or more errors occurred.

The "strange" plug-in was designed so that, on the Create action of an Entity record, it goes out and creates various related records within the application, based on a set of conditions. We originally had issues with the plug-in a few months back, when it started erroring due to the execution time exceeding the two-minute limit for sandboxed plug-ins. A rather ingenious and much more accomplished developer colleague got around the issue by implementing a degree of asynchronous processing within the plug-in, achieved like so:

// Queue the Create request onto a background thread so that the main
// plug-in thread is not blocked while the related record is created.
await Task.Factory.StartNew(() =>
{
    // IOrganizationService is not guaranteed to be thread-safe, so serialise access to it.
    lock (service)
    {
        // Time the Create call so the duration can be written to the tracing service.
        Stopwatch stopwatch = Stopwatch.StartNew();
        Guid record = service.Create(newRecord);
        tracing.Trace("Record with ID " + record.ToString() + " created successfully after: {0}ms.", stopwatch.ElapsedMilliseconds);
    }
});

I still don't fully understand exactly what this is doing, but I put that down to my novice-level C# knowledge 🙂 The important thing was that the code worked…until some additional processing was added to the plug-in, leading to the error message above.

At this juncture, our only choice was to look at forcing the plug-in to execute in Asynchronous mode by modifying the appropriate setting on the plug-in step within the Plugin Registration Tool:

After making this change and attempting to create the record again in the application, everything worked as expected. However, this did create a new problem for us to overcome – end users of the application were used to seeing the related records created by the plug-in within sub-grids on the Primary Entity form, which would then be accessed and worked through accordingly. As the creation of these records now took place in the background and took some time to complete, we needed to display an informative message advising the user to refresh the form after a few minutes. You do have the ability within plug-ins to display a custom message back to the user, but only when throwing an error, and that didn't seem to be a particularly nice solution for this scenario.

In the end, the best way of achieving this requirement was to implement a JScript function on the form. This triggers whenever the form is saved and displays a message box that the user has to click OK on before the save action is carried out:

function displaySaveMessage(context) {

    var eventArgs = context.getEventArgs();
    var saveMode = eventArgs.getSaveMode();

    // 1 = Save, 2 = Save & Close, 59 = Save & New, 70 = Autosave
    if (saveMode == 70 || saveMode == 2 || saveMode == 1 || saveMode == 59) {
        // Cancel the default save so the record is only saved once the user clicks OK.
        eventArgs.preventDefault();
        var message = "Records will be populated in the background and you will need to refresh the form after a few minutes to see them on the Sub-Grid. Press OK to save the record.";
        Xrm.Utility.alertDialog(message, function () {
            Xrm.Page.data.save().then(function () {
                Xrm.Page.data.refresh();
            });
        });
    }
}

By feeding through the execution context parameter, you are able to determine the type of save action that the alert triggers on; in this case, Save, Save & Close, Save & New and Autosave (save mode values 1, 2, 59 and 70 respectively). Just make sure you configure your script with the correct properties on the form, which are:

  • Using the OnSave event handler
  • With the Pass execution context as first parameter setting enabled

From the end user's perspective, they will see something similar to the below when the record is saved:

It's a pity that we don't have a similar kind of functionality exposed via Business Rules to display OnSave alerts that are more in keeping with the application's look and feel. Nevertheless, the versatility of JScript functions should be evident here; they can often achieve these types of bespoke actions with just a few lines of code.

When it comes to plug-in development, understanding the impact and processing time that your code has within the application is important for two reasons – first, in ensuring that end users are not frustrated by long loading times and, second, in informing the choice of Execution Mode when it comes to deploying out a plug-in. Whilst Asynchronous plug-ins can help to mitigate any user woes and present a natural choice when working with bulk operations within the application, make sure you fully understand the impact that they have on the Asynchronous Service and avoid a scenario where the System Job entity is queued with more jobs than it can handle.
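On that last point, it is worth keeping an eye on the System Job (asyncoperation) entity when you start pushing heavy workloads onto the Asynchronous Service. As a rough illustration only (the helper name is my own, and the count below will be capped at the first page of results returned), you could periodically check how many jobs are sat waiting to be processed with a query along these lines:

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static int CountWaitingSystemJobs(IOrganizationService service)
{
    // System Jobs live in the asyncoperation entity; statecode 0 = Ready,
    // i.e. the job is waiting to be picked up by the Asynchronous Service.
    QueryExpression query = new QueryExpression("asyncoperation")
    {
        ColumnSet = new ColumnSet("asyncoperationid"),
        Criteria = new FilterExpression
        {
            Conditions =
            {
                new ConditionExpression("statecode", ConditionOperator.Equal, 0)
            }
        }
    };

    return service.RetrieveMultiple(query).Entities.Count;
}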

If you are looking at automating the execution of SQL Server Integration Services .dtsx packages, then there are a few options at your disposal. The recommended and most streamlined route is to utilise the SSISDB catalog and deploy your packages to the catalog from within Visual Studio; when working with SQL Server 2016 or later, this also gives you the flexibility to deploy either individual packages or multiple packages together. An alternative approach is to deploy your packages to the file system and then configure an Agent Job on SQL Server to execute the package on a schedule, with the required runtime settings specified. This is as simple as selecting the appropriate dropdown option on the Agent Job Step configuration screen and setting the Package source value to File system:

Deploying out in this manner is useful if you are restricted from setting up the SSISDB catalog on your target SQL Server instance or if you are attempting to deploy packages to a separate Active Directory domain (I have encountered this problem previously, much to my chagrin). You also get the benefit of other features available via the SQL Server Agent, such as e-mail notifications on the failure/success of a job or conditional processing logic across job steps. The in-built scheduling tool is also pretty powerful, enabling you to fine-tune your automated package execution to just about any schedule you could conjure up.
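As an aside, if you ever need to drive a file system package from your own code rather than via the SQL Server Agent, the SSIS managed runtime can do this directly. A minimal sketch only – the helper name and package path are just examples, and you will need a reference to the Microsoft.SqlServer.ManagedDTS assembly:

using Microsoft.SqlServer.Dts.Runtime;

public static bool ExecutePackageFromFileSystem(string packagePath)
{
    // Load the .dtsx package from the file system and execute it in-process,
    // e.g. ExecutePackageFromFileSystem(@"C:\SSIS\MyPackage.dtsx");
    Application app = new Application();
    Package package = app.LoadPackage(packagePath, null);
    DTSExecResult result = package.Execute();
    return result == DTSExecResult.Success;
}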

I encountered a strange issue recently with a DTSX package configured via the SQL Agent. Quite randomly, the package suddenly started failing each time it was scheduled for execution, with the following error generated in the log files:

Failed to decrypt an encrypted XML node because the password was not specified or not correct. Package load will attempt to continue without the encrypted information.

The issue was a bit of a head-scratcher; a colleague and I tried the following steps in an attempt to resolve it:

  • Forcing the package to execute manually generated the same error – this one was a bit of a longshot but worth trying anyway 🙂
  • When executing the package from within Visual Studio, no error was encountered and the package executed successfully.
  • After replacing the package on the target server with the one just executed from Visual Studio (the same version) and manually executing it, once again the same error was thrown.

In the end, the issue was resolved by deleting the Agent Job and creating it again from scratch. Now, if you are diagnosing the same issue and are looking to perform these same steps, it may be best to use the Script Job as option within SQL Server Management Studio first, to save yourself any potential headache when re-building your Job's profile:

Then, for good measure, perform a test execution of the Job via the Start Job at Step… option to verify everything works.

I am still stumped as to what exactly went wrong here, but it is good to know that an adapted version of the ancient IT advice of yore ("have you tried turning it off and on again?") can be referred back to…