A vital part of any DevOps automation effort is facilitating automatic builds of code projects on a regular cycle. In larger teams, this becomes particularly desirable for several reasons:

  • Ensures that builds do not contain glaring code errors that would prevent a successful compile.
  • Enables builds to be triggered from a central, “master” location that all developers regularly ship code to.
  • Allows other automation tasks – such as running unit tests or deploying resources to development environment(s) – to be bolted on straightforwardly as part of the build stage.

The great news is that, when working with either Visual Studio Team Services or Team Foundation Server (VSTS/TFS), setting up your project's first automated build definition is straightforward. All steps can be completed directly within the GUI and – although some of the detailed configuration settings can be a little daunting at first glance – the drag-and-drop interface means you can quickly build consistent definitions that are easy to understand.

One such detailed configuration setting relates to your choice of Build and Release Agent. To provide the ability to carry out automated builds (and releases), VSTS/TFS requires a dedicated agent machine on which to execute all required tasks. There are two flavours of Build and Release Agents:

  • Microsoft Hosted: Self-explanatory – this is the most logical choice if your requirements are fairly standard, for example a simple build/release definition for an ASP.NET MVC application. Microsoft provides a range of Build and Release Agents, covering different operating systems and versions of Visual Studio.
  • Self-Hosted: In some cases, you may require access to highly bespoke modules or third-party applications to ensure that your build/release definitions complete successfully – a non-Microsoft PowerShell cmdlet library, for example. Or you may have strict business requirements around the storage of deployment resources. This is where Self-Hosted agents come into play. By installing the latest version of the Agent onto a computer of your choice – Windows, macOS or Linux – you can use this machine as part of your builds/releases within both VSTS and TFS. You can take this further by setting up as many Agent machines as you like and grouping them into a “pool”, thereby allowing concurrent jobs and enhanced scalability.

The necessary trade-off when using Self-Hosted agents is that you must manage and maintain the machine yourself – for example, you will need to install a valid version of Visual Studio and SQL Server Data Tools if you wish to build SQL Server Database projects. What's more, if issues start to occur, you are largely on your own when diagnosing and resolving them. One such problem concerns permissions on the build agent, with variants of the following error cropping up from time to time during your builds:

The error will most likely make an appearance if your Build and Release Agent goes offline, or a build is interrupted by an issue on the machine itself, leaving behind files that were created mid-flight within the VSTS directory. When VSTS/TFS then attempts a new build and tries to write to or recreate files that already exist, it fails, and the above error is displayed. I have observed that this happens even when the execution account on the Build Agent machine has sufficient privileges to overwrite files in the directory. The best resolution I have found – in all cases to date – is to log in to the agent machine manually, navigate to the affected directory/file (in this example, C:\VSTS\_work\SourceRootMapping\5dc5253c-785c-4de1-b722-e936d359879c\13\SourceFolder.json) and delete the file/folder in question. Removing the offending items effectively gives the next build definition execution a “clean slate”, and it should then complete without issue.
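If this becomes a recurring chore, the manual clean-up can be scripted. Below is a minimal C# sketch of the same step – the folder path is the hypothetical one from the example above, and it assumes the agent is offline and that the executing account has delete rights on the agent's work folder:

using System;
using System.IO;

namespace AgentWorkFolderCleanup
{
    class Program
    {
        static void Main()
        {
            //Substitute the directory/file reported in your own build error.
            string staleFolder = @"C:\VSTS\_work\SourceRootMapping\5dc5253c-785c-4de1-b722-e936d359879c\13";

            if (Directory.Exists(staleFolder))
            {
                //Recursively delete the folder and its contents so the next build starts with a "clean slate".
                Directory.Delete(staleFolder, recursive: true);
                Console.WriteLine("Removed stale folder: " + staleFolder);
            }
        }
    }
}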

We are regularly told these days of the numerous benefits that “going serverless” can bring to the table – reduced management overhead, reduced cost of ownership and faster adoption of newer technology, to name but a few. The great challenge is that, because no two businesses are the same, there is often still a requirement to boot up a virtual machine and run a specified app within a full server environment to achieve the functionality a particular business scenario demands. Self-Hosted agents are an excellent example of this concept in practice, and one that is hard to avoid using regularly, however vexing we may find it. While the ability to use Microsoft Hosted Build and Release Agents is greatly welcome (especially given there is no cost involved), it would be nice to see this “opened up” to allow additional tailoring of the Agent machine for specific situations. I'm not going to hold my breath in this regard though – if I were in Microsoft's shoes, I would shudder at the thought of allowing complete strangers to deploy and run custom libraries on my critical LOB application. It's probably asking for more trouble than the convenience it would provide 🙂

Typically, when working with Dynamics 365 for Customer Service entities, you expect a certain type of behaviour. A good example of this in practice is entity record activation and the differences between Active and Inactive record types. In simple terms, you are generally restricted in the actions that can be performed against an Inactive record – most commonly, the modification of a field value. You can, however, perform actions such as deleting records and reassigning them to other users in the application. The latter can be particularly useful if, for example, an individual leaves the business and you need to ensure that another employee has access to their old, inactive records.

When it comes to reporting on when a record was last changed, I often – rightly or wrongly – see the Modified On field used for this purpose. This, essentially, stores the date and time of when the record was…well…last modified in the system! Since only more recent changes to the application have provided an alternative means of reporting a record's age and its current stage within a process, it is perhaps understandable why this field is often chosen when attempting to report, for example, the date on which a record was moved to Inactive status. Where you may encounter issues is if you are working in a situation similar to the one highlighted above – namely, an individual leaving the business – and you need to reassign all of the inactive records they own. Doing so will immediately update the Modified On value to the current date and time, skewing any dependent reporting.
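To see the problem concretely, below is a minimal sketch of such a bulk reassignment using the SDK. The class/method names are hypothetical, the IOrganizationService is assumed to be already authenticated, and the state code filter assumes the standard Case values (0 = Active):

using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class CaseReassignment
{
    public static void ReassignInactiveCases(IOrganizationService service, Guid oldOwnerId, Guid newOwnerId)
    {
        //Retrieve every non-active Case owned by the departing user.
        QueryExpression query = new QueryExpression("incident")
        {
            ColumnSet = new ColumnSet("incidentid")
        };
        query.Criteria.AddCondition("ownerid", ConditionOperator.Equal, oldOwnerId);
        query.Criteria.AddCondition("statecode", ConditionOperator.NotEqual, 0); //0 = Active

        foreach (Entity incident in service.RetrieveMultiple(query).Entities)
        {
            //The reassignment succeeds against inactive records - but the platform
            //also stamps Modified On with the current date and time.
            service.Execute(new AssignRequest
            {
                Target = incident.ToEntityReference(),
                Assignee = new EntityReference("systemuser", newOwnerId)
            });
        }
    }
}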

Fortunately, there is a way around this, if you don't have any qualms about opening up Visual Studio and putting together a C# plug-in. Via this route, you can prevent the Modified On value of a record from being updated on reassignment: capture the original value and force the platform to commit it back to the database during the Pre-Operation stage of the transaction, instead of the date and time of the reassignment. In fact, using this method, you can set the Modified On value to whatever you want. Here's the code that illustrates how to achieve both scenarios:

using System;

using Microsoft.Xrm.Sdk;

namespace Sample.OverrideCaseModifiedDate
{
    public class PreOpCaseAssignOverrideModifiedDate : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            //Obtain the execution context from the service provider.
            IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

            //Extract the tracing service for use in debugging sandboxed plug-ins.
            ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

            tracingService.Trace("Tracing implemented successfully!");

            //"preincident" is the alias given to the Pre Entity Image when registering the step (see below).
            if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity
                && context.PreEntityImages.Contains("preincident"))
            {
                Entity incident = (Entity)context.InputParameters["Target"];
                Entity preIncident = context.PreEntityImages["preincident"];

                //At this stage, you can either get the previous Modified On date value via the Pre Entity Image and set the value accordingly...
                incident["modifiedon"] = preIncident.GetAttributeValue<DateTime>("modifiedon");

                //Or, alternatively, set it to whatever you want - in this example, we get the Pre Entity Image createdon value, add an hour and then set the value.
                //Note that, as written, this second assignment overwrites the first - in practice, keep only the option you need.
                DateTime createdOn = preIncident.GetAttributeValue<DateTime>("createdon");
                TimeSpan time = new TimeSpan(1, 0, 0);
                DateTime newModifiedOn = createdOn.Add(time);
                incident["modifiedon"] = newModifiedOn;
            }
        }
    }
}

When deploying your plug-in code out to the application (something that I hope regular readers of the blog will be familiar with), make sure that the settings you configure for your Step and the all-important Entity Image resemble the images below:

Now, the more eagle-eyed readers may notice that the step is configured on the Update message as opposed to the Assign message, which is what you (and, indeed, I, when I first started out with this) might expect. Unfortunately, the Input Parameters of the Assign message contain only two Entity Reference objects – for the incident (Case) and systemuser (User) records, respectively – rather than an Entity object for the incident, so we have no means of interfering with the underlying database transaction to override the required field values. The Update message does not suffer from this issue and, by scoping the plug-in's execution to the ownerid field only, we can ensure that it only ever triggers when a record is reassigned. The sketch below illustrates the constraint:
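The following is purely illustrative (the class name is hypothetical) – a plug-in registered on the Assign message would have only this to work with:

using System;

using Microsoft.Xrm.Sdk;

namespace Sample.OverrideCaseModifiedDate
{
    //Illustrative only: assumed to be registered on the Assign message of the incident entity.
    public class AssignMessageSketch : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

            //The Assign message supplies only EntityReference objects...
            EntityReference target = (EntityReference)context.InputParameters["Target"];     //...the Case being reassigned...
            EntityReference assignee = (EntityReference)context.InputParameters["Assignee"]; //...and its new owner.

            //An EntityReference carries just a logical name and an ID - there is no attribute
            //collection here into which an overriding modifiedon value could be written.
        }
    }
}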

With the above plug-in configured, you have the flexibility to reassign your Case records without updating the Modified On field value in the process, or to expand this further to suit whatever business requirement is needed. In theory, the approach used in this example could also be applied to other, what we might term, “system-defined” fields, such as the Created On field. Hopefully, this post will prove of some assistance if you find yourself having to tinker with inactive Case records in the future.

This is an accompanying blog post to my YouTube video Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Custom Workflow Assembly. The video is part of my tutorial series on how to accomplish developer focused tasks within Dynamics 365 Customer Engagement. You can watch the video in full below:

Below you will find links to access some of the resources discussed as part of the video and to further reading topics:

PowerPoint Presentation (click here to download)

Full Code Sample

using System;
using System.Activities;

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Workflow;
using Microsoft.Xrm.Sdk.Query;

namespace D365.SampleCWA
{
    public class CWA_CopyQuote : CodeActivity
    {
        protected override void Execute(CodeActivityContext context)
        {
            IWorkflowContext workflowContext = context.GetExtension<IWorkflowContext>();

            IOrganizationServiceFactory serviceFactory = context.GetExtension<IOrganizationServiceFactory>();
            IOrganizationService service = serviceFactory.CreateOrganizationService(workflowContext.UserId);

            ITracingService tracing = context.GetExtension<ITracingService>();

            tracing.Trace("Tracing implemented successfully!");

            //The Quote to copy is taken from the record the workflow is executing against.
            Guid quoteID = workflowContext.PrimaryEntityId;

            Entity quote = service.Retrieve("quote", quoteID, new ColumnSet("freightamount", "discountamount", "discountpercentage", "name", "pricelevelid", "customerid", "description"));

            //Clear the unique identifiers so that Create generates a brand-new Quote record.
            quote.Id = Guid.Empty;
            quote.Attributes.Remove("quoteid");

            quote.Attributes["name"] = "Copy of " + quote.GetAttributeValue<string>("name");
            Guid newQuoteID = service.Create(quote);

            EntityCollection quoteProducts = RetrieveRelatedQuoteProducts(service, quoteID);
            EntityCollection notes = RetrieveRelatedNotes(service, quoteID);

            tracing.Trace(quoteProducts.TotalRecordCount.ToString() + " Quote Product records returned.");

            foreach (Entity product in quoteProducts.Entities)
            {
                //Clear the IDs and re-point each copied Quote Product at the newly created Quote.
                product.Id = Guid.Empty;
                product.Attributes.Remove("quotedetailid");
                product.Attributes["quoteid"] = new EntityReference("quote", newQuoteID);
                service.Create(product);
            }
            foreach (Entity note in notes.Entities)
            {
                //Do the same for each attached note.
                note.Id = Guid.Empty;
                note.Attributes.Remove("annotationid");
                note.Attributes["objectid"] = new EntityReference("quote", newQuoteID);
                service.Create(note);
            }
        }

        [Input("Quote Record to Copy")]
        [ReferenceTarget("quote")]

        public InArgument<EntityReference> QuoteReference { get; set; }
        private static EntityCollection RetrieveRelatedQuoteProducts(IOrganizationService service, Guid quoteID)
        {
            QueryExpression query = new QueryExpression("quotedetail");
            query.ColumnSet.AllColumns = true;
            query.Criteria.AddCondition("quoteid", ConditionOperator.Equal, quoteID);
            query.PageInfo.ReturnTotalRecordCount = true;

            return service.RetrieveMultiple(query);
        }
        private static EntityCollection RetrieveRelatedNotes(IOrganizationService service, Guid objectID)
        {
            QueryExpression query = new QueryExpression("annotation");
            query.ColumnSet.AllColumns = true;
            query.Criteria.AddCondition("objectid", ConditionOperator.Equal, objectID);
            query.PageInfo.ReturnTotalRecordCount = true;

            return service.RetrieveMultiple(query);
        }
    }
}
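One thing to note about the sample: the Quote to copy is read from the workflow context's PrimaryEntityId, so the QuoteReference input argument declared above is never actually consumed. If you would prefer the record to be selectable from the workflow designer, a minimal sketch of reading the argument inside Execute (replacing the PrimaryEntityId line) would be:

            //Read the Quote to copy from the input argument instead of the workflow context.
            EntityReference quoteRef = QuoteReference.Get(context);
            Guid quoteID = quoteRef.Id;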

Download/Resource Links

Visual Studio 2017 Community Edition

Set up a free 30-day trial of Dynamics 365 Customer Engagement

C# Guide (Microsoft Docs)

Source Code Management Solutions

Further Reading

Microsoft Docs – Create a custom workflow activity

MSDN – Register and use a custom workflow activity assembly

MSDN – Update a custom workflow activity using assembly versioning (This topic wasn’t covered as part of the video, but I would recommend reading this article if you are developing an ISV solution involving custom workflow assemblies)

MSDN – Sample: Create a custom workflow activity

You can also check out some of my previous blog posts relating to Workflows:

  • Implementing Tracing in your CRM Plug-ins – We saw as part of the video how to utilise tracing, but this post goes into more detail about the subject, as well as providing instructions on how to enable the feature within the application (in case you are wondering why nothing is being written to the trace log 🙂 ). All code examples are for Plug-ins, but they can easily be repurposed to work with a custom workflow assembly instead.
  • Obtaining the User who executed a Workflow in Dynamics 365 for Customer Engagement (C# Workflow Activity) – You may have a requirement to trigger certain actions within the application, based on the user who executed a Workflow. This post walks through how to achieve this utilising a custom workflow assembly.

If you have found the above video useful and are itching to learn more about Dynamics 365 Customer Engagement development, then be sure to take a look at my previous videos/blog posts using the links below:

Have a question or an issue when working through the code samples? Be sure to leave a comment below or contact me directly, and I will do my best to help. Thanks for reading and watching!

When working with web applications and Azure App Service, it may sometimes be necessary to delete or remove files from a website. Whether it is a deprecated feature or a bit of development “junk” that was accidentally left on your website, these files can often introduce processing overhead or even security vulnerabilities if left unattended. It is, therefore, good practice to ensure that these are removed during a deployment. Fortunately, this is where tools such as Web Deploy really come into their own. When using this from within Visual Studio via the Publish dialog box, you can opt to remove any file that does not exist within your project via the File Publish Options section on the Settings tab:

There is also the Exclude files from the App_Data folder setting, which has a bearing on how the above operates, but we’ll come back to that shortly…

Whilst this feature is useful if you are deploying to a dev/test environment manually from Visual Studio, it is less so if you have implemented an automated release strategy via a tool such as Visual Studio Team Services (the cloud version of Team Foundation Server). This is where all steps of a deployment are programmatically defined and then re-used whenever a new release to an environment needs to be performed. Typically, this may involve some coding as part of a PowerShell script or similar, but what makes Team Services really effective is the ability to “drag and drop” common deployment actions that cover most release scenarios. Case in point – a specific task is provided out of the box to handle deployments to Azure App Service:

What’s even better is the fact that we have the same option at our disposal à la Visual Studio – although you would be forgiven for overlooking it given how neatly the settings are tucked away 🙂

Note that you have to specifically enable the Publish using Web Deploy option for the Remove additional files at destination option to appear. It's important, therefore, that you fully understand how Web Deploy works in comparison to other options, such as FTP deployment. I think you will find, though, that the list of benefits far outweighs any negatives. In fact, the only drawback of using this option is that you must be using Microsoft-“approved” tools, such as Visual Studio, to facilitate it.

We saw earlier in this post the Exclude files from the App_Data folder setting. Typically, you may be using this folder as some form of local file store for configuration data or similar. It is also the location where any WebJobs configured for your website are stored. Without the Exclude files from the App_Data folder setting enabled, you may assume that Web Deploy will indiscriminately delete all files residing within the App_Data directory. Luckily, the automated task instead throws an error if it detects any files within the directory that would be affected by the delete operation:

Helpful in the sense that the task is not deleting potentially important files; frustrating in that the deployment will not complete successfully! As you may have already guessed, enabling the Exclude files from the App_Data folder setting in Visual Studio/on the Team Services task gets around this issue:

You can then sit back and verify as part of the Team Services Logs that any file not in your source project is deleted successfully during deployment:

Manual deployments to Production systems can be fraught with hidden dangers – the potential for an accidental action chief among them, but also the risk of outdated components of a system not being flagged and removed as part of a release cycle. Automating deployments goes a long way towards taking human error out of the equation and, with the inclusion of this handy feature to remove files not within your source code during the deployment, also negates the need for any manual intervention afterwards to rectify potential issues. If you are still toying with introducing fully automated deployments within your environment, then I would urge you wholeheartedly to commit the effort towards achieving this outcome. Get in touch if you need any help with this, and I would be happy to lend some assistance 🙂

This is an accompanying blog post to my YouTube video Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Jscript Form Function, the first in a series that aims to provide tutorials on how to accomplish developer focused tasks within Dynamics 365 Customer Engagement. You can watch the video in full below:

Below you will find links to access some of the resources discussed as part of the video and to further reading topics.

PowerPoint Presentation (click here to download)

Full Code Sample

function changeAddressLabels() {

    //Get the control for each part of the composite address field and set its label to the correct, Anglicised form.
    //Each call passes the current control name to 'getControl' and the updated label to 'setLabel'.

    Xrm.Page.getControl("address1_composite_compositionLinkControl_address1_line1").setLabel("Address 1");
    Xrm.Page.getControl("address1_composite_compositionLinkControl_address1_line2").setLabel("Address 2");
    Xrm.Page.getControl("address1_composite_compositionLinkControl_address1_line3").setLabel("Address 3");
    Xrm.Page.getControl("address1_composite_compositionLinkControl_address1_city").setLabel("Town");
    Xrm.Page.getControl("address1_composite_compositionLinkControl_address1_stateorprovince").setLabel("County");
    Xrm.Page.getControl("address1_composite_compositionLinkControl_address1_postalcode").setLabel("Postal Code");
    Xrm.Page.getControl("address1_composite_compositionLinkControl_address1_country").setLabel("Country");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line1"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line1").setLabel("Address 1");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line2"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line2").setLabel("Address 2");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line3"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_line3").setLabel("Address 3");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_city"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_city").setLabel("Town");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_stateorprovince"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_stateorprovince").setLabel("County");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_postalcode"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_postalcode").setLabel("Postal Code");

    if (Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_country"))
        Xrm.Page.getControl("address2_composite_compositionLinkControl_address2_country").setLabel("Country");
}

Download/Resource Links

Visual Studio 2017 Community Edition

Set up a free 30-day trial of Dynamics 365 Customer Engagement

W3 Schools JavaScript Tutorials

Source Code Management Solutions

Further Reading

MSDN – Use JavaScript with Microsoft Dynamics 365

MSDN – Use the Xrm.Page object model

MSDN – Xrm.Page.ui control object

MSDN – Overview of Web Resources

Debugging custom JavaScript code in CRM using browser developer tools (steps are for Dynamics CRM 2016, but still apply for Dynamics 365 Customer Engagement)

Have any thoughts or comments on the video? I would love to hear from you! I’m also keen to hear any ideas for future video content as well. Let me know by leaving a comment below or in the video above.