Towards the back end of last year, I discovered the joys and simplicity of Visual Studio Team Services (VSTS)/Azure DevOps. Regardless of the type of development workload you face, the service provides a whole range of features that can speed up development, automate important build/release tasks and assist with any testing workloads you may have. Microsoft has devoted a lot of attention to ensuring that its traditional development tools/languages, and newer ones such as Azure Data Factory V2, are fully integrated with VSTS/Azure DevOps. And, even if you find yourself sitting firmly outside the Microsoft ecosystem, there is a whole range of connectors to enable you to, for example, leverage Jenkins automation tasks for an Amazon Web Services (AWS) deployment. As with a lot of things to do with Microsoft today, you could not have predicted such extensive third-party support for a core Microsoft application 10 years ago. 🙂

Automated release deployments are perhaps the most useful feature you can leverage within VSTS/Azure DevOps. They address several business concerns that apply to organisations of any size:

  • Removes human intervention as part of repeatable business processes, reducing the risk of errors and streamlining internal processes.
  • Allows for clearly defined, auditable approval cycles for releases.
  • Provides developers with the flexibility to structure deployments around any number of predefined release environments.

You may be questioning at this stage just how complicated such processes are to implement, versus the benefits they are expected to deliver. Fortunately, when it comes to Azure App Service deployments at least, there are predefined templates provided that should be suitable for most basic deployment scenarios:

This template implements a single task that deploys to an Azure App Service resource. All you need to do is populate the required details for your subscription, App Service name etc. and you are ready to go! Optionally, if you are working with a Standard App Service Plan or above, you can take advantage of deployment slots to stage your deployments before they impact any live instance. To set this up, specify the name of the slot you wish to deploy to as part of the Azure App Service Deploy task and then add an Azure App Service Manage task to carry out the slot swap – with no coding required at any stage:

There may also be some additional options that need configuring as part of the Azure App Service Deploy task:

  • Publish using Web Deploy: I would recommend always leaving this enabled when deploying to Azure Web Apps, given the functionality afforded to us via the Kudu Web API.
  • Remove additional files at destination: Potentially not a safe option if you regularly need to interact with your App Service folders manually; in most cases, though, leaving this enabled ensures that your target environment remains consistent with your source project.
  • Exclude files from the App_Data folder: Depending on what your web application does, the App_Data folder may contain data that the site relies on. Enabling this prevents those files from being removed in tandem with the previous setting.
  • Take App Offline: Slightly misleading in that, rather than stopping your App Service, it places a temporary file (app_offline.htm) in the root directory that tells all website visitors that the application is offline. This temporary page uses one of the default templates that Azure provides for App Service error messages.

This last setting can cause some problems, particularly when it comes to working with .NET Core web applications:


Note also that the problem does not occur when working with a standard .NET Framework MVC application, which suggests it is a symptom specific to .NET Core. It also occurs when utilising slot deployments, as indicated above.

To get around this problem, we must address the issue that the Error Code points us toward – namely, that the web application files within the target location cannot be accessed/overwritten, because they are actively in use by the running application. The cleanest way of fixing this is to take your application entirely offline by stopping the App Service instance, with the obvious trade-off being downtime for your application. This scenario is where the slot deployment functionality goes from being an optional, yet useful, feature to a wholly mandatory one. With this enabled (and the matching credit card limit to accommodate it), it is possible to implement the following deployment process:

  • Create a staging slot on Azure for your App Service, with a name of your choosing
  • Structure a deployment task that carries out the following steps, in order:
    • Stop the staging slot on your chosen App Service instance.
    • Deploy your web application to the staging slot.
    • Start the staging slot on your chosen App Service instance.
    • (Optionally) Execute any steps/logic the application requires (e.g. compile MVC application views, execute a WebJob etc.)
    • Perform a Swap Slot, promoting the staging slot to your live, production instance.
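If you prefer to script the slot operations yourself rather than relying solely on the built-in release tasks, the same sequence can be sketched out with the AzureRM PowerShell cmdlets. This is an illustrative sketch only – the resource group, App Service and slot names are placeholders, and the actual file deployment is still assumed to be handled by the Azure App Service Deploy task:

param($rgName, $appName, $slotName)

# Stop the staging slot first, so that locked files do not block the deployment.
Stop-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -Slot $slotName

# The web application itself is deployed to the slot at this point (e.g. by the Azure App Service Deploy task).

# Start the staging slot again before promoting it.
Start-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -Slot $slotName

# Promote the staging slot to the live production instance.
Switch-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -SourceSlotName $slotName -DestinationSlotName 'production'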

The below screenshot, derived from VSTS/Azure DevOps, shows how this task list should be structured:

Even with this in place, I would still recommend taking your web application “offline” in the most minimal sense of the word. The Take App Offline option is by far the easiest way of achieving this; for something more tailored, you would need to look at a dedicated landing page that users are redirected to during any update/maintenance cycle, providing them with the necessary notification.

Setting up your first deployment pipeline(s) can throw up all sorts of issues that lead you down the rabbit hole. While this can be discouraging and lead to wasted time, the end result, after a little perseverance, is a highly scalable solution that can save many sleepless nights when it comes to managing application updates. And, as this example so neatly demonstrates, solutions to these problems often do not require much, if any, code to implement – merely some drag & drop wizardry and some fiddling about with deployment task settings. I don’t know about you, but I am pretty happy with that state of affairs. 🙂

The importance of segregated deployment environments for any IT application cannot be overstated. Even if this comprises only a single test/User Acceptance Testing (UAT) environment, there is a plethora of benefits involved, which should easily outweigh any administrative or time effort required:

  • They provide a safe “sandbox” in which functionality or development changes can be carried out in sequence and verified against the intended outcome.
  • They enable administrators to carry out the above within regular business hours, without risk of disruption to live data or processes.
  • For systems that experience frequent updates/upgrades, these types of environments allow for major releases to be tested in advance of a production environment upgrade.

When it comes to Dynamics 365 Customer Engagement (D365CE), Microsoft has more recently recognised the importance of dedicated test environments for its online customers. That’s why, with the changes announced during the transition away from Dynamics CRM, a free Sandbox instance was granted with every subscription. The cost of provisioning these can add up significantly over time, so this change was and remains most welcome; not least because it removes any excuse for skipping the kind of pre-production release testing highlighted above.

Anyone presently working in the D365CE space will be acutely aware of the impending forced upgrade to version 9 of the application, which must take place within the next few months. Although this will doubtless cause problems for some organisations, it is understandable why Microsoft is enforcing it so stringently; if managed correctly, it allows for a more seamless update experience in the future, getting new features into the hands of customers much faster. This change can only be a good thing, as I have argued previously on the blog. As a consequence, many businesses may currently have a version 9 Sandbox instance within their Office 365 tenant which they are using to carry out the types of tests I have referenced already, typically involving custom-developed or ISV Managed/Unmanaged Solutions. One immediate issue you may find if you are working with solutions containing the Marketing List (list) entity is that your Solution suddenly stops importing successfully, with the following error messages:

The error message on the solution import is a little vague…

…and the brain starts hurting when we look at the underlying error text!

The problem relates to changes to the Marketing List entity in version 9 of the application, specifically the Entity property IsVisibleInMobileClient. This assertion is confirmed by comparing the Value and CanBeChanged properties of this setting in both an 8.2 and a 9.0 instance, using the Metadata Browser tool included in the SDK:

Here’s the setting in Version 8.2…

…and how it appears now in Version 9.0
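If you would rather confirm this programmatically than via the Metadata Browser, a quick PowerShell sketch along the following lines should do the job – assuming the Microsoft.Xrm.Tooling.CrmConnector.PowerShell module is available to establish the connection:

# Connect interactively to the target instance (run once against 8.2 and once against 9.0 to compare).
$conn = Get-CrmConnection -InteractiveMode

# Retrieve the entity-level metadata for the Marketing List (list) entity.
$request = New-Object Microsoft.Xrm.Sdk.Messages.RetrieveEntityRequest
$request.LogicalName = 'list'
$request.EntityFilters = [Microsoft.Xrm.Sdk.Metadata.EntityFilters]::Entity
$response = $conn.Execute($request)

# Inspect the managed property - Value differs between versions, and CanBeChanged is false.
$response.EntityMetadata.IsVisibleInMobileClient | Select-Object Value, CanBeChanged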

Microsoft has published a support article which lists the steps involved in resolving this for every new solution file that you find yourself having to ship between 8.2 and 9.0 environments. Be careful when following the suggested workaround, as modifications to the Solution file should always be considered a last-resort means of fixing issues and can end up causing you no end of hassle if applied incorrectly. There is also no way of fixing this issue at source, thanks to the CanBeChanged property being set to false (this may be a possible route of resolution if you are having the same problem with another Entity that has this value set to true, such as a custom entity). Although Microsoft promises a fix for this issue, I wouldn’t necessarily expect 8.2 environments to be specially patched to resolve this particular problem. Instead, I would take the fix to mean the forced upgrade of all 8.2 environments to version 9.0/9.1 within the next few months. Indeed, this would seem to be the most business-sensible decision, rather than prioritising a specific fix for an issue that is only going to affect a small number of deployments.

Earlier this month, a colleague escalated an issue to me involving Dynamics CRM/365 Customer Engagement (CRM/D365CE), specifically relating to email tracking. This feature is by far one of the most useful within the application – and one of the most unwieldy if not configured correctly. In days of yore, the setup steps involved could be tedious to implement, particularly if you were operating within the confines of a hybrid environment (for example, CRM 2015 on-premises and Exchange Server Online). Or, you could have been one of the handful of unfortunate individuals on the planet today who had to rely on the abomination that is the Email Router. We can be thankful today that Server-Side Synchronization is the sole method for pulling in emails from any manner of SMTP or POP3 mail servers; although note that only Exchange mailboxes support Appointment, Contact & Task synchronisation. Lucky though we are to be living in more enlightened times, careful attention to and management of Server-Side Synchronization deployments is still an ongoing requirement. This is primarily to ensure all mailboxes operate as intended and – most critically – that only the most relevant emails are tagged back into the application, rather than a deluge of unrelated correspondence.

Going back to the issue mentioned at the start of this post – the user in question was having a problem with certain emails not synchronising automatically back into the application, even though the senders had corresponding Contact records within CRM/D365CE. We were also able to observe that other emails sent from the user to the Contact record(s) in question were being tagged back without issue. When first diagnosing problems like this, you would be forgiven for making a beeline straight to the user’s Mailbox record within the application to verify that:

  • The Mailbox is enabled for Server-Side Synchronization for Incoming/Outgoing Email.
  • No processing errors are occurring that could be preventing emails from being successfully handled by the application.

These options can be accessed from the System Settings area of the application, on the Email tab, and define the default settings for all newly created users.

Likewise, these details are accessible from the Mailbox record for the user concerned.

Although these are unlikely, more often than not, to be the cause of any mail flow issues, it is worthwhile not to overcomplicate a technical issue at the first juncture by overlooking anything obvious. 🙂

As we can see in this example, there are no problems with the over-arching Server-Side Synchronization configuration, nor are there any problems with the individual mailbox. It is at this point that we must refer to a page that all users of the application can access via the gear icon at the top of the screen – the User Options screen:

The Track option allows users to specify how CRM/D365CE handles automatic email tracking, based on four options:

  • All Email Messages: Does exactly what it says on the tin, and is not recommended as a default, for the reasons alluded to earlier.
  • Email messages in response to Dynamics 365 Email: Only emails sent from within Dynamics 365 (or tracked accordingly via Outlook) will be stored in the application, alongside any replies that are received.
  • Email messages from Dynamics 365 Leads, Contacts and Accounts: Only emails which match back to the record types listed, based on email address, will be stored within the application.
  • Email messages from Dynamics 365 records that are email enabled: The same as the previous option, but expanded out to include all record types that are configured with the Sending email… option on the Entity configuration page.

For the user who was having email tracking issues, the default setting specified was Email messages in response to Dynamics 365 Email. So, to resolve the issue, the user needs to update their setting to either the third or fourth option.

Any situation that involves detailed, technical configuration by end users is generally one that I like to avoid – for a few simple, business-relevant reasons:

  • IT/Technical teams should be the ones making configuration changes to applications, not end users who have not had training or experience on the steps they are being asked to follow.
  • End users are busy, and it is essential that we are conscious of their time, keeping any interaction short and positive as opposed to long and arduous.
  • If the above instructions are relayed over the telephone, as opposed to in-person, then the propensity for mistakes to occur rises significantly.

However, from what we have seen so far, it will be necessary to access the application as the user to make the change – either by taking control of their session or by (perish the thought) relaying user credentials to enable someone in IT support to make the configuration change. Don’t EVER do the latter, by the way! Fortunately, there is a better way of updating user profile settings, using a tool whose importance has been highlighted in no uncertain terms previously on the blog – the XrmToolBox. One of the handiest out-of-the-box tools that this provides is the User Settings Utility which…well…see for yourself:

As a consequence, application administrators can “magically” modify any of the settings contained within the User Options page, including – as we can see below – the Track email messages setting:

With a few clicks, the appropriate changes can be applied not just to a single user, but to everyone within the application – avoiding any potential end-user confusion and making our jobs easier. This simple fact is another reason why you should immediately launch the XrmToolBox whenever you find yourself with a CRM/D365CE issue that stumps you and why the community tools available for the application are top-notch.
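If you would rather script this kind of change instead, the same underlying field – incomingemailfilteringmethod on the usersettings entity – can be updated via PowerShell. The sketch below assumes the community Microsoft.Xrm.Data.PowerShell module and a placeholder user GUID; the option set values shown are the ones I believe map to each Track choice, so verify them against your own instance before running anything in anger:

# Connect interactively to the target instance.
$conn = Get-CrmConnection -InteractiveMode

# incomingemailfilteringmethod: 0 = All Email Messages, 1 = In response to Dynamics 365 Email,
# 2 = Dynamics 365 Leads, Contacts and Accounts, 3 = Email-enabled Dynamics 365 records.
$userId = '<GUID of the affected systemuser>'   # the usersettings record shares the user's Id
$trackOption = New-Object Microsoft.Xrm.Sdk.OptionSetValue -ArgumentList 3

Set-CrmRecord -conn $conn -EntityLogicalName 'usersettings' -Id $userId -Fields @{ incomingemailfilteringmethod = $trackOption }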

The introduction of Azure Data Factory V2 represents the most opportune moment for data integration specialists to start investing their time in the product. Version 1 (V1) of the tool, which I started taking a look at last year, lacked a lot of critical functionality that – in most typical circumstances – I could implement in a matter of minutes via a SQL Server Integration Services (SSIS) DTSX package. The product had, to quote some specific examples:

  • No support for control flow logic (foreach loops, if statements etc.)
  • Support for only “basic” data movement activities, with a minimal capability to perform or call data transformation activities from within SQL Server.
  • Some support for the deployment of DTSX packages, but with incredibly complex deployment options.
  • Little or no support for Integrated Development Environments (IDEs), such as Visual Studio, or other typical DevOps scenarios.

In what seems like a short space of time, the product has come on leaps and bounds to address these limitations:

Supported now are Filter and Until conditions, alongside the expected ForEach and If conditionals.

When connecting to SQL Data destinations, we now have the ability to execute pre-copy scripts.

SSIS Integration Runtimes can now be set up from within the Azure Data Factory V2 GUI – no need to revert to PowerShell.

And finally, there is full support for storing all created resources within GitHub or Visual Studio Team Services/Azure DevOps.

The final one is a particularly nice touch, meaning that you can incorporate Azure Data Factory V2 into your DevOps strategy with minimal effort – an ARM Resource Template deployment, containing all of your data factory components, will get your resources deployed out to new environments with ease. What’s even better is that this deployment template is intelligent enough not to recreate existing resources, only updating Data Factory resources that have changed. Very nice.
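As a flavour of this, a deployment of the exported Data Factory ARM template from PowerShell might look something like the sketch below – the resource group and file names here are hypothetical:

# Deploy the exported Data Factory ARM template in incremental mode, so existing, unchanged resources are left alone.
New-AzureRmResourceGroupDeployment -ResourceGroupName 'MyDataFactoryRG' `
    -TemplateFile '.\arm_template.json' `
    -TemplateParameterFile '.\arm_template_parameters.json' `
    -Mode Incremental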

Although Azure Data Factory V2 provides a lot to assist with a typical DevOps cycle, there is one thing the tool does not account for satisfactorily.

A critical aspect of any Azure Data Factory V2 deployment is the implementation of Triggers. These define the circumstances under which your pipelines will execute, typically either via an external event or based on a pre-defined schedule. Once activated, they effectively enter a “read-only” state, meaning that any changes made to them via a Resource Group Deployment will be blocked and the deployment will fail – as we can see below when running the New-AzureRmResourceGroupDeployment cmdlet directly from PowerShell:

It’s nice that the error is provided in JSON, as this can help to facilitate any error handling within your scripts.

The solution is simple – stop the Trigger programmatically as part of your DevOps execution cycle via the handy Stop-AzureRmDataFactoryV2Trigger cmdlet. This is a single line of PowerShell that can be called from an Azure PowerShell task. But what happens if you are deploying your Azure Data Factory V2 template for the first time?

I’m sorry, but your Trigger is another castle.

The best (and only) way around this little niggle is to construct a script that checks whether the Trigger exists before attempting to stop it, and skips this step if it doesn’t yet exist. The following parameterised PowerShell script meets these requirements by attempting to stop the Trigger called ‘MyDataFactoryTrigger’:

param($rgName, $dfName)

Try
{
    Write-Host "Attempting to stop MyDataFactoryTrigger Data Factory Trigger..."

    # Confirm the Trigger exists before stopping it; -ErrorAction Stop ensures a missing Trigger lands in the Catch block.
    Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -ErrorAction Stop | Out-Null
    Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -Force
    Write-Host -ForegroundColor Green "Trigger stopped successfully!"
}
Catch
{
    $errorMessage = $_.Exception.Message
    if ($errorMessage -like '*NotFound*')
    {
        # The Trigger has not been created yet - most likely a first-time deployment - so there is nothing to stop.
        Write-Host -ForegroundColor Yellow "Data Factory Trigger does not exist, probably because the script is being executed for the first time. Skipping..."
    }
    else
    {
        throw "An error occurred whilst retrieving the MyDataFactoryTrigger trigger."
    }
}

Write-Host "Script has finished executing."

To use this successfully within Azure DevOps, be sure to provide values for the parameters in the Script Arguments field:

You can use pipeline Variables within arguments, which is useful if you reference the same value multiple times across your tasks.
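For example, the Script Arguments value might look something like this (the variable names are hypothetical):

-rgName "$(ResourceGroupName)" -dfName "$(DataFactoryName)"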

With some nifty copy + paste action, you can accommodate the stopping of multiple Triggers as well – although if you have more than 3-4, then it may be more sensible to perform some iteration involving an array containing all of your Triggers, passed at runtime.
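As a rough sketch of that iterative approach – here simply retrieving every Trigger in the Data Factory at runtime rather than passing a predefined array of names, and assuming the returned trigger objects expose a Name and RuntimeState – something like the following should work:

param($rgName, $dfName)

# Retrieve every Trigger in the Data Factory and stop any that are currently started.
$triggers = Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName

foreach ($trigger in $triggers)
{
    if ($trigger.RuntimeState -eq 'Started')
    {
        Write-Host "Stopping trigger $($trigger.Name)..."
        Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName $trigger.Name -Force
    }
}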

For completeness, you will also want to ensure that you restart the affected Triggers after any ARM Template deployment. The following PowerShell script will achieve this outcome:

param($rgName, $dfName)

Try
{
    Write-Host "Attempting to start MyDataFactoryTrigger Data Factory Trigger..."
    Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -Force -ErrorAction Stop
    Write-Host -ForegroundColor Green "Trigger started successfully!"
}
Catch
{
    throw "An error occurred whilst starting the MyDataFactoryTrigger trigger."
}

Write-Host "Script has finished executing."

The Azure Data Factory V2 offering has no doubt come on leaps and bounds in a short space of time…

…but you can’t shake the feeling that there is a lot that still needs to be done. The current release, granted, feels very stable and production-ready, but I think there is a whole range of enhancements that could be introduced to allow better feature parity with SSIS DTSX packages. With these in place, and taking into account the very significant cost difference between the two offerings, Azure Data Factory V2 would become a no-brainer option for almost every data integration scenario. The future looks very bright indeed 🙂

The very nature of how businesses and organisations operate means that the sheer volume of sensitive or confidential data they accumulate over time presents a genuine challenge from a management and security point of view. Tools and applications like cloud storage, email and other information storage services can do great things; but on occasions where these are abused, such as when an employee emails a list of business contacts to a personal email address, the penalties can extend well beyond a loss of reputation. Even more so with the introduction of GDPR earlier this year, there is now a clear and present danger that such actions could result in unwelcome scrutiny and, for larger organisations, a fine for simply not putting the appropriate technical safeguards in place. Being able to proactively – and straightforwardly – identify & classify information types and enforce some level of control over their dissemination, while not a silver bullet in any respect, does at least demonstrate an adherence to the “appropriate technical controls” principle that GDPR in particular likes to focus on.

Azure Information Protection (AIP) seeks to address these challenges in the modern era by providing system administrators with a toolbox to enforce good internal governance and controls over documents, based on a clear classification system. The solution integrates nicely with Azure and also with any on-premises environment, meaning that you don’t necessarily have to migrate your existing workloads into Office 365 to take full advantage of the service. It also offers:

  • Users the ability to track any sent document(s), find out where (and when) they have been read and revoke access at any time.
  • Full integration with the current range of Office 365 desktop clients.
  • The capability to protect non-Office documents, such as PDFs, and require users to open them via a dedicated client, which checks their relevant permissions before granting access.
  • Automation capabilities via PowerShell to bulk label existing document stores, based on parameters such as file names or the contents of the data (for example, mark as highly sensitive any document which contains a National Insurance Number) – see the sketch below for a flavour of this.
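The AIP client ships with a PowerShell module (AzureInformationProtection) that exposes cmdlets such as Get-AIPFileStatus and Set-AIPFileLabel for exactly this purpose. A minimal sketch, assuming a hypothetical file share and a label ID taken from your own tenant, might look like this:

# Apply a (hypothetical) "Highly Confidential" label to every Office document in a share.
$labelId = '<GUID of the label, as shown in the Azure portal>'

Get-ChildItem '\\fileserver\finance' -File -Recurse |
    Where-Object { $_.Extension -in '.docx', '.xlsx', '.pptx' } |
    Set-AIPFileLabel -LabelId $labelId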

Overall, I have found AIP to be a sensible and highly intuitive solution, but one that requires careful planning and configuration to realise its benefits fully. Even with this taken for granted, there is no reason why any business/organisation cannot utilise the service successfully.

However, if you are a small to medium size organisation, you may find that the Azure Information Protection offering has traditionally lacked some critical functionality.

You can see what I mean by performing a direct comparison between two flavours of Office 365 deployment – Office Business Premium (“BizPrem”) and Office Professional Plus (“ProPlus”). For those who are unaware of the differences:

  • Office Business Premium is the version of the Office apps that you can download with a…you guessed it…Office Business Premium subscription. This product/SKU represents the optimal choice if you are dealing with a deployment that contains fewer than 250 users and you want to fully utilise the whole range of features included in Office 365.
  • Office Professional Plus is the edition bundled with the following, generally more expensive subscriptions:
    • Office 365 Education A1*
    • Office 365 Enterprise E3
    • Office 365 Government G3
    • Office 365 Enterprise E4
    • Office 365 Education A4*
    • Office 365 Government G4
    • Office 365 Enterprise E5
    • Office 365 Education A5*

    For the extra price, you get a range of additional features that may be useful for large-scale IT deployments. This includes, but is not limited to, Shared Computer Activation, support for virtualised environments, group policy support and – as has traditionally been the case – an enhanced experience while using AIP.

* In fact, these subscriptions will generally be the cheapest going on Office 365, with the very notable caveat that you have to be a qualifying educational institution to purchase them. So no cheating, I’m afraid 🙂

The salient point is that both of these Office versions support the AIP Client, the desktop application that provides the following additional button within your Office applications:

The Protect button will appear on the ribbon of Word, Excel and Outlook.

The above example, taken from an Office Business Premium deployment, differs when compared to Office Professional Plus:

Have you spotted it yet?

As mentioned in the introduction, the ability for users to explicitly set permissions on a per-document basis can be incredibly useful, but it is one that has traditionally been missing entirely from non-ProPlus subscriptions such as Office Business Premium. This limitation means that users have lacked the ability to:

  • Specify (and override) the access permissions for the document – Viewer, Reviewer, Co-Author etc.
  • Assign permissions to individual users, domains or distribution/security groups.
  • Define a specified date when all access permissions will expire.

You will still be able to define organisation-level policies that determine how documents can be shared, based on the label a user applies, but you lose the high degree of personal autonomy that the solution can offer users, which – arguably – can be an essential factor in ensuring the success of an AIP deployment.

Well, the good news is that all of this is about to change, thanks to the September 2018 General Availability wave for Azure Information Protection.

This “by design” behaviour has, understandably, been a source of frustration for many but, thanks to a UserVoice suggestion, will no longer be a concern:

In the coming AIP September GA we will update the Office client requirement with the following:

“Office 365 with Office 2016 apps (minimum version 1805, build 9330.2078) when the user is assigned a license for Azure Rights Management (also known as Azure Information Protection for Office 365)”

This will allow the support of the AIP client to use protection labels in other Office subscriptions which are not ProPlus. This will require the use of Office clients which are newer then the version mentioned above and the end user should be assigned with the proper licence.

The introduction of this change was confirmed by the release of version 1.37.19.0 of the AIP client on Monday this week and is a very welcome addition to the client. Be aware, though, of the requirement to be on at least the May 2018 build of the Office 2016 apps to take advantage of this new functionality. Once that hurdle is overcome, this change suddenly makes the AIP offering a lot more powerful for existing small business users and a much easier sell for those who are contemplating adopting the product but cannot tolerate the cost burden associated with an Enterprise subscription. Microsoft is continually endeavouring to ensure a consistent “feedback loop” is provided for all users and customers to offer product improvement suggestions, and it is great to see this working in practice with AIP as our example. Now is as good a time as any to evaluate AIP if you haven’t before.