Software deployments and updates have always been a painful event for me. This feeling hasn’t subsided over time, even though pretty much every development project I am involved with these days utilises Azure DevOps Release Pipelines to automate the whole process for our team. The key thing to stress about automation is that it does not mean all of your software deployments suddenly become successful just because you have removed the human error aspect from the equation. In most cases, all you have done is reduce the number of times you will have to stick your nose in to figure out what’s gone wrong 🙂

Database upgrades, which are typically done via a Data-tier Application Package (DACPAC) deployment, can be the most nerve-racking of all. Careful consideration needs to go into the types of settings you define as part of your publish profile XML, as otherwise you may find that either a) specific database changes are blocked entirely, due to dependency issues or because intended data loss will occur, or b) the types of changes you make result in unintended data loss. This last one is a particularly salient concern, and one which can best be caught ahead of time by implementing staging or pre-production environments for your business systems. Despite the thought that needs factoring in before you can take advantage of DACPAC deployments, they represent the natural option of choice for managing your database upgrades more formally, particularly when there is a need to manage deployments into Azure SQL databases. This state of play is mostly thanks to the dedicated task that handles this all for us within Azure DevOps:

What this feature doesn’t make available to us are any steps we may need to take to generate a snapshot of the database before the deployment begins, a phase which is both a desirable and, in many cases, necessary business requirement for software deployments. Now, I should point out that Azure SQL includes many built-in recovery and point-in-time restore options. These are pretty extensive and enable you, depending on the database tier you have opted for, to restore your database to any single point in time within a 30-day period. The question that therefore arises is fairly obvious – why go to the extra bother (and cost) of creating a separate database backup? Consider the following:

  • The recovery time for a point-in-time restore can vary greatly, depending on several factors relating to your database size, current pricing tier and any transactions that may be running on the database itself. In situations where a short release window constrains you and your release must satisfy a strict success/fail condition, having to go through the restore process (sketched after this list) after a database upgrade could lead to your application being down longer than is mandated within your organisation. Having a previous version of the database already available means you can very quickly update your application connection strings to return the system to operational use if required.
  • Having a replica copy of the database available directly after an upgrade can be incredibly useful if you need to reference data within the old version of the database post-upgrade. For example, a column may have been removed from one table and added to another, with the need to copy across all of this data accordingly. Although a point-in-time restore can be done to expose this information, having a backup of the old version of the database available straight after the upgrade helps to expedite this work.
  • Although Microsoft promise and provide an SLA for point-in-time restore, it’s sometimes best to err on the side of caution. 🙂 By taking a snapshot of the database yourself, you have full control over its hosting location and the peace of mind of knowing that the database is instantly accessible in case of an issue further down the line.
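
For reference, the point-in-time restore route mentioned above looks roughly like the following in Azure PowerShell. This is a minimal sketch only, using the AzureRM module and the same illustrative resource names as the backup script further down; the edition and service objective values are assumptions and should match your own database:

#Retrieve the live database, so we can reference its resource ID for the restore

$liveDb = Get-AzureRmSqlDatabase -ResourceGroupName 'myresourcegroup' -ServerName 'mysqlservername' -DatabaseName 'mydb'

#Restore the database as it was one hour ago into a brand new database on the same server

Restore-AzureRmSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).AddHours(-1).ToUniversalTime() `
    -ResourceGroupName $liveDb.ResourceGroupName `
    -ServerName $liveDb.ServerName `
    -ResourceId $liveDb.ResourceId `
    -TargetDatabaseName 'mydb_Restored' `
    -Edition 'Standard' `
    -ServiceObjectiveName 'S0'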

If any of the above conditions apply, then you can generate a copy of your database before any DACPAC deployment takes place via an Azure PowerShell script task. The example script below shows how to achieve this requirement, and is designed to mirror a specific business process; namely, a readily accessible database backup is generated before any upgrade takes place, as a copy within the same Azure SQL Server instance with the current date value appended to its name. When a new deployment is triggered in future, the script will delete the previously backed up database:

#Define parameters for the Azure SQL Server name, resource group and target database

$servername = 'mysqlservername'
$rg = 'myresourcegroup'
$db = 'mydb'

#Get any previous backed up databases and remove these from the SQL Server instance

$sqldbs = Get-AzureRmSqlDatabase -ResourceGroupName $rg -ServerName $servername | Select-Object DatabaseName | Where-Object {$_.DatabaseName -like ($db + '_Backup*')}

foreach ($sqldb in $sqldbs)
{
	Remove-AzureRmSqlDatabase -ResourceGroupName $rg -ServerName $servername -DatabaseName $sqldb.DatabaseName
}

#Get the current date as a string, with format DD_MM_YYYY

$date = Get-Date -Format 'dd_MM_yyyy'

#Create the name of the new database

$copydbname = $db + '_Backup_' + $date

#Actually create the copy of the database

New-AzureRmSqlDatabaseCopy -CopyDatabaseName $copydbname -DatabaseName $db -ResourceGroupName $rg -ServerName $servername -CopyServerName $servername

Simply add this on as a pipeline task before any database deployment task, connect up to your Azure subscription and away you go!

Backups are a constant aspect of any piece of significant IT delivery work and one which cloud service providers, such as Microsoft, have proactively tried to implement as part of their Platform-as-a-Service (PaaS) product lines. Azure SQL is no different in this regard, and you could argue that the point-in-time restore options listed above provide sufficient assurance in the event of a software deployment failure or a disaster-recovery scenario, meaning that no extra steps are necessary to protect yourself. Consider your particular needs carefully when looking to implement a solution like the one described in this post as, although it does afford you the ability to recover quickly from any failed software deployment, it does introduce additional complexity into your deployment procedures, overall technical architecture and – perhaps most importantly – cost.

As epitomised by the recent rebranding exercise Microsoft conducted with the product, I very much see Visual Studio Team Services/Team Foundation Server – now Azure DevOps – becoming an increasingly dominant tool for development teams the world over. I came into the product after discovering its benefits from Ben Walker at a CRMUG meeting a few years back and, since then, it has proved to be a real boon in helping me to:

  • Manage code across various .NET and Dynamics CRM/Dynamics 365 Customer Engagement projects.
  • Formalise the code review process, via the usage of pull request approvals.
  • Log and track backlogs across various project work.
  • Automate the project build process and deployment into development environments, allowing us to identify broken builds quickly.
  • Fully automate release cycles, reducing the risk of human error and potentially laborious late night working.

I really would urge any company carrying out some form of bespoke development, particularly within the Microsoft technology stack, to give Azure DevOps a look. Contrary to some of the biases that would previously creep into Microsoft’s products, Azure DevOps is very much built with openness at its core, allowing you to straightforwardly leverage individual sections or the whole breadth of its functionality to suit your particular purpose.

When you start working with Azure DevOps in more depth, it becomes obvious that having a general awareness of PowerShell will hold you in good stead. Although targeted towards more Microsoft-focused deployments, PowerShell becomes your go-to tool when working with Azure in particular. Recently, I had a PowerShell Script task that had previously executed with no issue whatsoever on a Windows 10 self-hosted agent. The first thing that the script did was to define the Execution Policy, using the following command:

Set-ExecutionPolicy Unrestricted

(For the uninitiated, the above is a typical requirement for PowerShell scripts, to ensure that you do not hit any permission issues during script execution.)

As stated already, this was all working fine and dandy, until one day, when my deployments suddenly started failing with the following error message:

Executing the script manually on the machine in question, in an elevated PowerShell window, confirmed that it was not an issue on Azure DevOps side:

Well, this is one of those occasions where the actual error message tells you everything you need to know to get things working again. Whether or not something had changed on the machine as part of an update, it was now necessary to modify the PowerShell command to include the -Scope option indicated in the error message. Therefore, the script should resemble the following:

Set-ExecutionPolicy Unrestricted -Scope CurrentUser
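
As a quick sanity check on the agent machine itself, the standard Get-ExecutionPolicy cmdlet can list the policy that applies at each scope:

#List the execution policy defined at every scope, from Group Policy down to the current process

Get-ExecutionPolicy -List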

We can then confirm that the script no longer errors when being executed on the machine in question:

And, most importantly, the release pipeline in question now completes without any error 🙂

Perhaps it’s just me, but often in these types of situations, you can overlook the completely obvious and find yourself going down the rabbit hole, chasing a non-existent or grandiose solution to a particular IT problem. As this example clearly demonstrates, sometimes just reading the error message you are presented with properly can save a lot of wasted effort and allow you to resolve a problem faster than expected.

Towards the back end of last year, I discovered the joys and simplicity of Visual Studio Team Services (VSTS)/Azure DevOps. Regardless of what type of development workload you face, the service provides a whole range of features that can speed up development, automate important build/release tasks and also assist with any testing workloads that you may have. Microsoft has devoted a lot of attention to ensuring that their traditional development tools/languages and newer ones, such as Azure Data Factory V2, are fully integrated with VSTS/Azure DevOps. And, even if you do find yourself firmly outside the Microsoft ecosystem, there is a whole range of different connectors to enable you to, for example, leverage Jenkins automation tasks for an Amazon Web Services (AWS) deployment. As with a lot of things to do with Microsoft today, you could not have predicted such extensive third-party support for a core Microsoft application 10 years ago. 🙂

Automated release deployments are perhaps the most useful feature available as part of VSTS/Azure DevOps. They address several business concerns that apply to organisations of any size:

  • Removes human intervention as part of repeatable business processes, reducing the risk of errors and streamlining internal processes.
  • Allows for clearly defined, auditable approval cycles for releases.
  • Provides developers with the flexibility for structuring deployments based on any predefined number of release environments.

You may be questioning at this stage just how complicated implementing such processes is, versus the expected benefits they can deliver. Fortunately, when it comes to Azure App Service deployments at least, there are predefined templates provided that should be suitable for most basic deployment scenarios:

This template will implement a single task to deploy to an Azure App Service resource. All you need to do is populate the required details for your subscription, App Service name etc. and you are ready to go! Optionally, if you are working with a Standard App Service Plan or above, you can take advantage of the slot functionality to stage your deployments before impacting any live instance. You can set this up by specifying the name of the slot you wish to deploy to as part of the Azure App Service Deploy task and then adding an Azure App Service Manage task to carry out the slot swap – with no coding required at any stage:

There may also be some additional options that need configuring as part of the Azure App Service Deploy task:

  • Publish using Web Deploy: I would recommend always leaving this enabled when deploying to Azure Web Apps, given the functionality afforded to us via the Kudu Web API.
  • Remove additional files at destination: Potentially not a safe option should you have a common requirement to interact with your App Service folder manually; in most cases, though, leaving this enabled ensures that your target environment remains consistent with your source project.
  • Exclude files from the App_Data folder: Depending on what your web application is doing, it’s possible that some required data your website relies on will exist in this folder. Enabling this setting prevents those files from being removed when the previous setting is switched on.
  • Take App Offline: Slightly misleading in that, instead of stopping your App Service, it places a temporary file (app_offline.htm) in the root directory that tells all website visitors that the application is offline. This temporary page will use one of the default templates that Azure provides for App Service error messages.

This last setting can cause some problems, particularly when it comes to working with .NET Core web applications:

Note also that the problem does not occur when working with a standard .NET Framework MVC application, which suggests that it is a specific symptom of .NET Core. It also occurs when utilising slot deployments, as indicated above.

To get around this problem, we must address the issue that the Error Code points us toward – namely, that the web application files within the target location cannot be accessed/overwritten, because they are actively in use by the application in question. The cleanest way of fixing this is to take your application entirely offline by stopping the App Service instance, with the obvious trade-off being downtime for your application. This scenario is where the slot deployment functionality goes from being an optional, yet useful, feature to a wholly mandatory one. With this enabled (and the matching credit card limit to accommodate it), it is possible to implement the following deployment process:

  • Create a staging slot on Azure for your App Service, with a name of your choosing.
  • Structure a set of deployment tasks that carries out the following steps, in order (see the PowerShell sketch after this list for how the slot operations look in script form):
    • Stop the staging slot on your chosen App Service instance.
    • Deploy your web application to the staging slot.
    • Start the staging slot on your chosen App Service instance.
    • (Optionally) Execute any steps/logic that the application requires (e.g. compile MVC application views, execute a WebJob etc.).
    • Perform a Swap Slot, promoting the staging slot to your live, production instance.
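
For illustration only (the tasks above achieve all of this without writing any code), the same slot operations map onto the following Azure PowerShell cmdlets. This is a minimal sketch, assuming an App Service called 'mywebapp' with a slot named 'staging' in the resource group 'myresourcegroup'; both names are purely illustrative:

#Stop the staging slot before deploying, so that no files are locked by the running application

Stop-AzureRmWebAppSlot -ResourceGroupName 'myresourcegroup' -Name 'mywebapp' -Slot 'staging'

#<deploy your web application to the staging slot at this point>

#Start the staging slot again so the new build spins up

Start-AzureRmWebAppSlot -ResourceGroupName 'myresourcegroup' -Name 'mywebapp' -Slot 'staging'

#Swap the staging slot into production

Switch-AzureRmWebAppSlot -ResourceGroupName 'myresourcegroup' -Name 'mywebapp' -SourceSlotName 'staging' -DestinationSlotName 'production'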

The below screenshot, derived from VSTS/Azure DevOps, shows how this task list should be structured:

Even with this in place, I would still recommend that you look at taking your web application “offline” in the loosest sense of the word. The Take App Offline option is by far the easiest way of achieving this; for more tailored options, you would need to look at a specific landing page that redirects users during any update/maintenance cycle, providing the necessary notification to any users.

Setting up your first deployment pipeline(s) can throw up all sorts of issues that lead you down the rabbit hole. While this can be discouraging and lead to wasted time, the end result after a bit of perseverance is a highly scalable solution that can avoid many sleepless nights when it comes to managing application updates. And, as this example so neatly demonstrates, solutions to these problems often do not require a detailed coding route to implement – merely some drag & drop wizardry and some fiddling about with deployment task settings. I don’t know about you, but I am pretty happy with that state of affairs. 🙂

The introduction of Azure Data Factory V2 represents the most opportune moment for data integration specialists to start investing their time in the product. Version 1 (V1) of the tool, which I started taking a look at last year, missed a lot of critical functionality that – in most typical circumstances – I could achieve in a matter of minutes via a SQL Server Integration Services (SSIS) DTSX package. To quote some specific examples, the product had:

  • No support for control flow logic (foreach loops, if statements etc.).
  • Support for only “basic” data movement activities, with a minimal capability to perform or call data transformation activities from within SQL Server.
  • Some support for the deployment of DTSX packages, but with incredibly complex deployment options.
  • Little or no support for Integrated Development Environments (IDEs), such as Visual Studio, or other typical DevOps scenarios.

In what seems like a short space of time, the product has come on leaps and bounds to address these limitations:

Supported now are Filter and Until conditions, alongside the expected ForEach and If conditionals.

When connecting to SQL Data destinations, we now have the ability to execute pre-copy scripts.

SSIS Integration Runtimes can now be set up from within the Azure Data Factory V2 GUI – no need to revert to PowerShell.

And finally, there is full support for storing all created resources within GitHub or Visual Studio Team Services/Azure DevOps.

The final point is a particularly nice touch, and means that you can straightforwardly incorporate Azure Data Factory V2 as part of your DevOps strategy with minimal effort – an ARM Resource Template deployment, containing all of your Data Factory components, will get your resources deployed out to new environments with ease. What’s even better is that this deployment template is intelligent enough not to recreate existing resources and to only update those Data Factory resources that have changed. Very nice.
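
As a rough sketch of what that deployment looks like from Azure PowerShell (the file names below are assumptions based on a template and parameter file exported from your Data Factory), the template can be pushed out with a single cmdlet:

#Deploy the exported Data Factory ARM template into the target resource group.
#Incremental mode leaves any resources not defined in the template untouched.

New-AzureRmResourceGroupDeployment -ResourceGroupName 'myresourcegroup' `
    -TemplateFile '.\arm_template.json' `
    -TemplateParameterFile '.\arm_template_parameters.json' `
    -Mode Incremental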

Although a lot is provided for by Azure Data Factory V2 to assist with a typical DevOps cycle, there is one thing that the tool does not account for satisfactorily.

A critical aspect as part of any Azure Data Factory V2 deployment is the implementation of Triggers. These define the circumstances under which your pipelines will execute, typically either via an external event or based on a pre-defined schedule. Once activated, they effectively enter a “read-only” state, meaning that any changes made to them via a Resource Group Deployment will be blocked and the deployment will fail – as we can see below when running the New-AzureRmResourceGroupDeployment cmdlet directly from PowerShell:

It’s nice that the error is provided in JSON, as this can help to facilitate any error handling within your scripts.

The solution is simple – stop the Trigger programmatically as part of your DevOps execution cycle via the handy Stop-AzureRmDataFactoryV2Trigger cmdlet. This step involves just a single-line PowerShell cmdlet that is callable from an Azure PowerShell task. But what happens if you are deploying your Azure Data Factory V2 template for the first time?

I’m sorry, but your Trigger is in another castle.

The best (and only) way around this little niggle is to construct a script that checks whether the Trigger exists and skips the stop step if it doesn’t. The following parameterised PowerShell script file achieves this by attempting to stop the Trigger called ‘MyDataFactoryTrigger’:

param($rgName, $dfName)

Try
{
   Write-Host "Attempting to stop MyDataFactoryTrigger Data Factory Trigger..."
   Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -ErrorAction Stop | Out-Null
   Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -Force
   Write-Host -ForegroundColor Green "Trigger stopped successfully!"
}

Catch
{
    $errorMessage = $_.Exception.Message
    if ($errorMessage -like '*NotFound*')
    {
        Write-Host -ForegroundColor Yellow "Data Factory Trigger does not exist, probably because the script is being executed for the first time. Skipping..."
    }
    else
    {
        throw "An error occurred whilst retrieving the MyDataFactoryTrigger trigger."
    }
}

Write-Host "Script has finished executing."

To use this successfully within Azure DevOps, be sure to provide values for the parameters in the Script Arguments field:

You can use pipeline Variables within arguments, which is useful if you reference the same value multiple times across your tasks.
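
For example, assuming pipeline variables named ResourceGroupName and DataFactoryName have been defined (these names are purely illustrative), the Script Arguments field would contain something along these lines:

-rgName $(ResourceGroupName) -dfName $(DataFactoryName)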

With some nifty copy + paste action, you can accommodate the stopping of multiple Triggers as well – although if you have more than 3-4, then it may be more sensible to iterate over an array containing all of your Triggers, passed in at runtime.
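
As a rough sketch of that iterative approach, assuming the Trigger names are supplied as a single comma-separated parameter (an assumption on my part, rather than something the earlier scripts require), the stop logic could be wrapped in a loop along these lines:

param($rgName, $dfName, $triggerNames)

#Loop through each supplied Trigger name and stop it only if it exists on the Data Factory

foreach ($triggerName in ($triggerNames -split ','))
{
    $triggerName = $triggerName.Trim()
    $trigger = Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName $triggerName -ErrorAction SilentlyContinue

    if ($trigger)
    {
        Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName $triggerName -Force
        Write-Host "Trigger $triggerName stopped successfully."
    }
    else
    {
        Write-Host "Trigger $triggerName does not exist. Skipping..."
    }
}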

For completeness, you will also want to ensure that you restart the affected Triggers after any ARM Template deployment. The following PowerShell script will achieve this outcome:

param($rgName, $dfName)

Try
{
   Write-Host "Attempting to start MyDataFactoryTrigger Data Factory Trigger..."
   Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -Force -ErrorAction Stop
   Write-Host -ForegroundColor Green "Trigger started successfully!"
}

Catch
{
    throw "An error occurred whilst starting the MyDataFactoryTrigger trigger."
}

Write-Host "Script has finished executing."

The Azure Data Factory V2 offering has no doubt come on leaps and bounds in a short space of time…

…but you can’t shake the feeling that there is still a lot to be done. The current release, granted, feels very stable and production-ready, but there is a whole range of enhancements that could be introduced to allow better feature parity with SSIS DTSX packages. With these in place, and taking into account the very significant cost differences between the two offerings, Azure Data Factory V2 would become a no-brainer option for almost every data integration scenario. The future looks very bright indeed 🙂

A vital part of any DevOps automation activity is to facilitate automatic builds of code projects on regular cycles. In larger teams, this becomes particularly desirable for a multitude of reasons:

  • Provides a means of ensuring that builds do not contain any glaring code errors that prevent a successful compile from taking place.
  • Enables builds to be triggered in a central, “master” location to which all developers are regularly shipping code.
  • When incorporated as part of other steps during the build stage, other automation tasks can be bolted on straightforwardly – such as the running of Unit Tests and deployment of resources to development environment(s).

The great news is that, when working with either Visual Studio Team Services or Team Foundation Server (VSTS/TFS), the process of setting up the first automated build definition of your project is straightforward. All steps can be completed directly within the GUI interface and – although some of the detailed configuration settings can be a little daunting when reviewing them for the first time – the drag and drop interface means that you can quickly build consistent definitions that are easy to understand at a glance.

One such detailed configuration setting relates to your choice of Build and Release Agent. To provide the ability to carry out automated builds (and releases), VSTS/TFS requires a dedicated agent machine on which all required tasks can be executed. There are two flavours of Build and Release Agents:

  • Microsoft Hosted: Fairly self-explanatory, this is the most logical choice if your requirements are reasonably standard – for example, a simple build/release definition for an MVC ASP.NET application. Microsoft provides a range of different Build and Release Agents, covering different operating system vendors and versions of Visual Studio.
  • Self-Hosted: In some cases, you may require access to highly bespoke modules or third-party applications to ensure that your build/release definitions complete successfully. A good example may be a non-Microsoft PowerShell cmdlet library. Or, it could be that you have strict business requirements around the storage of deployment resources. This is where Self-Hosted agents come into play. By installing the latest version of the Agent onto a computer of your choice – Windows, macOS or Linux – you can then use this machine as part of your builds/releases within both VSTS & TFS. You can also take this further by setting up as many different Agent machines as you like and then group these into a “pool”, thereby allowing concurrent jobs and enhanced scalability.

The necessary trade-off when using Self-Hosted agents is that you must manage and maintain the machine yourself – for example, you will need to install a valid version of Visual Studio and SQL Server Data Tools if you wish to build SQL Server Database projects. What’s more, if issues start to occur, you are on your own (to a large extent) when diagnosing and resolving them. One such problem you may find is with permissions on the build agent, with variants of the following error that may crop up from time to time during your builds:

The error will most likely make an appearance if your Build and Release Agent goes offline, or if a build is interrupted due to an issue on the machine itself and specific files have been created mid-flight within the VSTS working directory. When VSTS/TFS then attempts a new build and tries to write to/recreate files that already exist, it fails, and the above error is displayed. I have observed that, even if the execution account on the Build Agent machine has sufficient privileges to overwrite files in the directory, you will still run into this issue. The best resolution I have found – in all cases to date – is to log in to the agent machine manually, navigate to the affected directory/file (in this example, C:\VSTS\_work\SourceRootMapping\5dc5253c-785c-4de1-b722-e936d359879c\13\SourceFolder.json) and delete the file/folder in question. Removing the offending items effectively gives the next Build definition execution a clean slate, and it should then complete without issue.
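
If you prefer to do this from an elevated PowerShell window on the agent rather than via Windows Explorer, a one-liner such as the following (reusing the example path above) does the job:

#Remove the orphaned source mapping file left behind by the interrupted build

Remove-Item -Path 'C:\VSTS\_work\SourceRootMapping\5dc5253c-785c-4de1-b722-e936d359879c\13\SourceFolder.json' -Force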

We are regularly told these days of the numerous benefits that “going serverless” can bring to the table, including, but not limited to, reduced management overhead, reduced cost of ownership and faster adoption of newer technology. The great challenge with all of this is that, because no two businesses are typically the same, there is often a requirement to boot up a Virtual Machine and run a specified app within a full server environment, so that we can achieve the level of functionality required to suit our business scenario. Self-Hosted agents are an excellent example of this concept in practice, and one that is hard to avoid using regularly, irrespective of how vexatious this may be. While the ability to use Microsoft Hosted Build and Release Agents is greatly welcome (especially given there is no cost involved), it would be nice if this could be “opened up” to allow additional Agent machine tailoring for specific situations. I’m not going to hold my breath in this regard though – if I were in Microsoft’s shoes, I would shudder at the thought of allowing complete strangers the ability to deploy and run custom libraries on my critical LOB application. It’s probably asking for more trouble than the convenience it would provide 🙂