Software deployments and updates are always a painful event for me. This feeling hasn't subsided over time, even though pretty much every development project I am involved with these days utilises Azure DevOps Release Pipelines to automate the whole process for our team. The key thing to stress about automation is that removing the human error aspect from the equation does not suddenly make all of your software deployments successful. In most cases, all you have done is reduce the number of times you have to stick your nose in to figure out what's gone wrong 🙂
Database upgrades, typically done via a Data-tier Application Package (DACPAC) deployment, can be the most nerve-racking of all. Careful consideration needs to be given to the settings you define in your publish profile XML (illustrated in the sketch below); otherwise, you may find either a) specific database changes are blocked entirely, due to dependency issues or because intended data loss will occur, or b) the types of changes you make result in unintended data loss. This last one is a particularly salient concern, and one which can be understood most fully by implementing staging or pre-production environments for your business systems. Despite the thought that needs factoring in before you can take advantage of DACPAC deployments, they represent the natural option of choice for managing your database upgrades more formally, particularly when there is a need to manage deployments into Azure SQL databases. This state of play is mostly thanks to the dedicated Azure SQL Database deployment task that handles all of this for us within Azure DevOps.
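To give a flavour of the settings in question, the sketch below shows how a DACPAC publish might be driven from PowerShell via SqlPackage.exe. Treat it as a minimal illustration only: the file, server and credential values are all placeholders, while the two /p: properties shown are standard SqlPackage publish properties that address precisely the data loss scenarios just described:
#A minimal sketch only - assumes SqlPackage.exe is on the PATH and all names/credentials are placeholders
#BlockOnPossibleDataLoss=True aborts the publish if the change would cause data loss
#DropObjectsNotInSource=False prevents objects missing from the DACPAC being dropped from the target
& SqlPackage.exe /Action:Publish `
    /SourceFile:"MyDatabase.dacpac" `
    /TargetServerName:"mysqlservername.database.windows.net" `
    /TargetDatabaseName:"mydb" `
    /TargetUser:"sqladmin" `
    /TargetPassword:"<password>" `
    /p:BlockOnPossibleDataLoss=True `
    /p:DropObjectsNotInSource=False
These same properties can equally be baked into the publish profile XML itself rather than passed on the command line.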
What this task doesn't make available to us, however, are any steps we may need to take to generate a snapshot of the database before the deployment begins - a phase which represents both a desirable and, often, necessary business requirement for software deployments. Now, I should point out that Azure SQL includes many built-in recovery and point-in-time restore options. These options are pretty extensive and enable you, depending on the service tier you have opted for, to restore your database to any single point in time within a 30-day period. The question that arises from this is fairly obvious - why go to the extra bother (and cost) of creating a separate database backup? Consider the following:
- The recovery time for a point-in-time restore can vary greatly, depending on several factors relating to your database size, current pricing tier and any transactions that may be running on the database itself. In situations where a short release window constrains you and your release must satisfy a strict success/fail condition, having to go through the restore process (sketched after this list) following a database upgrade could leave your application down longer than is mandated within your organisation. Having a previous version of the database already available means you can very quickly update your application connection strings to return the system to operational use if required.
- Having a replica copy of the database available directly after an upgrade can be incredibly useful if you need to reference data within the old version of the database post-upgrade. For example, a column may have been removed from one table and added to another, with the need to copy across all of this data accordingly. Although a point-in-time restore can be done to expose this information, having a backup of the old version of the database available straight after the upgrade helps expedite this work.
- Although Microsoft promise and provide an SLA with point-in-time restore, it is sometimes best to err on the side of caution 🙂 By taking a snapshot of the database yourself, you have full control over its hosting location and the peace of mind of knowing that the database is instantly accessible in case of an issue further down the line.
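For comparison, the point-in-time restore route referred to above would look something like the following - a minimal sketch, assuming the AzureRM module is available and using placeholder resource names throughout:
#Look up the source database so that its ResourceId can be passed to the restore cmdlet
$sourcedb = Get-AzureRmSqlDatabase -ResourceGroupName 'myresourcegroup' -ServerName 'mysqlservername' -DatabaseName 'mydb'
#Restore to a new database from a chosen point in time within the retention period
Restore-AzureRmSqlDatabase -FromPointInTimeBackup -PointInTime '2018-06-01 09:00:00' `
    -ResourceGroupName 'myresourcegroup' -ServerName 'mysqlservername' `
    -TargetDatabaseName 'mydb_Restored' -ResourceId $sourcedb.ResourceId `
    -Edition 'Standard' -ServiceObjectiveName 'S0'
Even in a best-case run, this operation can take considerably longer than simply repointing a connection string at an existing copy.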
If any of the above conditions apply, then you can generate a copy of your database before any DACPAC deployment takes place via an Azure PowerShell script task. The example script below is designed to mirror a specific business process: before any upgrade takes place, a readily accessible backup is generated as a copy of the database within the same Azure SQL Server instance, with the current date value appended onto its name. When a new deployment triggers in future, the script first deletes the previously backed-up database:
#Define parameters for the Azure SQL Server name, resource group and target database
$servername = 'mysqlservername'
$rg = 'myresourcegroup'
$db = 'mydb'
#Get any previously backed-up databases and remove them from the SQL Server instance
$sqldbs = Get-AzureRmSqlDatabase -ResourceGroupName $rg -ServerName $servername | Where-Object {$_.DatabaseName -like ($db + '_Backup*')}
foreach ($sqldb in $sqldbs)
{
    Remove-AzureRmSqlDatabase -ResourceGroupName $rg -ServerName $servername -DatabaseName $sqldb.DatabaseName
}
#Get the current date as a string, with format dd_MM_yyyy (locale-independent)
$date = Get-Date -Format 'dd_MM_yyyy'
#Create the name of the new database
$copydbname = $db + '_Backup_' + $date
#Actually create the copy of the database
New-AzureRmSqlDatabaseCopy -CopyDatabaseName $copydbname -DatabaseName $db -ResourceGroupName $rg -ServerName $servername -CopyServerName $servername
Simply add this on as an Azure PowerShell script task before any database deployment task in your pipeline, connect it up to your Azure subscription and away you go!
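If you want extra peace of mind that the copy was created successfully, a quick check along the following lines (again, all names are placeholders) can be run once the script completes:
#List any backup copies currently present on the server, together with their creation dates
Get-AzureRmSqlDatabase -ResourceGroupName 'myresourcegroup' -ServerName 'mysqlservername' |
    Where-Object {$_.DatabaseName -like 'mydb_Backup*'} |
    Select-Object DatabaseName, CreationDate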
Backups are an unavoidable aspect of any significant piece of IT delivery work, and one which cloud service providers, such as Microsoft, have proactively tried to implement as part of their Platform-as-a-Service (PaaS) product lines. Azure SQL is no different in this regard, and you could argue that the point-in-time restore options discussed above provide sufficient assurance in the event of a software deployment failure or a disaster-recovery scenario, meaning that no extra steps are necessary to protect yourself. Consider your particular needs carefully when looking to implement a solution like the one described in this post: although it affords you the ability to recover quickly from any failed software deployment, it introduces additional complexity into your deployment procedures, overall technical architecture and - perhaps most importantly - cost.