Is it just me or is British Summer Time (BST), AKA Daylight Saving Time (DST), an utterly pointless endeavour these days? Admittedly, on its introduction in 1916, it fulfilled a sensible objective – to provide more daylight hours during the summer. For agriculture, construction or other trades reliant on sufficient light to carry out their work, this was a godsend. In today’s modern world, with the proliferation of electricity and lighting in almost every corner of the UK, the whole concept now appears to be a curious relic of the western world. No major Asian, African or South American country adopts the practice and, given the increasingly important role that these continents now play on the global stage, it wouldn’t surprise me if the whole concept is consigned to the scrapheap within our lifetimes.

My fury concerning BST surfaces thanks to my experience working with IT systems and, in particular, Microsoft Azure. Typically, any service that has a Windows OS backend involved will do a pretty good job of determining the correct locale settings that need applying, including BST/DST. These settings will generally be inherited by most applications installed on the machine, including SQL Server. You can verify this at any time by running the following query, kindly demonstrated by SQL Server legend Pinal Dave, on your SQL Server instance:
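
The query in question follows the lines of the sketch below, which reads the Windows time zone name out of the registry of the server hosting SQL Server (the exact form used in Pinal Dave’s post may differ slightly):

EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE',
     'SYSTEM\CurrentControlSet\Control\TimeZoneInformation',
     'TimeZoneKeyName'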

As you can see from the underlying query, it is explicitly checking a Registry Key Value on the Windows Server where SQL Server resides – which has been set up for British (UK) locale settings. The Registry Key folder will, additionally, include information to tell the machine when to factor in BST/DST.

This is all well and good if we are managing dedicated, on-premise instances of SQL Server, where we have full control over our server environments. But what happens on a Single Azure SQL database within the UK South region? The above code snippet is not compatible with Azure SQL, so we have no way of finding out ourselves. We must turn to the following blog post from Microsoft to clarify things for us:

Currently, the default time zone on Azure SQL DB is UTC. Unfortunately, there is not possible to change by server configuration or database configuration.

We can verify this by running a quick GETDATE() query and comparing it against the current time during BST:
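
A minimal query for this check might look something like the following:

SELECT GETDATE() AS CurrentUTCDateTime, @@VERSION AS SQLVersion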

(@@VERSION returns the current edition/version of the SQL instance which, in this case, we can confirm is Azure SQL)

The result of all of this is that all date/time values in Azure SQL will be stored in UTC format, meaning that you will have to manage any conversions yourself between interfacing applications or remote instances. Fortunately, there is a way that you can resolve this issue without ever leaving Azure SQL.

On SQL Server 2016 onwards and on Azure SQL, Microsoft provides a system view that returns all time zones supported by the instance. By querying this view and listing every time zone, we can determine the correct one to use for BST:

SELECT *
FROM sys.time_zone_info
ORDER BY [name]

As highlighted above, for BST, GMT Standard Time is our timezone of choice, and we can see the correct offset. An additional flag field is included to indicate whether it is currently applicable or not.
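
For example, narrowing the earlier query down to just this row returns the name, offset and DST flag in one go:

SELECT [name], current_utc_offset, is_currently_dst
FROM sys.time_zone_info
WHERE [name] = 'GMT Standard Time'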

With the name value in hand, we have everything we need to start working with a T-SQL clause that I was overjoyed to discover recently – AT TIME ZONE. When applied to a date field type (datetime, datetime2 etc.) in a SELECT, it appends the time zone offset to the end of the date value. So, with some tinkering to our earlier GETDATE() query, we get the following output:

SELECT GETDATE() AT TIME ZONE 'GMT Standard Time', @@VERSION

While this is useful, in most cases, we would want any offset to be automatically applied against our Date/Time value. With some further refinement to the query via the DATEADD function, this requirement becomes achievable, and we can also view each value separately to verify everything is working as intended:

SELECT GETDATE() AS CurrentUTCDateTime,
	   GETDATE() AT TIME ZONE 'GMT Standard Time' AS CurrentUTCDateTimeWithGMTOffset,
	   DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'GMT Standard Time'), GETDATE()) AS CurrentGMTDateTime,
	   @@VERSION AS SQLVersion
	   
	   

Even better, the above works regardless of whether the offset is an increase or decrease to UTC. We can verify this fact by adjusting the query to instead convert into Haiti Standard Time which, at the time of writing, is observing DST and therefore sits at an offset of UTC-4.
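
A sketch of the adjusted query – simply the earlier statement with the Haiti time zone name swapped in – would look like this:

SELECT GETDATE() AS CurrentUTCDateTime,
       GETDATE() AT TIME ZONE 'Haiti Standard Time' AS CurrentUTCDateTimeWithHaitiOffset,
       DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'Haiti Standard Time'), GETDATE()) AS CurrentHaitiDateTime,
       @@VERSION AS SQLVersion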

So, as we can see, a few lines of code mean that we can straightforwardly work with data in our desired locale. 🙂

It does seem somewhat counter-intuitive that a service, such as Azure SQL, hosted within a specific location, does not work in the correct date/time formats for that region. When you consider the global scale of the Azure network and the challenges this brings to the fore, the decision to revert to a single time zone for all systems does make sense and provides a level & consistent playing field for developers. One comment I would have is that this particular quirk does not appear to be well signposted for those who are just getting started with the service, an omission that could cause severe business or technical problems in the long term if not correctly detected. What is ultimately most fortuitous is the simple fact that no overly complex coding or client application changes are required to fix this quirk. Which is how all IT matters should be – easy and straightforward to resolve.

Once upon a time, there was a new cloud service known as Windows Azure. Over time, this cloud service developed with new features, became known more generally as just Azure, embraced the unthinkable from a technology standpoint and also went through a complete platform overhaul. Longstanding Azure users will remember the “classic” portal, with its very…distinctive…user interface:

Image courtesy of Microsoft

As the range of different services offered on Azure increased and the call for more efficient management tools became almost deafening, Microsoft announced the introduction of a new portal experience and Resource Group Management for Azure resources, both of which are now the de facto means of interacting with Azure today. The old-style portal shown above was officially discontinued earlier this year. In line with these changes, Microsoft introduced new, Resource Manager compatible versions of pretty much every major service available on the “classic” portal…with some notable exceptions. A number of “classic” resource types – the ones covered later in this post – can still be created and worked with today using the new Azure portal.

This provides accommodation for those who are still operating compute resources dating back to the days of yore, allowing you to create and manage resources that may be needed to ensure the continued success of your existing application deployment. In most cases, you will not want to create these “classic” resources as part of new project work, as the equivalent Resource Manager options should be more than sufficient for your needs. The only question mark around this concerns Cloud Services. There is currently no equivalent Resource Manager resource available, with the recommended option for new deployments being Azure Service Fabric instead. Based on my research online, there appears to be quite a feature gap between the two offerings, with Azure Service Fabric arguably being overkill for more simple requirements. There also appears to be some uncertainty over whether Cloud Services are technically considered deprecated or not. I would highly recommend reading Andreas Helland’s blog post on the subject and forming your own opinion from there.

For both experiences, Microsoft provided a full set of automation tools in PowerShell to help developers carry out common tasks on the Azure Portal. These are split out into the standard Azure cmdlets for the “classic” experience and a set of AzureRM cmdlets for the new Resource Management approach. Although the “classic” Azure resource cmdlets are still available and supported, they very much operate in isolation – that is, if you have a requirement to interchangeably create “classic” and Resource Manager resources as part of the same script file, then you are going to encounter some major difficulties and errors. One example of this is that the ability to switch subscriptions that you have access, but not ownership, to becomes nigh on impossible to achieve. For this reason, I would recommend utilising AzureRM cmdlets solely, even if you have a requirement to create classic resources to maintain an existing deployment. To help accommodate this scenario, the New-AzureRmResource cmdlet really becomes your best friend. In a nutshell, it lets you create any Azure Resource of your choosing when executed. The catch around using it is that the exact syntax to utilise as part of the -ResourceType parameter can take some time to discover, particularly in the case of working with “classic” resources. What follows are some code snippets that, hopefully, provide you with a working set of cmdlets to create the “classic” resources covered in the sections below.

Before you begin…

To use any of the cmdlets that follow, make sure you have connected to Azure, selected your target subscription and have a Resource Group created to store your resources using the cmdlets below. You can obtain your Subscription ID by navigating to its properties within the Azure portal:

#Replace the parameter values below to suit your requirements

$subscriptionID = '36ef0d35-2775-40f7-b3a1-970a4c23eca2'
$rgName = 'MyResourceGroup'
$location = 'UK South'

Set-ExecutionPolicy Unrestricted
Login-AzureRmAccount
Set-AzureRmContext -SubscriptionId $subscriptionID
#Create Resource Group
New-AzureRMResourceGroup -Name $rgName -Location $location

With this done, you should hopefully encounter no problems executing the cmdlets that follow.

Cloud Services (classic)

#Create an empty Cloud Service (classic) resource in MyResourceGroup in the UK South region

New-AzureRmResource -ResourceName 'MyClassicCloudService' -ResourceGroupName $rgName `
                    -ResourceType 'Microsoft.ClassicCompute/domainNames' -Location $location -Force
                    

Disks (classic)

#Create a Disk (classic) resource using a Linux operating system in MyResourceGroup in the UK South region.
#Needs a valid VHD in a compatible storage account to work correctly

New-AzureRmResource -ResourceName 'MyClassicDisk' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicStorage/storageaccounts/disks' `
                    -Location $location `
                    -PropertyObject @{'DiskName'='MyClassicDisk' 
                    'Label'='My Classic Disk' 
                    'VhdUri'='https://mystorageaccount.blob.core.windows.net/mycontainer/myvhd.vhd'
                    'OperatingSystem' = 'Linux'
                    } -Force
                    

Network Security groups (classic)

#Create a Network Security Group (classic) resource in MyResourceGroup in the UK South region.

New-AzureRmResource -ResourceName 'MyNSG' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/networkSecurityGroups' `
                    -Location $location -Force
                    

Reserved IP Addresses (classic)

#Create a Reserved IP (classic) resource in MyResourceGroup in the UK South region.

New-AzureRmResource -ResourceName 'MyReservedIP' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/reservedIps' `
                    -Location $location -Force
                    

Storage Accounts (classic)

#Create a Storage Account (classic) resource in MyResourceGroup in the UK South region.
#Storage account will use Standard Locally Redundant Storage

New-AzureRmResource -ResourceName 'MyStorageAccount' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicStorage/StorageAccounts' `
                    -Location $location -PropertyObject @{'AccountType' = 'Standard-LRS'} -Force
                    

Virtual Networks (classic)

#Create a Virtual Network (classic) resource in MyResourceGroup in the UK South Region

New-AzureRmResource -ResourceName 'MyVNET' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/virtualNetworks' `
                    -Location $location `
                    -PropertyObject @{'AddressSpace' = @{'AddressPrefixes' = '10.0.0.0/16'}
                                      'Subnets' = @{'name' = 'MySubnet'
                                                    'AddressPrefix' = '10.0.0.0/24'
                                                    }
                                     }
                                     

VM Images (classic)

#Create a VM image (classic) resource in MyResourceGroup in the UK South region.
#Needs a valid VHD in a compatible storage account to work correctly

New-AzureRmResource -ResourceName 'MyVMImage' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicStorage/storageAccounts/vmImages' `
                    -Location $location `
                    -PropertyObject @{'Label' = 'MyVMImage Label'
                    'Description' = 'MyVMImage Description'
                    'OperatingSystemDisk' = @{'OsState' = 'Specialized'
                                              'Caching' = 'ReadOnly'
                                              'OperatingSystem' = 'Windows'
                                              'VhdUri' = 'https://mystorageaccount.blob.core.windows.net/mycontainer/myvhd.vhd'}
                    }

Conclusions or Wot I Think

The requirement to work with the cmdlets shown in this post should only really be a concern for those who are maintaining “classic” resources as part of an ongoing deployment. It is therefore important to emphasise not to use these cmdlets to create resources for new projects. Alongside the additional complexity involved in constructing the New-AzureRmResource cmdlet, there is an abundance of new, updated AzureRM cmdlets at your disposal that enable you to create the correct types of resources more intuitively. The key benefit that these examples provide is the ability to use a single Azure PowerShell module for the management of your entire Azure estate, as opposed to having to switch back and forth between different modules. It is perhaps a testament to how flexible Azure is that cmdlets like New-AzureRmResource exist in the first place, ultimately enabling anybody to fine-tune deployment and maintenance scripts to suit any conceivable situation.

Repeatable and time-consuming tasks are typically an excellent candidate for automation. The range of business benefits that can be realised is perhaps too broad to list, but I think that the simple ability to free up an individual’s time to accomplish something better represents the ideal end goal of such activity. I have generally found that the best kind of automation is when there is a degree of human involvement on a very minimal basis – what I would term “keeping the brain involved” and not blindly assuming that the computer will always make the correct choice. A lot of the tools afforded to us when working with Microsoft cloud technologies appear to be very firmly rooted within this mindset, with frameworks such as PowerShell providing the means of carrying out sequences of tasks far quicker than a human could achieve, whilst also providing the mechanism to facilitate human involvement at key steps during any code execution cycle.

When creating a Web App via the Azure portal, you have the option of specifying the creation of an Application Insights resource, which will be automatically associated with your newly created Web App during the deployment. In most cases, you are going to want to take advantage of what this service can deliver to your application in terms of monitoring, usage patterns and error detection; the fact that I am such a major proponent of Application Insights should come as no surprise to regular readers of the blog. Should you find yourself having to deploy both of these resources in tandem via PowerShell, your first destination will likely be the New-AzureRmAppServicePlan & New-AzureRmWebApp cmdlets. For example, the following scripts when executed will create a Basic App Service Plan and Web App in the UK South region called MyWebsite, contained within a resource group with the same name:

New-AzureRMResourceGroup -Name 'MyWebsite' -Location 'UK South'
New-AzureRmAppServicePlan -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -Location 'UK South' -Tier 'Basic'
New-AzureRmWebApp -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -Location 'UK South' -AppServicePlan 'MyWebsite'

The next step involves the creation of the Application Insights resource, which you would be forgiven for thinking could be created as part of one of the cmdlets above (à la the portal). Instead, we must resort to a generic cmdlet that can be tinkered with to create any resource on the Azure platform, per the instructions outlined in this article. Therefore, the following cmdlet needs to be executed next to create an Application Insights resource using parameters identical to those defined for the App Service Plan/Web App:

$appInsights = New-AzureRmResource -ResourceName 'MyWebsite' -ResourceGroupName 'MyWebsite' `
-Tag @{ applicationType = 'web'; applicationName = 'MyWebsite'} `
-ResourceType 'Microsoft.Insights/components' -Location 'UK South' `
-PropertyObject @{'Application_Type'='web'} -Force

It’s worth pointing out at this stage that you may get an error returned along the lines of No registered resource provider found for location… when executing the New-AzureRmResource cmdlet. This is because not all resource providers are automatically registered for use via PowerShell on the Azure platform. This can be resolved by executing the below cmdlets to create the appropriate registration on your subscription. This can take a few minutes to update on the platform:

#Check to see if the Microsoft.Insights provider has a RegistrationState value of Registered.
#If not, execute Register-AzureRmResourceProvider to get it added.
#Then, keep running the first cmdlet until the registration is confirmed.

Get-AzureRmResourceProvider | Where ProviderNamespace -eq 'microsoft.insights'
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Insights

We now have a Web App and Application Insights resource deployed onto Azure. But, at this juncture, the Web App and Application Insights resources exist in isolation, with no link between them. To fix this, the final step involves updating the newly created Web App resource with the Application Insights Instrumentation Key, which is generated once the resource is created. Because the above snippet stores all details of the newly created resource within the $appInsights variable, we can very straightforwardly access this property and add a new application setting via the following cmdlets:

$appSetting = @{'APPINSIGHTS_INSTRUMENTATIONKEY'= $appInsights.Properties.InstrumentationKey}
Set-AzureRmWebApp -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -AppSettings $appSetting

With this final step accomplished, the resources are now associated together and this should be reflected accordingly when viewed in the portal. For completeness, the entire script to achieve the above (also including the necessary login steps) can be seen below:

Set-ExecutionPolicy Unrestricted

Login-AzureRmAccount

New-AzureRMResourceGroup -Name 'MyWebsite' -Location 'UK South'
New-AzureRmAppServicePlan -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -Location 'UK South' -Tier 'Basic'
New-AzureRmWebApp -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -Location 'UK South' -AppServicePlan 'MyWebsite'

#Check to see if the Microsoft.Insights provider has a RegistrationState value of Registered.
#If not, execute Register-AzureRmResourceProvider to get it added.
#Then, keep running the first cmdlet until the registration is confirmed.

Get-AzureRmResourceProvider | Where ProviderNamespace -eq 'microsoft.insights'
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Insights

$appInsights = New-AzureRmResource -ResourceName 'MyWebsite' -ResourceGroupName 'MyWebsite' `
-Tag @{ applicationType = 'web'; applicationName = 'MyWebsite'} `
-ResourceType 'Microsoft.Insights/components' -Location 'UK South' `
-PropertyObject @{'Application_Type'='web'} -Force

$appSetting = @{'APPINSIGHTS_INSTRUMENTATIONKEY'= $appInsights.Properties.InstrumentationKey}
Set-AzureRmWebApp -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -AppSettings $appSetting

The above example is interesting in the sense that Application Insights does not have a set of dedicated cmdlets for creating, retrieving and updating the resource. Instead, we must rely on fairly generic cmdlets – and their associated complexity – to work with this resource type. It also seems somewhat counter-intuitive that there is no option as part of the New-AzureRmWebApp cmdlet to create an Application Insights resource alongside the Web App, as we have established the ability to carry this out via the Azure portal. Being able to specify this as an additional parameter (that would also perform the required steps involving the Instrumentation Key) would help to greatly simplify what must be a fairly common deployment scenario. As a service that receives regular updates, we can hope that Microsoft eventually supports one or both of these scenarios to ensure that any complexity towards deploying Application Insights resources in an automated release is greatly reduced.

When considering whether or not to shift your existing SQL workloads to a single database offering on Azure SQL, one of the major pros is the breadth of capabilities the service can offer when compared with other vendors or with SQL Server running on an Azure Virtual Machine. These capabilities include:

  • High feature parity with the latest on-premise SQL Server offering.
  • Built-in support for Enterprise product features, such as Transparent Data Encryption.
  • Security management features, such as firewalls and (optional) integration with Azure SQL Database Threat Detection for proactive monitoring.
  • Ability to quickly scale a database from a 2GB database with low CPU consumption to a mammoth 4TB database, with a significant pool of CPU/memory resources to match.

It is the last one of these that makes Azure SQL database a particularly good fit for web application deployments that have unpredictable user loads at the time of deployment or, as we have seen previously on the blog, when you are wanting to deploy out a LOB reporting database that houses Dynamics 365 Customer Engagement instance data. Administrators can very straightforwardly scale or downscale a database at any time within the portal or, if you are feeling particularly clever, you can look to implement automatic scaling based on Database Throughput Unit (DTU) consumption. This can aid towards making your query execution times as speedy as possible.

Database scaling, I have found, is very straightforward to get your head around and works like a charm for the most part…except, of course, when you get rather cryptic error messages like the one demonstrated below:

I got this error recently when attempting to scale an S0 5GB database down to Basic 2GB tier. To cut a long story short, I had temporarily scaled up the database to give me increased DTU capacity for a particularly intensive query, and wanted to scale it back to its original pricing tier. You can perhaps understand my confusion about why this error was occurring. After further research and escalation to Microsoft, it turns out that the database was still consuming unused disk space on the platform, thereby violating any size limits imposed by moving to a lower price tier. To resolve the issue, there are some tasks that need to be performed on the database to get it into a “downscale-ready state”. These consist of a series of T-SQL scripts, which I would caution against using if the database is currently in use, due to potential performance impacts. If you have found yourself in the same boat as me and are happy to proceed, the steps involved are as follows:

  1. To begin with, the script below will repeatedly execute the DBCC SHRINKFILE command against the database file, shrinking it in stages down to the value specified in the @DesiredFileSize parameter. The script is written so as to perform the shrinking in “chunks” based on the value of the @ShrinkChunkSize parameter, which may be useful in managing DTU consumption:
SET NOCOUNT ON

DECLARE @CurrentFileSize INT, @DesiredFileSize INT, @ShrinkChunkSize INT, @ActualSizeMB INT,
		@ErrorIndication INT, @dbFileID INT = 1, @LastSize INT, @SqlCMD NVARCHAR(MAX),
		@msg NVARCHAR(100)

/*Set these values for the current operation, size is in MB*/
SET @DesiredFileSize = 2000  /* filesize is in MB */
SET @ShrinkChunkSize = 50 /* chunk size is in MB */

SELECT @CurrentFileSize = size/128 FROM sysfiles WHERE fileid = @dbFileID

SELECT @ActualSizeMB = (SUM(total_pages) / 128) FROM sys.allocation_units

SET @msg = 'Current File Size: ' + CAST(@CurrentFileSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET  @msg = 'Actual used Size: ' + CAST(@ActualSizeMB AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @msg = 'Desired File Size: ' + CAST(@DesiredFileSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @msg = 'Iteration shrink size: ' + CAST(@ShrinkChunkSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @ErrorIndication = CASE
							WHEN @DesiredFileSize > @CurrentFileSize THEN 1
							WHEN @ActualSizeMB > @DesiredFileSize THEN 2
							ELSE 0 END

IF @ErrorIndication = 1  
	RAISERROR('[Error] Desired size bigger than current size',0,0) WITH NOWAIT
IF @ErrorIndication = 2  
	RAISERROR('[Error] Actual size is bigger than desired size',0,0) WITH NOWAIT
IF @ErrorIndication = 0 
	RAISERROR('Desired Size check - OK',0,0) WITH NOWAIT

SET @LastSize = @CurrentFileSize + 1

WHILE @CurrentFileSize > @DesiredFileSize /*check if we got the desired size*/ AND @LastSize>@CurrentFileSize /* check if there is progress*/ AND @ErrorIndication=0
BEGIN
	SET @msg = CAST(GETDATE() AS VARCHAR(100)) + ' - Iteration starting'
	RAISERROR(@msg,0,0) WITH NOWAIT
	SELECT @LastSize = size/128 FROM sysfiles WHERE fileid = @dbFileID
	SET @sqlCMD = 'DBCC SHRINKFILE('+ CAST(@dbFileID AS VARCHAR(7)) + ',' + CAST(@CurrentFileSize-@ShrinkChunkSize AS VARCHAR(7)) + ') WITH NO_INFOMSGS;'
	EXEC (@sqlCMD)
	SELECT @CurrentFileSize = size/128 FROM sysfiles WHERE fileid  =@dbFileID
	SET @msg = CAST(getdate() AS VARCHAR(100)) + ' - Iteration completed. current size is: ' + CAST(@CurrentFileSize AS VARCHAR(10))
	RAISERROR(@msg,0,0) WITH NOWAIT
END
PRINT 'Done' 
  2. With the database successfully shrunk, verify that the size of the database does not exceed your target @DesiredFileSize value by running the following query:
SELECT * FROM sys.database_files
 
SELECT (SUM(reserved_page_count) * 8192) / 1024 / 1024 AS DbSizeInMB
FROM    sys.dm_db_partition_stats
  3. Although by this stage, the database file sizes should be underneath 2GB, the maximum size of the database is still set to match the pricing tier level. To fix this, execute the following script, substituting the name of your database where appropriate:
ALTER DATABASE MyDatabase MODIFY (MAXSIZE=2GB) 

 You can confirm that this command has been executed successfully by then running the following query and reviewing the output:

SELECT CAST(DATABASEPROPERTYEX ('MyDatabase', 'MaxSizeInBytes') AS FLOAT)/1024.00/1024.00/1024.00 AS 'DB Size in GB'
  4. With the above commands executed, you are now in a position to scale down your database without issue. There are a few ways this can be done but, as you likely already have SQL Server Management Studio or similar open to run the above queries, you can modify the tier of your database via this handy script:
--Scaling down to Basic is easy, as there is only one Max Size/Service Level Objective
--Therefore, just specify Edition

ALTER DATABASE MyDatabase MODIFY (EDITION = 'Basic');

--For other tiers, specify the size of the DB.
--In this example, we are scaling down from Premium P1 1TB to Standard S2 250GB tier

ALTER DATABASE MyDatabase MODIFY (EDITION = 'Standard', MAXSIZE = 250 GB, SERVICE_OBJECTIVE = 'S2');

Although the script will likely execute immediately and indicate as such in any output, the actual scaling operation on the backend Azure platform can take some time to complete – usually about 5-10 minutes for lower sized databases.
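
If you want to keep an eye on progress while you wait, one option (a sketch, assuming you can connect to the logical server’s master database and substitute in your own database name) is to query the sys.dm_operation_status dynamic management view:

SELECT operation, state_desc, percent_complete, start_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'MyDatabase'
ORDER BY start_time DESC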

Whilst I was relieved that a workaround was available to get the database scaled down correctly, it would have been useful if the above error message was signposted better or if there was some kind of online support article that detailed that this could be a potential issue when moving a database between various pricing/sizing tiers. Hopefully, by sharing the above steps, others who are in the same boat can very quickly diagnose and resolve the issue without hammering their credit cards with increased database usage charges in the process. 🙂

When it comes to maintaining any kind of Infrastructure as a Service (IaaS) resource on a cloud provider, the steps involved are often more complex when compared with equivalent Platform as a Service (PaaS) offerings. This is compensated for by the level of control IaaS resources typically grant over the operating system environment and the applications that reside within it. This can be useful if, for example, your application needs to maintain a specific version of a framework/programming language and you do not want your chosen cloud provider to patch this behind the scenes, without your knowledge. One of the major trade-offs as part of all this, however, is that completing a comprehensive disaster recovery plan is no longer such a cakewalk, requiring instead significant effort to design, implement and test at regular intervals.

Microsoft Azure, like other cloud providers, offers Virtual Machines as its “freest” IaaS offering. This facilitates a whole breadth of customisation options for the underlying operating system, including the type (Windows or Linux), default software deployed and underlying network configuration. The same problems – with respect to disaster recovery – still exist and may even be compounded if your Virtual Machine hosts an application that is publicly available across the internet. Whilst you are able to make a copy of your VM somewhat quickly, there is no easy way to migrate across the public IP address of a Virtual Machine without considerable tinkering in the portal. This can lead to delays in initiating any failover or restore action, as well as the risk of introducing human error into the equation.

Fortunately, with a bit of PowerShell scripting, it is possible to fully automate this process. Say, for example, you need to restore a Virtual Machine using Managed Disks to a specific snapshot version, ensuring that the network configuration is mirrored and copied across to the new resource. The outline steps would look like this when getting things scripted out in PowerShell:

  1. Login to your Azure account and subscription where the VM resides.
  2. Create a new Managed Disk from a Recovery Services Vault snapshot.
  3. Obtain the deployment properties of the existing Virtual Machine and utilise this for the baseline configuration of the new Virtual Machine.
  4. Associate the newly created Managed Disk with the configuration created in step 3.
  5. Create a placeholder Public IP Address and swap this out with the existing Virtual Machine.
  6. Define a new Network Interface for the configuration created in step 3 and associate the existing Public IP Address to this.
  7. Create a new Network Security Group for the Network Interface added in step 6, copying all rules from the existing Virtual Machine Network Security Group.
  8. Create the new Virtual Machine from the complete configuration properties.

With all these steps completed, a consistent configuration is defined to create a Virtual Machine that is almost indistinguishable from the existing one and which, more than likely, has taken less than 30 minutes to create. 🙂 Let’s jump in and take a look at an outline script that will accomplish all of this.

Before we begin…

One of the pre-requisites for executing this script is that you have backed up your Virtual Machine using Recovery Services Vault and performed a recovery of a previous image snapshot to a Storage Account location. The below script also assumes the following regarding your target environment:

  • Your Virtual Machine must be using Managed Disks and has only one Operating System disk attached to it.
  • The affected Virtual Machine must be switched off and in a Stopped (deallocated) state on the platform.
  • The newly created Virtual Machine will reside in the same Virtual Network as the existing one.
  • The existing Network Security Group for the Virtual Machine utilises the default security rules added upon creation.

That’s enough talking now! Here’s the script:

#Specify variables below:

#subID: The subscription GUID, obtainable from the portal
#rg: The Resource Group name of where the Virtual Machine is located
#vmName: The name of the Virtual Machine
#vhdURI: The URL for the restored Virtual Hard Disk (VHD)
#diskName: The name for the newly created managed disk
#location: Location for the newly created managed disk
#storageAccountName: Name of the storage account where the restored VHD is stored
#storageAccountID: The Resource ID for the storage account where the VHD is stored
#containerName: Name of the container where the restored VHD is stored
#blobName: Name of the restored VHD config file, in JSON format
#oldNICName: Name of the existing VM Network Interface Card
#newNICName: Name of the Network Interface Card to be created for the copied VM
#newPIPName: Name of the new Public IP Address that will be swapped with the existing one
#oldPIPName: Name of the existing Public IP Address that will be swapped out with the new one.
#vnetName: Name of the Virtual Network used with the current Virtual Machine
#vnetSubnet: Name of the subnet on the Virtual Network used with the current Virtual Machine.
#$oldNSG: Name of the existing Network Security Group for the Virtual Machine
#$newNSG: Name for the newly created Network Security Group for the new Virtual Machine
#$destinationPath: Path for the VM config file to be downloaded to
#$ipConfig: Name of the IP config used for the Virtual Network

$subID = '8fb17d52-b6f7-43e4-a62d-60723ec6381d'
$rg = 'myresourcegroup'
$vmName = 'myexistingvm'
$vhdURI = 'https://mystorageaccount.blob.core.windows.net/vhde00f9ddadb864fbbabef2fd683fb350d/bbc9ed4353c5465782a16cae5d512b37.vhd'
$diskName = 'mymanagedisk'
$location = 'uksouth'
$storageAccountName = 'mystorageaccount'
$storageAccountID = '/subscriptions/5dcf4664-4955-408d-9215-6325b9e28c7c/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount'
$containerName = 'vhde00f9ddadb864fbbabef2fd683fb350d'
$blobName = 'config9064da15-b889-4236-bb8a-38670d22c066.json'
$newVMName = 'mynewvm'
$oldNICName = 'myexistingnic'
$newNICName = 'mynewnic'
$newPIPName = 'mynewpip'
$oldPIPName = 'myexistingpip'
$vnetName = 'myexistingvnet'
$vnetSubnet = 'myexistingsubnet'
$oldNSG = 'myexistingnsg'
$newNSG = 'mynewnsg'
$destinationPath = 'C:\vmconfig.json'
$ipConfig = 'myipconfig'

#Login into Azure and select the correct subscription

Login-AzureRmAccount
Select-AzureRmSubscription -Subscription $subID

#Get the VM properties that requires restoring - used later.

$vm = Get-AzureRmVM -Name $vmName -ResourceGroupName $rg

#Create managed disk from the storage account backup.

$diskConfig = New-AzureRmDiskConfig -AccountType 'StandardLRS' -Location $location -CreateOption Import -StorageAccountId $storageAccountID -SourceUri $vhdURI
$osDisk = New-AzureRmDisk -Disk $diskConfig -ResourceGroupName $rg -DiskName $diskName

#Download VM configuration file and define new VM configuration from this file

Set-AzureRmCurrentStorageAccount -Name $storageAccountName -ResourceGroupName $rg
Get-AzureStorageBlobContent -Container $containerName -Blob $blobName -Destination $destinationPath
$obj = ((Get-Content -Path $destinationPath -Raw -Encoding Unicode)).TrimEnd([char]0x00) | ConvertFrom-Json

$newVM = New-AzureRmVMConfig -VMSize $obj.'properties.hardwareProfile'.vmSize -VMName $newVMName

#Add newly created managed disk to new VM config

Set-AzureRmVMOSDisk -VM $newVM -ManagedDiskId $osDisk.Id -CreateOption "Attach" -Windows

#Create new Public IP and swap this out with existing IP Address

$pip = New-AzureRmPublicIpAddress -Name $newPIPName -ResourceGroupName $rg -Location $location -AllocationMethod Static
$vnet = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rg
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $vnetSubnet -VirtualNetwork $vnet
$oldNIC = Get-AzureRmNetworkInterface -Name $oldNICName -ResourceGroupName $rg
$oldNIC | Set-AzureRmNetworkInterfaceIpConfig -Name $ipConfig -PublicIpAddress $pip -Primary -Subnet $subnet
$oldNIC | Set-AzureRmNetworkInterface

#Define new VM network configuration settings, using existing public IP address, and add to configuration

$existPIP = Get-AzureRmPublicIpAddress -Name $oldPIPName -ResourceGroupName $rg
$nic = New-AzureRmNetworkInterface -Name $newNICName -ResourceGroupName $rg -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $existPIP.Id
$newVM = Add-AzureRmVMNetworkInterface -VM $newVM -Id $nic.Id

#Obtain existing Network Security Group and create a copy of it for the new Network Interface Card

$nsg = Get-AzureRmNetworkSecurityGroup -Name $oldNSG -ResourceGroupName $rg
$newNSG = New-AzureRmNetworkSecurityGroup -Name $newNSG -ResourceGroupName $rg  -Location $location
foreach($rule in $nsg.SecurityRules)
{
    $newNSG | Add-AzureRmNetworkSecurityRuleConfig -Name $rule.Name -Access $rule.Access -Protocol $rule.Protocol -Direction $rule.Direction -Priority $rule.Priority -SourceAddressPrefix $rule.SourceAddressPrefix -SourcePortRange $rule.SourcePortRange -DestinationAddressPrefix $rule.DestinationAddressPrefix -DestinationPortRange $rule.DestinationPortRange | Set-AzureRmNetworkSecurityGroup
}

$nic.NetworkSecurityGroup = $newNSG
$nic | Set-AzureRmNetworkInterface

#Create the VM. This may take some time to complete.

New-AzureRmVM -ResourceGroupName $rg -Location $location -VM $newVM

Conclusions or Wot I Think

Automation should be a key driver behind running an effective business and, in particular, any IT function that exists within. When architected prudently, repetitive and time-wasting tasks can be eliminated and the ever-pervasive risk of human error removed from business processes (unless, of course, the person defining the automation has made a mistake 🙂 ). The management of IaaS resources fits neatly into this category and, as I hope the example in this post has demonstrated, can take a particularly onerous task and reduce the complexity involved in carrying it out. This can help to save time and effort should the worst ever happen to your application. When compared with other cloud vendors, this is what ultimately makes Azure a good fit for organisations who are used to working with tools such as PowerShell; scenarios like this become almost a cakewalk to set up and require minimal additional study to get up and running.