Is it just me or is British Summer Time (BST), AKA Daylight Saving Time (DST), an utterly pointless endeavour these days? Admittedly, on its introduction in 1916, it fulfilled a sensible objective – to provide more daylight hours during the summer. For agriculture, construction or other industries reliant on sufficient light to carry out their work, this was a godsend. In today's modern world, with the proliferation of electricity and lighting services in almost every corner of the UK, the whole concept now appears to be a curious relic of the western world. No major Asian, African or South American country adopts the practice and, given the increased role that these continents now play on the global stage, it wouldn't surprise me if the whole concept becomes consigned to the scrapheap within our lifetimes.

My fury concerning BST surfaces thanks to my experience working with IT systems and, in particular, Microsoft Azure. Typically, any service that has a Windows OS backend involved will do a pretty good job of determining the correct locale settings that need applying, including BST/DST. These settings are generally inherited by most applications installed on the machine, including SQL Server. You can verify this at any time by running a query kindly demonstrated by SQL Server legend Pinal Dave on your SQL Server instance.
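A minimal sketch of that check, which reads the machine's time zone name from the standard Windows registry location via the xp_regread extended stored procedure, looks something like this (the variable name is illustrative):

DECLARE @TimeZone NVARCHAR(255)
EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE',
     'SYSTEM\CurrentControlSet\Control\TimeZoneInformation',
     'TimeZoneKeyName', @TimeZone OUTPUT
SELECT @TimeZone AS TimeZoneName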

As you can see from the underlying query, it is explicitly checking a Registry Key Value on the Windows Server where SQL Server resides – which has been set up for British (UK) locale settings. The Registry Key folder will, additionally, include information to tell the machine when to factor in BST/DST.

This is all well and good if we are managing dedicated, on-premises instances of SQL Server, where we have full control over our server environments. But what happens with a single Azure SQL database within the UK South region? The above code snippet is not compatible with Azure SQL, so we have no way of finding out ourselves. We must turn to the following blog post from Microsoft to clarify things for us:

Currently, the default time zone on Azure SQL DB is UTC. Unfortunately, it is not possible to change this via server or database configuration.

We can verify this by running a quick GETDATE() query and comparing it against the current time during BST:
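Something as simple as the following is enough (the column aliases are purely for readability):

SELECT GETDATE() AS CurrentDateTime, @@VERSION AS SQLVersion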

(@@VERSION returns the current edition/version of the SQL instance which, in this case, confirms that we are running on Azure SQL)

The upshot of all of this is that date/time values in Azure SQL will be stored in UTC format, meaning that you will have to manage any conversions yourself between interfacing applications or remote instances. Fortunately, there is a way that you can resolve this issue without ever leaving Azure SQL.

On all versions of SQL Server, Microsoft provides a system view that returns every time zone supported by the instance. Using this view, we can list all available time zones and identify the correct one to use for BST:

SELECT *
FROM sys.time_zone_info
ORDER BY [name]

As highlighted above, for BST, GMT Standard Time is our time zone of choice, and we can see the correct offset. An additional flag field, is_currently_dst, indicates whether DST is currently in effect for each time zone.
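If you would rather pull back just this row instead of scanning the full list, a quick filter on the name column does the job:

SELECT [name], current_utc_offset, is_currently_dst
FROM sys.time_zone_info
WHERE [name] = 'GMT Standard Time'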

With the name value in hand, we have everything we need to start working with a clause that I was overjoyed to recently discover – AT TIME ZONE. When applied to a date/time value (datetime, datetime2 etc.), it appends the relevant time zone offset to the end of the value. So, with some tinkering to our earlier GETDATE() query, we get the following output:

SELECT GETDATE() AT TIME ZONE 'GMT Standard Time', @@VERSION

While this is useful, in most cases, we would want any offset to be automatically applied against our Date/Time value. With some further refinement to the query via the DATEADD function, this requirement becomes achievable, and we can also view each value separately to verify everything is working as intended:

SELECT GETDATE() AS CurrentUTCDateTime,
	   GETDATE() AT TIME ZONE 'GMT Standard Time' AS CurrentUTCDateTimeWithGMTOffset,
	   DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'GMT Standard Time'), GETDATE()) AS CurrentGMTDateTime,
	   @@VERSION AS SQLVersion
	   
	   

Even better, the above works regardless of whether the offset is ahead of or behind UTC. We can verify this fact by adjusting the query to instead convert into Haiti Standard Time which, at the time of writing, is currently observing DST and sits at a UTC offset of -4 hours.
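Only the time zone name needs to change from the earlier query:

SELECT GETDATE() AS CurrentUTCDateTime,
       GETDATE() AT TIME ZONE 'Haiti Standard Time' AS CurrentUTCDateTimeWithHaitiOffset,
       DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'Haiti Standard Time'), GETDATE()) AS CurrentHaitiDateTime,
       @@VERSION AS SQLVersion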

So as we can see, a few lines of code mean that we can straightforwardly work with data in our desired locale. 🙂

It does seem somewhat counter-intuitive that a service such as Azure SQL, hosted within a specific region, does not work in the local date/time format for that region. When you consider the global scale of the Azure network and the challenges this brings to the fore, the decision to standardise on a single time zone for all systems does make sense and provides a level and consistent playing field for developers. One comment I would have is that this particular quirk does not appear to be well signposted for those who are just getting started with the service, an omission that could cause severe business or technical problems in the long term if not correctly detected. What is ultimately most fortuitous is the simple fact that no overly complex coding or client application changes are required to fix this quirk. Which is how all IT matters should be – easy and straightforward to resolve.

Once upon a time, there was a new cloud service known as Windows Azure. Over time, this cloud service developed with new features, became known more generally as just Azure, embraced the unthinkable from a technology standpoint and also went through a complete platform overhaul. Longstanding Azure users will remember the “classic” portal, with its very…distinctive…user interface:

Image courtesy of Microsoft

As the range of different services offered on Azure increased and the call for more efficient management tools became almost deafening, Microsoft announced the introduction of a new portal experience and Resource Group Management for Azure resources, both of which are now the de facto means of interacting with Azure today. The old style portal indicated above was officially discontinued earlier this year. In line with these changes, Microsoft introduced new, Resource Manager compatible versions of pretty much every major service available on the “classic” portal…with some notable exceptions. The following “classic” resources can still be created and worked with today using the new Azure portal:
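  • Cloud Services (classic)
  • Disks (classic)
  • Network Security Groups (classic)
  • Reserved IP Addresses (classic)
  • Storage Accounts (classic)
  • Virtual Networks (classic)
  • VM Images (classic)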

This provides accommodation for those who are still operating compute resources dating back to the days of yore, allowing you to create and manage resources that may be needed to ensure the continued success of your existing application deployment. In most cases, you will not want to create these "classic" resources as part of new project work, as the equivalent Resource Manager options should be more than sufficient for your needs. The only question mark around this concerns Cloud Services. There is currently no equivalent Resource Manager resource available, with the recommended option for new deployments being Azure Service Fabric instead. Based on my research online, there appears to be quite a feature gap between the two offerings, with Azure Service Fabric arguably being overkill for more simplistic requirements. There also appears to be some uncertainty over whether Cloud Services are technically considered deprecated or not. I would highly recommend reading Andreas Helland's blog post on the subject and forming your own opinion from there.

For both experiences, Microsoft provided a full set of automation tools in PowerShell to help developers carry out common tasks on the Azure Portal. These are split out into the standard Azure cmdlets for the "classic" experience and a set of AzureRM cmdlets for the new Resource Management approach. Although the "classic" Azure resource cmdlets are still available and supported, they very much operate in isolation – that is, if you have a requirement to interchangeably create "classic" and Resource Manager resources as part of the same script file, then you are going to encounter some major difficulties and errors. One example of this is that the ability to switch subscriptions that you have access, but not ownership, to becomes nigh on impossible to achieve. For this reason, I would recommend utilising the AzureRM cmdlets exclusively, even when you have a requirement to create classic resources to maintain an existing deployment. To help accommodate this scenario, the New-AzureRmResource cmdlet really becomes your best friend. In a nutshell, it lets you create any Azure Resource of your choosing when executed. The catch around using it is that the exact syntax to utilise as part of the -ResourceType parameter can take some time to discover, particularly in the case of working with "classic" resources. What follows are some code snippets that, hopefully, provide you with a working set of cmdlets to create the "classic" resources listed above.

Before you begin…

To use any of the cmdlets that follow, make sure you have connected to Azure, selected your target subscription and created a Resource Group to store your resources, using the cmdlets below. You can obtain your Subscription ID by navigating to its properties within the Azure portal:

#Replace the parameter values below to suit your requirements

$subscriptionID = '36ef0d35-2775-40f7-b3a1-970a4c23eca2'
$rgName = 'MyResourceGroup'
$location = 'UK South'

Set-ExecutionPolicy Unrestricted
Login-AzureRmAccount
Set-AzureRmContext -SubscriptionId $subscriptionID
#Create Resource Group
New-AzureRMResourceGroup -Name $rgName -Location $location

With this done, you should hopefully encounter no problems executing the cmdlets that follow.

Cloud Services (classic)

#Create an empty Cloud Service (classic) resource in MyResourceGroup in the UK South region

New-AzureRmResource -ResourceName 'MyClassicCloudService' -ResourceGroupName $rgName `
                    -ResourceType 'Microsoft.ClassicCompute/domainNames' -Location $location -Force
                    

Disks (classic)

#Create a Disk (classic) resource using a Linux operating system in MyResourceGroup in the UK South region.
#Needs a valid VHD in a compatible storage account to work correctly

New-AzureRmResource -ResourceName 'MyClassicDisk' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicStorage/storageaccounts/disks' ` 
                    -Location $location `
                    -PropertyObject @{'DiskName'='MyClassicDisk' 
                    'Label'='My Classic Disk' 
                    'VhdUri'='https://mystorageaccount.blob.core.windows.net/mycontainer/myvhd.vhd'
                    'OperatingSystem' = 'Linux'
                    } -Force
                    

Network Security Groups (classic)

#Create a Network Security Group (classic) resource in MyResourceGroup in the UK South region.

New-AzureRmResource -ResourceName 'MyNSG' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/networkSecurityGroups' `
                    -Location $location -Force
                    

Reserved IP Addresses (classic)

#Create a Reserved IP (classic) resource in MyResourceGroup in the UK South region.

New-AzureRmResource -ResourceName 'MyReservedIP' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/reservedIps' `
                    -Location $location -Force
                    

Storage Accounts (classic)

#Create a Storage Account (classic) resource in MyResourceGroup in the UK South region.
#Storage account will use Standard Locally Redundant Storage

New-AzureRmResource -ResourceName 'MyStorageAccount' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicStorage/StorageAccounts' ` 
                    -Location $location -PropertyObject @{'AccountType' = 'Standard-LRS'} -Force
                    

Virtual Networks (classic)

#Create a Virtual Network (classic) resource in MyResourceGroup in the UK South Region

New-AzureRmResource -ResourceName 'MyVNET' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/virtualNetworks' `
                    -Location $location `
                    -PropertyObject @{'AddressSpace' = @{'AddressPrefixes' = '10.0.0.0/16'}
                                      'Subnets' = @{'name' = 'MySubnet'
                                                    'AddressPrefix' = '10.0.0.0/24'
                                                    }
                                     }
                                     

VM Images (classic)

#Create a VM image (classic) resource in MyResourceGroup in the UK South region.
#Needs a valid VHD in a compatible storage account to work correctly

New-AzureRmResource -ResourceName 'MyVMImage' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicStorage/storageAccounts/vmImages' `
                    -Location $location `
                    -PropertyObject @{'Label' = 'MyVMImage Label'
                    'Description' = 'MyVMImage Description'
                    'OperatingSystemDisk' = @{'OsState' = 'Specialized'
                                              'Caching' = 'ReadOnly'
                                              'OperatingSystem' = 'Windows'
                                              'VhdUri' = 'https://mystorageaccount.blob.core.windows.net/mycontainer/myvhd.vhd'}
                    }

Conclusions or Wot I Think

The requirement to work with the cmdlets shown in this post should only really be a concern for those who are maintaining "classic" resources as part of an ongoing deployment. It is therefore important to emphasise not to use these cmdlets to create resources for new projects. Alongside the additional complexity involved in constructing the New-AzureRmResource cmdlet, there is an abundance of new, updated AzureRM cmdlets at your disposal that enable you to more intuitively create the correct types of resources. The key benefit that these examples provide is the ability to use a single Azure PowerShell module for the management of your entire Azure estate, as opposed to having to switch back and forth between different modules. It is perhaps a testament to how flexible Azure is that cmdlets like New-AzureRmResource exist in the first place, ultimately enabling anybody to fine-tune deployment and maintenance scripts to suit any conceivable situation.

Repeatable and time-consuming tasks are typically an excellent candidate for automation. The range of business benefits that can be realised is perhaps too broad to list, but I think that the simple ability to free up an individual's time to accomplish something better represents the ideal end goal of such activity. I have generally found that the best kind of automation is when there is a degree of human involvement on a very minimal basis – what I would term "keeping the brain involved" and not blindly assuming that the computer will always make the correct choice. A lot of the tools afforded to us when working with Microsoft cloud technologies appear to be very firmly rooted within this mindset, with frameworks such as PowerShell providing the means of carrying out sequences of tasks far quicker than a human could achieve, whilst also providing the mechanism to facilitate human involvement at key steps during any code execution cycle.

When creating a Web App via the Azure portal, you have the option of specifying the creation of an Application Insights resource, which will be automatically associated with your newly created Web App during the deployment. In most cases, you are going to want to take advantage of what this service can deliver to your application in terms of monitoring, usage patterns and error detection; the fact that I am such a major proponent of Application Insights should come as no surprise to regular readers of the blog. Should you find yourself having to deploy both of these resources in tandem via PowerShell, your first destination will likely be the New-AzureRmAppServicePlan & New-AzureRmWebApp cmdlets. For example, the following script, when executed, will create a Basic App Service Plan and Web App in the UK South region called MyWebsite, contained within a resource group with the same name:

New-AzureRMResourceGroup -Name 'MyWebsite' -Location 'UK South'
New-AzureRmAppServicePlan -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -Location 'UK South' -Tier 'Basic'
New-AzureRmWebApp -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -Location 'UK South' -AppServicePlan 'MyWebsite'

Next comes the creation of the Application Insights resource, which you would be forgiven for thinking could be created as part of one of the cmdlets above (à la the portal). Instead, we must resort to a generic cmdlet that can be tinkered with to create any resource on the Azure platform, per the instructions outlined in this article. Therefore, the following cmdlet needs to be executed to create an Application Insights resource using the same parameters defined for the App Service Plan/Web App:

$appInsights = New-AzureRmResource -ResourceName 'MyWebsite' -ResourceGroupName 'MyWebsite' `
-Tag @{ applicationType = 'web'; applicationName = 'MyWebsite'} `
-ResourceType 'Microsoft.Insights/components' -Location 'UK South' `
-PropertyObject @{'Application_Type'='web'} -Force

It’s worth pointing out at this stage that you may get an error returned along the lines of No registered resource provider found for location… when executing the New-AzureRmResource cmdlet. This is because not all resource providers are automatically registered for use via PowerShell on the Azure platform. This can be resolved by executing the below cmdlets to create the appropriate registration on your subscription. This can take a few minutes to update on the platform:

#Check to see if the Microsoft.Insights provider has a RegistrationState value of Registered.
#If not, execute Register-AzureRmResourceProvider to get it added.
#Then, keep running the first cmdlet until the registration is confirmed.

Get-AzureRmResourceProvider | Where ProviderNamespace -eq 'microsoft.insights'
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Insights

We now have a Web App and an Application Insights resource deployed onto Azure. But, at this juncture, the two resources exist in isolation, with no link between them. To fix this, the final step involves updating the newly created Web App resource with the Application Insights Instrumentation Key, which is generated once the resource is created. Because the above snippet stores all details of the newly created resource within the $appInsights variable, we can very straightforwardly access the Instrumentation Key and add it as a new application setting via the following cmdlets:

$appSetting = @{'APPINSIGHTS_INSTRUMENTATIONKEY'= $appInsights.Properties.InstrumentationKey}
Set-AzureRmWebApp -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -AppSettings $appSetting

With this final step accomplished, the resources are now associated together and this should be reflected accordingly when viewed in the portal. For completeness, the entire script to achieve the above (also including the necessary login steps) can be seen below:

Set-ExecutionPolicy Unrestricted

Login-AzureRmAccount

New-AzureRMResourceGroup -Name 'MyWebsite' -Location 'UK South'
New-AzureRmAppServicePlan -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -Location 'UK South' -Tier 'Basic'
New-AzureRmWebApp -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -Location 'UK South' -AppServicePlan 'MyWebsite'

#Check to see if the Microsoft.Insights provider has a RegistrationState value of Registered.
#If not, execute Register-AzureRmResourceProvider to get it added.
#Then, keep running the first cmdlet until the registration is confirmed.

Get-AzureRmResourceProvider | Where ProviderNamespace -eq 'microsoft.insights'
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Insights

$appInsights = New-AzureRmResource -ResourceName 'MyWebsite' -ResourceGroupName 'MyWebsite' `
-Tag @{ applicationType = 'web'; applicationName = 'MyWebsite'} `
-ResourceType 'Microsoft.Insights/components' -Location 'UK South' `
-PropertyObject @{'Application_Type'='web'} -Force

$appSetting = @{'APPINSIGHTS_INSTRUMENTATIONKEY'= $appInsights.Properties.InstrumentationKey}
Set-AzureRmWebApp -Name 'MyWebsite' -ResourceGroupName 'MyWebsite' -AppSettings $appSetting

The above example is interesting in the sense that Application Insights does not have a set of dedicated cmdlets for creating, retrieving and updating the resource. Instead, we must rely on fairly generic cmdlets – and their associated complexity – to work with this resource type. It also seems somewhat counter-intuitive that there is no option as part of the New-AzureRmWebApp cmdlet to create an Application Insights resource alongside the Web App, as we have established the ability to carry this out via the Azure portal. Being able to specify this as an additional parameter (that would also perform the required steps involving the Instrumentation Key) would help to greatly simplify what must be a fairly common deployment scenario. Given that the service receives regular updates, we can hope that Microsoft eventually supports one or both of these scenarios to ensure that any complexity towards deploying Application Insights resources in an automated release is greatly reduced.

When it comes to maintaining any kind of Infrastructure as a Service (IaaS) resource on a cloud provider, the steps involved are often more complex when compared with equivalent Platform as a Service (PaaS) offerings. This is compensated for by the level of control IaaS resources typically grant over the operating system environment and the applications that reside therein. This can be useful if, for example, your application needs to maintain a specific version of a framework/programming language and you do not want your chosen cloud provider to patch this behind the scenes, without your knowledge. One of the major trade-offs as part of all this, however, is the expectation that completing a comprehensive disaster recovery plan is no longer such a cakewalk, requiring instead significant effort to design, implement and test at regular intervals.

Microsoft Azure, like other cloud providers, offers Virtual Machines as its "freest" IaaS offering. This facilitates a whole breadth of customisation options for the underlying operating system, including the type (Windows or Linux), default software deployed and underlying network configuration. The same problems – with respect to disaster recovery – still exist and may even be compounded if your Virtual Machine is host to an application that is publicly available across the internet. Whilst you are able to make a copy of your VM somewhat quickly, there is no easy way to migrate across the public IP address of a Virtual Machine without considerable tinkering in the portal. This can lead to delays in initiating any failover or restore action, as well as the risk of introducing human error into the equation.

Fortunately, with a bit of PowerShell scripting, it is possible to fully automate this process. Say, for example, you need to restore a Virtual Machine using Managed Disks to a specific snapshot version, ensuring that the network configuration is mirrored and copied across to the new resource. The outline steps would look like this when getting things scripted out in PowerShell:

  1. Login to your Azure account and subscription where the VM resides.
  2. Create a new Managed Disk from a Recovery Services Vault snapshot.
  3. Obtain the deployment properties of the existing Virtual Machine and utilise this for the baseline configuration of the new Virtual Machine.
  4. Associate the newly created Managed Disk with the configuration created in step 3.
  5. Create a placeholder Public IP Address and swap this out with the existing Virtual Machine.
  6. Define a new Network Interface for the configuration created in step 3 and associate the existing Public IP Address to this.
  7. Create a new Network Security Group for the Network Interface added in step 6, copying all rules from the existing Virtual Machine Network Security Group.
  8. Create the new Virtual Machine from the complete configuration properties.

With all these steps completed, a consistent configuration is defined to create a Virtual Machine that is almost indistinguishable from the existing one and which, more than likely, has taken less than 30 minutes to create. 🙂 Let’s jump in and take a look at an outline script that will accomplish all of this.

Before we begin…

One of the pre-requisites for executing this script is that you have backed up your Virtual Machine using Recovery Services Vault and performed a recovery of a previous image snapshot to a Storage Account location. The below script also assumes the following regarding your target environment:

  • Your Virtual Machine must be using Managed Disks and has only one Operating System disk attached to it.
  • The affected Virtual Machine must be switched off and in a Stopped (deallocated) state on the platform.
  • The newly created Virtual Machine will reside in the same Virtual Network as the existing one.
  • The existing Network Security Group for the Virtual Machine utilises the default security rules added upon creation

That’s enough talking now! Here’s the script:

#Specify variables below:

#subID: The subscription GUID, obtainable from the portal
#rg: The Resource Group name of where the Virtual Machine is located
#vmName: The name of the Virtual Machine
#vhdURI: The URL for the restored Virtual Hard Disk (VHD)
#diskName: The name for the newly created managed disk
#location: Location for the newly created managed disk
#storageAccountName: Name of the storage account where the restored VHD is stored
#storageAccountID: The Resource ID for the storage account where the VHD is stored
#containerName: Name of the container where the restored VHD is stored
#blobName: Name of the restored VHD config file, in JSON format
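#newVMName: The name for the newly created Virtual Machine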
#oldNICName: Name of the existing VM Network Interface Card
#newNICName: Name of the Network Interface Card to be created for the copied VM
#newPIPName: Name of the new Public IP Address that will be swapped with the existing one
#oldPIPName: Name of the existing Public IP Address that will be swapped out with the new one.
#vnetName: Name of the Virtual Network used with the current Virtual Machine
#vnetSubnet: Name of the subnet on the Virtual Network used with the current Virtual Machine.
#oldNSG: Name of the existing Network Security Group for the Virtual Machine
#newNSG: Name for the newly created Network Security Group for the new Virtual Machine
#destinationPath: Path for the VM config file to be downloaded to
#ipConfig: Name of the IP config used for the Virtual Network

$subID = '8fb17d52-b6f7-43e4-a62d-60723ec6381d'
$rg = 'myresourcegroup'
$vmName = 'myexistingvm'
$vhdURI = 'https://mystorageaccount.blob.core.windows.net/vhde00f9ddadb864fbbabef2fd683fb350d/bbc9ed4353c5465782a16cae5d512b37.vhd'
$diskName = 'mymanagedisk'
$location = 'uksouth'
$storageAccountName = 'mystorageaccount'
$storageAccountID = '/subscriptions/5dcf4664-4955-408d-9215-6325b9e28c7c/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount'
$containerName = 'vhde00f9ddadb864fbbabef2fd683fb350d'
$blobName = 'config9064da15-b889-4236-bb8a-38670d22c066.json'
$newVMName = 'mynewvm'
$oldNICName = 'myexistingnic'
$newNICName = 'mynewnic'
$newPIPName = 'mynewpip'
$oldPIPName = 'myexistingpip'
$vnetName = 'myexistingvnet'
$vnetSubnet = 'myexistingsubnet'
$oldNSG = 'myexistingnsg'
$newNSG = 'mynewnsg'
$destinationPath = 'C:\vmconfig.json'
$ipConfig = 'myipconfig'

#Login into Azure and select the correct subscription

Login-AzureRmAccount
Select-AzureRmSubscription -Subscription $subID

#Get the properties of the VM that requires restoring - used later.

$vm = Get-AzureRmVM -Name $vmName -ResourceGroupName $rg

#Create managed disk from the storage account backup.

$diskConfig = New-AzureRmDiskConfig -AccountType 'StandardLRS' -Location $location -CreateOption Import -StorageAccountId $storageAccountID -SourceUri $vhdURI
$osDisk = New-AzureRmDisk -Disk $diskConfig -ResourceGroupName $rg -DiskName $diskName

#Download VM configuration file and define new VM configuration from this file

Set-AzureRmCurrentStorageAccount -Name $storageAccountName -ResourceGroupName $rg
Get-AzureStorageBlobContent -Container $containerName -Blob $blobName -Destination $destinationPath
$obj = ((Get-Content -Path $destinationPath -Raw -Encoding Unicode)).TrimEnd([char]0x00) | ConvertFrom-Json

$newVM = New-AzureRmVMConfig -VMSize $obj.'properties.hardwareProfile'.vmSize -VMName $newVMName

#Add newly created managed disk to new VM config

Set-AzureRmVMOSDisk -VM $newVM -ManagedDiskId $osDisk.Id -CreateOption "Attach" -Windows

#Create new Public IP and swap this out with existing IP Address

$pip = New-AzureRmPublicIpAddress -Name $newPIPName -ResourceGroupName $rg -Location $location -AllocationMethod Static
$vnet = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rg
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $vnetSubnet -VirtualNetwork $vnet
$oldNIC = Get-AzureRmNetworkInterface -Name $oldNICName -ResourceGroupName $rg
$oldNIC | Set-AzureRmNetworkInterfaceIpConfig -Name $ipConfig -PublicIpAddress $pip -Primary -Subnet $subnet
$oldNIC | Set-AzureRmNetworkInterface

#Define new VM network configuration settings, using existing public IP address, and add to configuration

$existPIP = Get-AzureRmPublicIpAddress -Name $oldPIPName -ResourceGroupName $rg
$nic = New-AzureRmNetworkInterface -Name $newNICName -ResourceGroupName $rg -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $existPIP.Id
$newVM = Add-AzureRmVMNetworkInterface -VM $newVM -Id $nic.Id

#Obtain existing Network Security Group and create a copy of it for the new Network Interface Card

$nsg = Get-AzureRmNetworkSecurityGroup -Name $oldNSG -ResourceGroupName $rg
$newNSG = New-AzureRmNetworkSecurityGroup -Name $newNSG -ResourceGroupName $rg  -Location $location
foreach($rule in $nsg.SecurityRules)
{
    $newNSG | Add-AzureRmNetworkSecurityRuleConfig -Name $rule.Name -Access $rule.Access -Protocol $rule.Protocol -Direction $rule.Direction -Priority $rule.Priority -SourceAddressPrefix $rule.SourceAddressPrefix -SourcePortRange $rule.SourcePortRange -DestinationAddressPrefix $rule.DestinationAddressPrefix -DestinationPortRange $rule.DestinationPortRange | Set-AzureRmNetworkSecurityGroup
}

$nic.NetworkSecurityGroup = $newNSG
$nic | Set-AzureRmNetworkInterface

#Create the VM. This may take some time to complete.

New-AzureRmVM -ResourceGroupName $rg -Location $location -VM $newVM

Conclusions or Wot I Think

Automation should be a key driver behind running an effective business and, in particular, any IT function that exists within. When architected prudently, repetitive and time-wasting tasks can be eliminated and the ever-pervasive risk of human error removed from business processes (unless, of course, the person defining the automation has made a mistake 🙂 ). The management of IaaS resources fits neatly into this category and, as I hope the example in this post has demonstrated, can take a particularly onerous task and reduce the complexity involved in carrying it out. This can help to save time and effort should the worst ever happen to your application. When compared with other cloud vendors, this is what ultimately makes Azure a good fit for organisations who are used to working with tools such as PowerShell; scenarios like this become almost a cakewalk to set up and require minimal additional study to get up and running.

This may be somewhat obvious from some of my more recent blog posts, but I am a huge fan of Application Insights at the moment. The best analogy I can use for it is that it's a web developer's Swiss Army knife that really puts you in the driving seat of understanding and maintaining your web application for the long term. The analytic tools available on the "knife" – from error reporting and user trends to server performance – provide a treasure trove of information that can help to shape successful software deployments and ensure that developers properly understand the impact of their work within a production environment. If you are developing web applications utilising the Azure stack, there really is no good reason not to use it; and, even if you are not, you can still utilise the tool wherever your web application is hosted.

Recently, whilst fiddling with the tool, I was surprised to find another feature which I had overlooked – Availability testing. In simple terms, this enables you to configure proactive monitoring of your web application to make sure it is responding to requests. This can be particularly useful if you have availability targets or SLAs that your web application needs to adhere to. The feature is also tightly integrated alongside the other performance metrics gathered by the service, making it the ideal choice for satisfying any data analytic requirements. I wasn't aware of the feature until I implemented it recently, so what I wanted to do in this week's blog post is demonstrate how straightforward it is to set up Availability testing within Application Insights.

Setting up your first Availability Test

What ultimately aids the journey towards configuring your first test is the simple nature of the process; however, there are a couple of settings that may require further explanation. The remainder of this section will go through the steps involved from start to finish. Be sure to have provisioned an Application Insights resource before starting and confirm that your target website is readily accessible over the internet.

Once your Application Insights resource is provisioned, navigate to the Availability tab on the left-hand side of the resource blade:

This will then expand the Availability test summary window, which should be blank as indicated below. Click on the Add Test button to begin creating the test:

There are a couple of basic information fields that need populating at this juncture:

  • Test name: This shouldn’t need explaining 🙂 Keep in mind the importance of having a unique name for the test, particularly if you are planning on utilising the reporting capabilities discussed later on in this post.
  • Test type: You have two options at your disposal here – a simple URL ping test, that contacts the URL you specify and captures the response code from the server, or a Multi-step web test, a more powerful test that carries out a set of predefined steps when crawling through your website. Visual Studio Enterprise is required to put these tests together and there are pricing implications when using it. You can find out more about this option via the Microsoft Docs website. For this example, we are going to use the URL ping test.
  • URL: This is the exact URL that will be contacted as part of the test. This will either be the root azurewebsites.net domain or your assigned custom domain.

Your Create test window should resemble the below after configuring the above:

The next step is to define the location where your test will be triggered from. If you are hosting your application in an existing Azure region, then it is recommended to select the next nearest geographical region for testing. Multiple regions can also be selected. In the example below, UK West is selected due to the website in question existing in the UK South region:

Next is to define your success/fail conditions for the test. This will vary depending on the type of test you are conducting, but for URL ping tests, you can define a timeout period for the test and also determine whether a specific HTTP response code is returned each time the test is executed. A 200 response code is the standard for indicating that a web page has loaded successfully, so this is the one that will be used for this example. You can also define this further by checking for specific keywords in the response text:

Finally, you can designate the alert settings for when a failure is detected. This can range from an email to a specified list of contacts (more on this shortly) through to a webhook notification to an external HTTP/HTTPS endpoint. Example settings are utilised in the below screenshot that should be suitable for most scenarios:

With everything configured, your Create test window should resemble the below if done correctly:

Press OK to create your test, which will begin executing immediately. You will need to wait some time for results to start appearing. For this example, we are attempting to query a website that has a specific deny rule for any non-allowed host, including the Availability test server. This is reflected in the results below, which indicate a 100% failure rate for the selected period:

Failed Test Notifications

As part of the example above, the test is configured to send email notifications out to Azure subscription owners when a fault with the application is detected. An example of how this looks can be seen below:

Whilst the above is useful for application developers/administrators to receive proactive notifications relating to their web applications, it doesn't provide much from a reporting standpoint. Fortunately, this is where one of the feature's major benefits comes into the equation.

Analysing Previous Availability Tests

Similar to other statistical information gathered by Application Insights, the results of each individual test are logged within Query Analytics as part of the availabilityResults schema. The following query can be used within Query Analytics to return key information for failed tests within a 24 hour period:

availabilityResults
| project timestamp, id, name, location, success, message, duration, appId, appName
| where success == 0 and timestamp > ago(24h)

If you are also wanting to utilise the Continuous Export feature, as discussed previously on the blog, then the great news is that this information is also fully exportable to your destination of choice. A SQL database, Power BI Dashboard, Azure Event Hub…your options are quite limitless 🙂 Below is the query that you can use to extract the key information from your Continuous Export stream:

SELECT 
    availabilityFlat.ArrayValue.testTimestamp AS TestTimestamp,
    availabilityFlat.ArrayValue.testRunId AS TestRunID,
    availabilityFlat.ArrayValue.testName AS TestName,
    availabilityFlat.ArrayValue.runLocation AS RunLocation,
    availabilityFlat.ArrayValue.result AS TestResult,
    availabilityFlat.ArrayValue.message AS Message,
    availabilityFlat.ArrayValue.durationMetric.value AS Duration,
    AR.EventProcessedUTCTime AS AvailabilityResultEventProcessedUTCTime
INTO AvailabilityResultsOutput
FROM AvailabilityResults AS AR
CROSS APPLY GetElements(AR.[availability]) AS availabilityFlat

Conclusions or Wot I Think

The best things in life are those that just keep on giving. My journey with Application Insights to date very much mirrors this. The number of business challenges and scenarios that I have been able to chuck towards it and walk away with an effective and easy-to-implement solution is growing every month. For me, it is slowly becoming the de facto tool to have deployed alongside any new web application. The ability to extend the tool further so that it is not just providing ongoing benefit but proactive capabilities, via the Availability feature, is the main area where I feel the tool thrives the most, both now and in the future. Anything that can take the headache out of diagnosing enterprise-grade web application systems, whilst simultaneously driving insight into how a website experience can be improved for end users, wins in my book, and Application Insights continually proves itself in this regard.