When it comes to maintaining any kind of Infrastructure as a Service (IaaS) resource on a cloud provider, the steps involved are often more complex than for equivalent Platform as a Service (PaaS) offerings. This is compensated for by the level of control IaaS resources typically grant over the operating system environment and the applications that reside within it. This can be useful if, for example, your application needs to remain on a specific version of a framework or programming language and you do not want your chosen cloud provider to patch this behind the scenes without your knowledge. One of the major trade-offs, however, is that putting together a comprehensive disaster recovery plan is no longer such a cakewalk, requiring instead significant effort to design, implement and test at regular intervals.

Microsoft Azure, like other cloud providers, offers Virtual Machines as its least restrictive IaaS offering. This facilitates a whole breadth of customisation options for the underlying operating system, including the type (Windows or Linux), the default software deployed and the underlying network configuration. The same problems – with respect to disaster recovery – still exist and may even be compounded if your Virtual Machine hosts an application that is publicly available across the internet. Whilst you are able to make a copy of your VM somewhat quickly, there is no easy way to migrate across the public IP address of a Virtual Machine without considerable tinkering in the portal. This can lead to delays in initiating any failover or restore action, as well as the risk of introducing human error into the equation.

Fortunately, with a bit of PowerShell scripting, it is possible to fully automate this process. Say, for example, you need to restore a Virtual Machine using Managed Disks to a specific snapshot version, ensuring that the network configuration is mirrored and copied across to the new resource. The outline steps for scripting this out in PowerShell would look like this:

  1. Login to your Azure account and subscription where the VM resides.
  2. Create a new Managed Disk from a Recovery Services Vault snapshot.
  3. Obtain the deployment properties of the existing Virtual Machine and utilise this for the baseline configuration of the new Virtual Machine.
  4. Associate the newly created Managed Disk with the configuration created in step 3.
  5. Create a placeholder Public IP Address and swap this out with the existing Virtual Machine.
  6. Define a new Network Interface for the configuration created in step 3 and associate the existing Public IP Address to this.
  7. Create a new Network Security Group for the Network Interface added in step 6, copying all rules from the existing Virtual Machine's Network Security Group.
  8. Create the new Virtual Machine from the complete configuration properties.

With all these steps completed, a consistent configuration is defined to create a Virtual Machine that is almost indistinguishable from the existing one and which, more than likely, has taken less than 30 minutes to create. 🙂 Let’s jump in and take a look at an outline script that will accomplish all of this.

Before we begin…

One of the prerequisites for executing this script is that you have backed up your Virtual Machine using a Recovery Services Vault and performed a recovery of a previous image snapshot to a Storage Account location (a sketch of this restore step follows the list below). The script also assumes the following regarding your target environment:

  • Your Virtual Machine must be using Managed Disks and have only one Operating System disk attached to it.
  • The affected Virtual Machine must be switched off and in a Stopped (deallocated) state on the platform.
  • The newly created Virtual Machine will reside in the same Virtual Network as the existing one.
  • The existing Network Security Group for the Virtual Machine utilises the default security rules added upon creation.
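
For reference, the restore step assumed above can itself be scripted. The below is a minimal sketch using the AzureRM Recovery Services cmdlets, assuming a hypothetical vault name ('myrecoveryvault') and a seven-day search window for recovery points; the VM, resource group and storage account names mirror the variables used in the main script:

#Minimal restore sketch - the vault name and date range are placeholders
$vault = Get-AzureRmRecoveryServicesVault -ResourceGroupName 'myresourcegroup' -Name 'myrecoveryvault'
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

#Locate the backup item for the VM and list the recovery points from the last 7 days
$container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVM -Status Registered -FriendlyName 'myexistingvm'
$backupItem = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM
$recoveryPoints = Get-AzureRmRecoveryServicesBackupRecoveryPoint -Item $backupItem -StartDate (Get-Date).AddDays(-7).ToUniversalTime() -EndDate (Get-Date).ToUniversalTime()

#Restore the chosen recovery point (here, the first returned) to the storage account used by the main script
Restore-AzureRmRecoveryServicesBackupItem -RecoveryPoint $recoveryPoints[0] -StorageAccountName 'mystorageaccount' -StorageAccountResourceGroupName 'myresourcegroup'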

That’s enough talking now! Here’s the script:

#Specify variables below:

#subID: The subscription GUID, obtainable from the portal
#rg: The Resource Group name of where the Virtual Machine is located
#vmName: The name of the Virtual Machine
#vhdURI: The URL for the restored Virtual Hard Disk (VHD)
#diskName: The name for the newly created managed disk
#location: Location for the newly created managed disk
#storageAccountName: Name of the storage account where the restored VHD is stored
#storageAccountID: The Resource ID for the storage account where the VHD is stored
#containerName: Name of the container where the restored VHD is stored
#blobName: Name of the restored VHD config file, in JSON format
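#newVMName: The name for the new (copied) Virtual Machine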
#oldNICName: Name of the existing VM Network Interface Card
#newNICName: Name of the Network Interface Card to be created for the copied VM
#newPIPName: Name of the new Public IP Address that will be swapped with the existing one
#oldPIPName: Name of the existing Public IP Address that will be swapped out with the new one.
#vnetName: Name of the Virtual Network used with the current Virtual Machine
#vnetSubnet: Name of the subnet on the Virtual Network used with the current Virtual Machine.
#oldNSG: Name of the existing Network Security Group for the Virtual Machine
#newNSG: Name for the newly created Network Security Group for the new Virtual Machine
#destinationPath: Path for the VM config file to be downloaded to
#ipConfig: Name of the IP config used for the Virtual Network

$subID = '8fb17d52-b6f7-43e4-a62d-60723ec6381d'
$rg = 'myresourcegroup'
$vmName = 'myexistingvm'
$vhdURI = 'https://mystorageaccount.blob.core.windows.net/vhde00f9ddadb864fbbabef2fd683fb350d/bbc9ed4353c5465782a16cae5d512b37.vhd'
$diskName = 'mymanagedisk'
$location = 'uksouth'
$storageAccountName = 'mystorageaccount'
$storageAccountID = '/subscriptions/5dcf4664-4955-408d-9215-6325b9e28c7c/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount'
$containerName = 'vhde00f9ddadb864fbbabef2fd683fb350d'
$blobName = 'config9064da15-b889-4236-bb8a-38670d22c066.json'
$newVMName = 'mynewvm'
$oldNICName = 'myexistingnic'
$newNICName = 'mynewnic'
$newPIPName = 'mynewpip'
$oldPIPName = 'myexistingpip'
$vnetName = 'myexistingvnet'
$vnetSubnet = 'myexistingsubnet'
$oldNSG = 'myexistingnsg'
$newNSG = 'mynewnsg'
$destinationPath = 'C:\vmconfig.json'
$ipConfig = 'myipconfig'

#Log in to Azure and select the correct subscription

Login-AzureRmAccount
Select-AzureRmSubscription -Subscription $subID

#Get the properties of the VM that requires restoring - used later.

$vm = Get-AzureRmVM -Name $vmName -ResourceGroupName $rg

#Create managed disk from the storage account backup.

$diskConfig = New-AzureRmDiskConfig -AccountType 'StandardLRS' -Location $location -CreateOption Import -StorageAccountId $storageAccountID -SourceUri $vhdURI
$osDisk = New-AzureRmDisk -Disk $diskConfig -ResourceGroupName $rg -DiskName $diskName

#Download VM configuration file and define new VM configuration from this file

Set-AzureRmCurrentStorageAccount -Name $storageAccountName -ResourceGroupName $rg
Get-AzureStorageBlobContent -Container $containerName -Blob $blobName -Destination $destinationPath
$obj = ((Get-Content -Path $destinationPath -Raw -Encoding Unicode)).TrimEnd([char]0x00) | ConvertFrom-Json

$newVM = New-AzureRmVMConfig -VMSize $obj.'properties.hardwareProfile'.vmSize -VMName $newVMName

#Add newly created managed disk to new VM config

Set-AzureRmVMOSDisk -VM $newVM -ManagedDiskId $osDisk.Id -CreateOption "Attach" -Windows

#Create new Public IP and swap this out with existing IP Address

$pip = New-AzureRmPublicIpAddress -Name $newPIPName -ResourceGroupName $rg -Location $location -AllocationMethod Static
$vnet = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rg
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $vnetSubnet -VirtualNetwork $vnet
$oldNIC = Get-AzureRmNetworkInterface -Name $oldNICName -ResourceGroupName $rg
$oldNIC | Set-AzureRmNetworkInterfaceIpConfig -Name $ipConfig -PublicIpAddress $pip -Primary -Subnet $subnet
$oldNIC | Set-AzureRmNetworkInterface

#Define new VM network configuration settings, using existing public IP address, and add to configuration

$existPIP = Get-AzureRmPublicIpAddress -Name $oldPIPName -ResourceGroupName $rg
$nic = New-AzureRmNetworkInterface -Name $newNICName -ResourceGroupName $rg -Location $location -SubnetId $subnet.Id -PublicIpAddressId $existPIP.Id
$newVM = Add-AzureRmVMNetworkInterface -VM $newVM -Id $nic.Id

#Obtain existing Network Security Group and create a copy of it for the new Network Interface Card

$nsg = Get-AzureRmNetworkSecurityGroup -Name $oldNSG -ResourceGroupName $rg
$newNSG = New-AzureRmNetworkSecurityGroup -Name $newNSG -ResourceGroupName $rg  -Location $location
foreach($rule in $nsg.SecurityRules)
{
    $newNSG | Add-AzureRmNetworkSecurityRuleConfig -Name $rule.Name -Access $rule.Access -Protocol $rule.Protocol -Direction $rule.Direction -Priority $rule.Priority -SourceAddressPrefix $rule.SourceAddressPrefix -SourcePortRange $rule.SourcePortRange -DestinationAddressPrefix $rule.DestinationAddressPrefix -DestinationPortRange $rule.DestinationPortRange | Out-Null
}

#Commit the copied rules to the new Network Security Group in a single update
$newNSG | Set-AzureRmNetworkSecurityGroup

$nic.NetworkSecurityGroup = $newNSG
$nic | Set-AzureRmNetworkInterface

#Create the VM. This may take some time to complete.

New-AzureRmVM -ResourceGroupName $rg -Location $location -VM $newVM
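
As an optional final check once the command returns, the provisioning and power state of the new Virtual Machine can be confirmed using the variables already defined above:

#Verify the new VM has provisioned and is running
Get-AzureRmVM -Name $newVMName -ResourceGroupName $rg -Status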

Conclusions or Wot I Think

Automation should be a key driver behind running an effective business and, in particular, any IT function that exists within it. When architected prudently, repetitive and time-wasting tasks can be eliminated and the ever-pervasive risk of human error can be removed from business processes (unless, of course, the person defining the automation has made a mistake 🙂 ). The management of IaaS resources fits neatly into this category and, as I hope the example in this post has demonstrated, automation can take a particularly onerous task and reduce the complexity involved in carrying it out. This can help to save time and effort should the worst ever happen to your application. When compared with other cloud vendors, this is what ultimately makes Azure a good fit for organisations that are used to working with tools such as PowerShell; scenarios like this become almost a cakewalk to set up and require minimal additional study to get up and running.

This may be somewhat obvious from some of my more recent blog posts, but I am a huge fan of Application Insights at the moment. The best analogy I can use for it is that it’s a web developer’s Swiss Army Knife, one that really puts you in the driving seat of understanding and maintaining your web application for the long term. The analytic tools available on the “knife” – from error reporting to user trends and server performance – provide a treasure trove of information that can help to shape successful software deployments and ensure that developers properly understand the impact of their work within a production environment. If you are developing web applications utilising the Azure stack, there really is no good reason not to use it; and, even if you are not, you can still utilise the tool wherever your web application is hosted.

Recently, whilst fiddling with the tool, I was surprised to find another feature which I had overlooked – Availability testing. In simple terms, this enables you to configure proactive monitoring of your web application to make sure it is responding to requests. This can be particularly useful if you have availability targets or SLAs that your web application needs to adhere to. The feature is also tightly integrated alongside the other performance metrics gathered by the service, making it the ideal choice for satisfying any data analytic requirements. I wasn’t aware of the feature until I implemented it recently, so what I wanted to do in this week’s blog post is demonstrate how straightforward it is to set up Availability testing within Application Insights.

Setting up your first Availability Test

What ultimately aids the journey towards configuring your first test is the simple nature of the process; however, there are a couple of settings that may require further explanation. The remainder of this section will go through the steps involved from start to finish. Be sure to have provisioned an Application Insights resource before starting and confirm that your target website is readily accessible over the internet.
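
As a quick sanity check before you start, the one-liner below (using a placeholder URL) will confirm that the site responds with a 200 status code from your machine:

#Confirm the target website responds over the internet - replace the URL with your own
(Invoke-WebRequest -Uri 'https://mywebapp.azurewebsites.net' -UseBasicParsing).StatusCode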

Once your Application Insights resource is provisioned, navigate to the Availability tab on the left-hand side of the resource blade:

This will then expand the Availability test summary window, which should be blank as indicated below. Click on the Add Test button to begin creating the test:

There are a couple of basic information fields that need populating at this juncture:

  • Test name: This shouldn’t need explaining 🙂 Keep in mind the importance of having a unique name for the test, particularly if you are planning on utilising the reporting capabilities discussed later on in this post.
  • Test type: You have two options at your disposal here – a simple URL ping test, which contacts the URL you specify and captures the response code from the server, or a Multi-step web test, a more powerful test that carries out a set of predefined steps when crawling through your website. Visual Studio Enterprise is required to put these tests together and there are pricing implications when using it. You can find out more about this option via the Microsoft Docs website. For this example, we are going to use the URL ping test.
  • URL: This is the exact URL that will be contacted as part of the test. This will either be the root azurewebsites.net domain or your assigned custom domain.

Your Create test window should resemble the below after configuring the above:

The next step is to define the location where your test will be triggered from. If you are hosting your application in an existing Azure region, then it is recommended to select the next nearest geographical region for testing. Multiple regions can also be selected. In the example below, UK West is selected due to the website in question existing in the UK South region:

Next is to define your success/fail conditions for the test. This will vary depending on the type of test you are conducting, but for URL ping tests, you can define a timeout period for the test and also determine whether a specific HTTP response code is returned each time the test is executed. A 200 response code is the standard for indicating that a web page has loaded successfully, so this is the one that will be used for this example. You can also refine this further by checking for specific keywords in the response text:

Finally, you can designate the alert settings for when a failure is detected. This can range from an email to a specified list of contacts (more on this shortly) through to a webhook notification to an external HTTP/HTTPS endpoint. Example settings are utilised in the below screenshot that should be suitable for most scenarios:

With everything configured, your Create test window should resemble the below if done correctly:

Press OK to create your test, which will begin executing immediately. You will need to wait some time for results to start appearing. For this example, we are attempting to query a website that has a specific deny rule for any non-allowed host, including the Availability test server. This is reflected in the results below, which indicate a 100% failure rate for the selected period:

Failed Test Notifications

As part of the example above, the test is configured to send email notifications out to Azure subscription owners when a fault with the application is detected. An example of how this looks can be seen below:

Whilst the above is useful for application developers/administrators to receive proactive notifications relating to their web applications, it doesn’t provide much from a reporting standpoint. Fortunately, this is where one of the feature’s major benefits comes into the equation.

Analysing Previous Availability Tests

Similar to other statistical information gathered by Application Insights, the results of each individual test are logged within Query Analytics as part of the availabilityResults schema. The following query can be used within Query Analytics to return key information for failed tests within a 24 hour period:

availabilityResults
| project timestamp, id, name, location, success, message, duration, appId, appName
| where success == 0 and timestamp > ago(24h)
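
The same query can also be executed programmatically via the Application Insights REST API, should you want to pull failed test results into another process. The sketch below assumes a hypothetical Application ID and API key, both obtainable from the API Access area of the resource:

#Run the failed availability test query via the Application Insights REST API
#The Application ID and API key below are placeholders - generate your own under API Access
$appId = '00000000-0000-0000-0000-000000000000'
$apiKey = 'myappinsightsapikey'
$query = 'availabilityResults | project timestamp, id, name, location, success, message, duration, appId, appName | where success == 0 and timestamp > ago(24h)'
$uri = "https://api.applicationinsights.io/v1/apps/$appId/query?query=$([uri]::EscapeDataString($query))"
$results = Invoke-RestMethod -Uri $uri -Headers @{ 'x-api-key' = $apiKey }
$results.tables[0].rows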

If you also want to utilise the Continuous Export feature, as discussed previously on the blog, then the great news is that this information is also fully exportable to your destination of choice. A SQL database, Power BI dashboard, Azure Event Hub…your options are quite limitless 🙂 Below is the query that you can use to extract the key information from your Continuous Export stream:

SELECT 
    availabilityFlat.ArrayValue.testTimestamp AS TestTimestamp,
    availabilityFlat.ArrayValue.testRunId AS TestRunID,
    availabilityFlat.ArrayValue.testName AS TestName,
    availabilityFlat.ArrayValue.runLocation AS RunLocation,
    availabilityFlat.ArrayValue.result AS TestResult,
    availabilityFlat.ArrayValue.message AS Message,
    availabilityFlat.ArrayValue.durationMetric.value AS Duration,
    AR.EventProcessedUTCTime AS AvailabilityResultEventProcessedUTCTime
INTO AvailabilityResultsOutput
FROM AvailabilityResults AS AR
CROSS APPLY GetElements(AR.[availability]) AS availabilityFlat

Conclusions or Wot I Think

The best things in life are those that just keep on giving. My journey with Application Insights to date very much mirrors this. The number of business challenges and scenarios that I have been able to chuck towards it and walk away with an effective and easy-to-implement solution is growing every month. For me, it is slowly becoming the de facto tool to have deployed alongside any new web application. The ability to extend the tool further so that it is not just providing ongoing benefit but also proactive capabilities, via the Availability feature, is the main area where I feel the tool thrives the most, both now and in the future. Anything that can take the headache out of diagnosing enterprise-grade web application systems, whilst simultaneously driving insight into how a website experience can be improved for end users, wins in my book, and Application Insights continually proves itself in this regard.

If you are looking for an easy-to-use and highly expandable mail relay service, SendGrid represents the most developer-friendly solution out in the market today. What’s even better is that it’s available on Azure, making it the ideal choice if you are developing an existing solution on the Azure stack. The best thing I like about the service is the extensive documentation covering every aspect of its Web API, structured to provide a clear explanation of endpoint methods, required properties, and example outputs – exactly the right way that all technical documentation should be laid out.

I recently had a requirement to integrate with the SendGrid API to extract email statistic information into a SQL database. My initial thoughts were that I would need to resort to a bespoke C# solution to achieve these requirements. However, keenly remembering my commitment this year to find opportunities to utilise the service more, I decided to investigate whether Microsoft Flow could streamline this process. Suffice to say, I was pleasantly surprised, and what I wanted to do as part of this week’s blog post was demonstrate how I was able to take advantage of Microsoft Flow to deliver my requirements. In the process, I hope to get you thinking about how you approach integration requirements in the future, challenging some of the preconceptions around this.

Before we get into creating the Flow itself…

…you will need to create a table within your SQL database to store the requisite data. This script should do the trick:

CREATE TABLE [dbo].[SendGridStatistic]
(
	[SendGridStatisticUID] [uniqueidentifier] NULL DEFAULT NEWID(),
	[Date] DATE NOT NULL,
	[CategoryName] VARCHAR(50) NULL,
	[Blocks] FLOAT NOT NULL,
	[BounceDrops] FLOAT NOT NULL,
	[Bounces] FLOAT NOT NULL,
	[Clicks] FLOAT NOT NULL,
	[Deferred] FLOAT NOT NULL,
	[Delivered] FLOAT NOT NULL,
	[InvalidEmail] FLOAT NOT NULL,
	[Opens] FLOAT NOT NULL,
	[Processed] FLOAT NOT NULL,
	[SpamReportDrops] FLOAT NOT NULL,
	[SpamReports] FLOAT NOT NULL,
	[UniqueClicks] FLOAT NOT NULL,
	[UniqueOpens] FLOAT NOT NULL,
	[UnsubscribeDrops] FLOAT NOT NULL,
	[Unsubscribes] FLOAT NOT NULL
)

A few things to point out with the above:

  • The CategoryName field is only required if you wish to return statistic information grouped by category from the API. The example that follows primarily covers this scenario, but I will also demonstrate how to return consolidated statistic information as well, if you wanted to exclude this column.
  • Microsoft Flow will only be able to map the individual statistic count values to FLOAT fields. If you attempt to use an INT, BIGINT etc. data type, then the option to map these fields will not appear. This is kind of annoying, given that FLOATs are effectively “dirty”, imprecise numbers, but since we are not working with decimal values, this shouldn’t cause any real problems.
  • The SendGridStatisticUID is technically optional and could be replaced by an INT/IDENTITY seed instead, or removed entirely. Remember, though, that it is always good practice to have a unique column value for each table, to aid in individual record operations.

In addition, you will also need to ensure you have generated an API key for SendGrid that has sufficient privileges to access the Statistic information for your account.

With everything ready, we can now “flow” quite nicely into building things out. The screenshot below demonstrates how the completed Flow should look from start to finish. The sections that follow will discuss what is required for each constituent element.

Recurrence

The major boon when working with Flows is the diverse options you have for triggering them – either based on certain conditions within an application or simply off a recurring schedule. For this example, as we will be extracting statistic information for an entire 24-hour period, you should ensure that the Flow executes at least once daily. The precise timing of this is up to you, but for this example, I have suggested 2 AM local time each day. The configured recurrence settings should resemble the below if done correctly:

You should be aware that when your Flow is first activated, it will execute straightaway, regardless of what settings you have configured above.

HTTP

As the SendGrid Web API is an HTTP endpoint, we can utilise the built-in HTTP connector to retrieve the information we need. This is done via a GET operation, with authentication achieved via a Raw header value containing the API key generated earlier. The tricky bit comes when building the URI so that the Flow retrieves the information we want – namely, all statistic information covering the previous day. There is also the (optional) requirement of ensuring that statistic information is grouped by category when retrieved. Fortunately, we can get around this problem by using a bit of Expression trickery to build a dynamic URI value each time the Flow is executed. The expression code to use will depend on whether or not you require category grouping. I have provided both examples below, so simply choose the one that meets your specific requirement:

Retrieve Consolidated Statistics
concat('https://api.sendgrid.com/v3/stats?start_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&end_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')))
Retrieve Statistics Grouped By Category
concat('https://api.sendgrid.com/v3/categories/stats?start_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&end_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&categories=cat1&categories=cat2')

Note: For this example, statistic information would be returned only for the categories that equal cat1 & cat2. These should be updated to suit your requirements, and you can add on additional categories by extending the URI value like so: &categories=cat3&categories=cat4 etc.

Your completed HTTP component should resemble the below if done correctly. Note in particular the requirement to have Bearer and a space before specifying your API key:
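
If you would like to sanity-check your API key and inspect the raw JSON before wiring up the Flow, the equivalent request can be issued from PowerShell. The below sketch mirrors the consolidated statistics URI and Bearer header described above (the API key value is a placeholder):

#Test the SendGrid stats endpoint outside of Flow - replace the API key with your own
$apiKey = 'SG.myapikey'
$date = (Get-Date).AddDays(-1).ToString('yyyy-MM-dd')
$uri = "https://api.sendgrid.com/v3/stats?start_date=$date&end_date=$date"
Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $apiKey" } | ConvertTo-Json -Depth 5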

Parse JSON

A successful 200 response from the Web API endpoint will return a JSON object, listing all statistic information grouped by date (and category, if used). I always struggle when it comes to working with JSON – a symptom of working too long with relational databases, I think – and I always find it challenging to serialise result sets from it. Once again, Flow comes to the rescue by providing a Parse JSON component. This was introduced with what appears to be little fanfare last year, but it really proves its capabilities in this scenario. The only bit you will need to worry about is providing a sample schema so that the service can properly interpret your data. The Use sample payload to generate schema option is the surest way of achieving this, and you can use the example payloads provided on the SendGrid website to facilitate this:

Retrieve Consolidated Statistics: https://sendgrid.com/docs/API_Reference/Web_API_v3/Stats/global.html

Retrieve Statistics Grouped By Category: https://sendgrid.com/docs/API_Reference/Web_API_v3/Stats/categories.html

An example screenshot is provided below in case you get stuck with this:

Getting the records into the database

Here’s where things get confusing…at least for me when I was building out this Flow for the first time. When you attempt to add an Insert row step to the Flow and specify your input from the Parse JSON step, Microsoft Flow will automatically add two Apply to each steps to properly handle the input. I can understand why this is the case, given that we are working with a nested JSON response, but it does provide an ample opportunity to revisit an internet meme of old…

Just pretend Xzibit is Microsoft Flow…

With the above ready and primed, you can begin to populate your Insert row step. Your first step here will more than likely be to configure your database connection settings using the + Add New Connection option:

The nicest thing about this is that you can utilise the on-premises gateway service to connect to a non-cloud database if required. The usual rules apply, regardless of where your database is located – use a minimum-privileged account, configure any required IP whitelisting etc.

With your connection configured, all that’s left is to provide the name of your table and then perform a field mapping exercise from the JSON response. If you are utilising the SendGridStatisticUID field, then this should be left blank to ensure that the default constraint kicks in correctly on the database side:

The Proof is in the Pudding: Testing your Flow

All that’s left now is to test your Flow. As highlighted earlier in the post, your Flow will automatically execute after being enabled, meaning that you will be in a position to determine very quickly if things are working or not. Assuming everything executes OK, you can verify that your database table resembles the below example:

This example output utilises the CategoryName value, which will result in multiple data rows for each date, depending on the number of categories you are working with. This is why the SendGridStatisticUID is so handy for this scenario 🙂
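
If you prefer to check the results from the command line rather than via SQL Server Management Studio, a quick query with the SqlServer PowerShell module will do the job (the server, database and credential values below are placeholders):

#Return the most recent rows from the statistics table created earlier
Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' -Database 'mydatabase' -Username 'myuser' -Password 'mypassword' -Query 'SELECT TOP 20 * FROM dbo.SendGridStatistic ORDER BY [Date] DESC'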

Conclusions or Wot I Think

When it came to delivering the requirements set out in this post’s introduction, I cannot overemphasise how much Microsoft Flow saved my bacon. My initial scoping exercise strongly led me towards having to develop a fully bespoke solution in code, with the additional work required to deploy this out to a dedicated environment for continuous execution. This would surely have led to:

  • Increased deployment time
  • Additional cost for the provisioning of a dedicated execution environment
  • Wasted time/effort due to bug-fixing or unforeseen errors
  • Long-term problems resulting from maintaining a custom code base and ensuring that other colleagues within the business could properly interpret the code.

Instead, I was able to design, build and fully test the above solution in less than 2 hours, utilising a platform that has a wide array of documentation and online support and which, for our particular scenario, did not result in any additional cost. And this, I feel, best summarises the value that Microsoft Flow can bring to the table. It overturns many of the assumptions that you generally have to make when implementing complex integration requirements, allowing you to instead focus on delivering an effective solution quickly and neatly. And, for those who need a bit more punch due to highly specific business requirements, Azure Logic Apps act as the perfect meeting ground for both sides of the spectrum. The next time you find yourself staring down the abyss of a seemingly impossible integration requirement, take a look at what Microsoft Flow can offer. You just might surprise yourself.