Towards the back end of last year, I discovered the joys and simplicity of Visual Studio Team Services (VSTS)/Azure DevOps. Regardless of the type of development workload you face, the service provides a whole range of features that can speed up development, automate important build/release tasks and assist with any testing workloads you may have. Microsoft has devoted a lot of attention to ensuring that its traditional development tools/languages, as well as newer ones such as Azure Data Factory V2, are fully integrated with VSTS/Azure DevOps. And even if you find yourself sitting firmly outside the Microsoft ecosystem, there is a whole range of connectors that enable you to, for example, leverage Jenkins automation tasks for an Amazon Web Services (AWS) deployment. As with a lot of things to do with Microsoft today, you could not have predicted such extensive third-party support for a core Microsoft application 10 years ago. 🙂

Automated release deployments are perhaps the most useful feature available within VSTS/Azure DevOps. They address several business concerns that apply to organisations of any size:

  • Remove human intervention from repeatable business processes, reducing the risk of errors and streamlining internal workflows.
  • Allow for clearly defined, auditable approval cycles for releases.
  • Provide developers with the flexibility to structure deployments across any predefined number of release environments.

You may be questioning at this stage just how complicated such processes are to implement, versus the expected benefits they can deliver. Fortunately, when it comes to Azure App Service deployments at least, there are predefined templates provided that should be suitable for most basic deployment scenarios:

This template will implement a single task to deploy to an Azure App Service resource. All you need to do is populate the required details for your subscription, App Service name etc. and you are ready to go! Optionally, if you are working with a Standard App Service Plan or above, you can take advantage of the slot functionality to stage your deployments before they impact any live instance. You can set this up by specifying the name of the slot you wish to deploy to as part of the Azure App Service Deploy task and then adding an Azure App Service Manage task to carry out the slot swap – with no coding required at any stage:

There may also be some additional options that need configuring as part of the Azure App Service Deploy task:

  • Publish using Web Deploy: I would recommend always leaving this enabled when deploying to Azure Web Apps, given the functionality afforded to us via the Kudu Web API.
  • Remove additional files at destination: Potentially not a safe option if you regularly need to interact with your App Service file system manually; in most cases, though, leaving this enabled ensures that your target environment remains consistent with your source project.
  • Exclude files from the App_Data folder: Depending on what your web application is doing, this folder may contain data that your website relies on. Enabling this setting prevents those files from being removed when used in tandem with the previous option.
  • Take App Offline: Slightly misleading in that, instead of stopping your App Service, it places a temporary file (app_offline.htm) in the root directory that tells all website visitors the application is offline. This temporary page uses one of the default templates that Azure provides for App Service error messages.

This last setting, in particular, can cause problems when it comes to working with .NET Core web applications:

 

Note also that the problem does not occur when working with a standard .NET Framework MVC application, which suggests that the issue is specific to .NET Core. It also occurs when utilising slot deployments, as indicated above.

To get around this problem, we must address the issue that the Error Code points us toward – namely, that the web application files within the target location cannot be accessed or overwritten because they are actively in use by the application in question. The cleanest way of fixing this is to take your application entirely offline by stopping the App Service instance, with the obvious trade-off being downtime for your application. This scenario is where the slot deployment functionality goes from being an optional, yet useful, feature to a wholly mandatory one. With this enabled (and the matching credit card limit to accommodate it), it is possible to implement the following deployment process (a rough PowerShell sketch of the same sequence follows the list):

  • Create a staging slot on Azure for your App Service, with a name of your choosing
  • Structure a deployment task that carries out the following steps, in order:
    • Stop the staging slot on your chosen App Service instance.
    • Deploy your web application to the staging slot.
    • Start the staging slot on your chosen App Service instance.
    • (Optionally) Execute any required steps/logic that is necessary for the application (e.g. compile MVC application views, execute a WebJob etc.)
    • Perform a Swap Slot, promoting the staging slot to your live, production instance.
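
As a rough illustration outside of VSTS/Azure DevOps, the stop/start/swap portions of the above sequence map onto the AzureRM PowerShell cmdlets as follows. The App Service, Resource Group and slot names here are hypothetical, and the actual deployment in step 2 would still be handled by the Azure App Service Deploy task (or your preferred deployment mechanism):

$rgName = 'MyResourceGroup'
$appName = 'MyWebApp'
$slotName = 'staging'

#1. Stop the staging slot so that deployment files are not locked
Stop-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -Slot $slotName

#2. Deploy your web application to the staging slot here (e.g. via the Azure App Service Deploy task)

#3. Start the staging slot again
Start-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -Slot $slotName

#4. (Optionally) execute any required application logic against the slot

#5. Swap the staging slot into the live, production instance
Switch-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -SourceSlotName $slotName -DestinationSlotName 'production'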

The below screenshot, derived from VSTS/Azure DevOps, shows how this task list should be structured:

Even with this in place, I would still recommend that you look at taking your web application “offline” in the minimal sense of the word. The Take App Offline option is by far the easiest way of achieving this; for more tailored options, you would need to look at a dedicated landing page that users are redirected to during any update/maintenance cycle, providing them with the necessary notification.

Setting up your first deployment pipeline(s) can throw up all sorts of issues that lead you down the rabbit hole. While this can be discouraging and lead to wasted time, the end result after a little perseverance is a highly scalable solution that can avoid many sleepless nights when it comes to managing application updates. And, as this example so neatly demonstrates, solutions to these problems often do not require a detailed coding route to implement – merely some drag & drop wizardry and some fiddling about with deployment task settings. I don’t know about you, but I am pretty happy with that state of affairs. 🙂

The introduction of Azure Data Factory V2 represents the most opportune moment for data integration specialists to start investing their time in the product. Version 1 (V1) of the tool, which I started taking a look at last year, lacked a lot of critical functionality – things that, in most typical circumstances, I could achieve in a matter of minutes via a SQL Server Integration Services (SSIS) DTSX package. To quote some specific examples, the product had:

  • No support for control flow logic (foreach loops, if statements etc.)
  • Support for only “basic” data movement activities, with minimal capability to perform or call data transformation activities from within SQL Server.
  • Some support for the deployment of DTSX packages, but with incredibly complex deployment options.
  • Little or no support for Integrated Development Environments (IDEs), such as Visual Studio, or other typical DevOps scenarios.

In what seems like a short space of time, the product has come on leaps and bounds to address these limitations:

Filter and Until activities are now supported, alongside the expected ForEach and If Condition activities.

When connecting to SQL Data destinations, we now have the ability to execute pre-copy scripts.

SSIS Integration Runtimes can now be set up from within the Azure Data Factory V2 GUI – no need to revert to PowerShell.

And finally, there is full support for storing all created resources within GitHub or Visual Studio Team Services/Azure DevOps.

The final one is a particularly nice touch, and means that you can straightforwardly incorporate Azure Data Factory V2 as part of your DevOps strategy with minimal effort – an ARM Resource Template deployment, containing all of your data factory components, will get your resources deployed out to new environments with ease. What’s even better is that this deployment template is intelligent enough not to recreate existing resources and only update Data Factory resources that have changed. Very nice.
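
As an illustration, a deployment of this kind boils down to a single New-AzureRmResourceGroupDeployment call in incremental mode – the template and parameter file names below are hypothetical placeholders for the ARM template generated from your Data Factory:

#Deploy a Data Factory V2 ARM template in Incremental mode, leaving existing, unchanged resources untouched
#File and resource group names are illustrative only
New-AzureRmResourceGroupDeployment -ResourceGroupName 'MyResourceGroup' `
                                   -TemplateFile '.\MyDataFactoryTemplate.json' `
                                   -TemplateParameterFile '.\MyDataFactoryTemplate.parameters.json' `
                                   -Mode Incremental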

Although a lot is provided for by Azure Data Factory V2 to assist with a typical DevOps cycle, there is one thing that the tool does not account for satisfactorily.

A critical aspect as part of any Azure Data Factory V2 deployment is the implementation of Triggers. These define the circumstances under which your pipelines will execute, typically either via an external event or based on a pre-defined schedule. Once activated, they effectively enter a “read-only” state, meaning that any changes made to them via a Resource Group Deployment will be blocked and the deployment will fail – as we can see below when running the New-AzureRmResourceGroupDeployment cmdlet directly from PowerShell:

It’s nice that the error is provided in JSON, as this can help to facilitate any error handling within your scripts.

The solution is simple – stop the Trigger programmatically as part of your DevOps execution cycle via the handy Stop-AzureRmDataFactoryV2Trigger cmdlet. This step involves just a single-line PowerShell command that is callable from an Azure PowerShell task. But what happens if you are deploying your Azure Data Factory V2 template for the first time?

I’m sorry, but your Trigger is another castle.

The best (and only) resolution to this little niggle is to construct a script that checks whether the Trigger exists and skips the stop step if it doesn’t yet exist. The following parameterised PowerShell script file achieves this by attempting to stop a Trigger called ‘MyDataFactoryTrigger’:

param($rgName, $dfName)

Try
{
    Write-Host "Attempting to stop MyDataFactoryTrigger Data Factory Trigger..."
    #Confirm the Trigger exists first; -ErrorAction Stop pushes any failure into the Catch block
    Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -ErrorAction Stop
    Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -Force
    Write-Host -ForegroundColor Green "Trigger stopped successfully!"
}
Catch
{
    $errorMessage = $_.Exception.Message
    if($errorMessage -like '*NotFound*')
    {
        Write-Host -ForegroundColor Yellow "Data Factory Trigger does not exist, probably because the script is being executed for the first time. Skipping..."
    }
    else
    {
        throw "An error occurred whilst retrieving the MyDataFactoryTrigger trigger."
    }
}

Write-Host "Script has finished executing."

To use this successfully within Azure DevOps, be sure to provide values for the parameters in the Script Arguments field of the Azure PowerShell task:

You can use pipeline Variables within arguments, which is useful if you reference the same value multiple times across your tasks.
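
For example, with the parameter names used in the script above and some hypothetical pipeline Variables, the Script Arguments field could contain:

-rgName $(ResourceGroupName) -dfName $(DataFactoryName)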

With some nifty copy + paste action, you can accommodate the stopping of multiple Triggers as well – although if you have more than 3-4, then it may be more sensible to perform some iteration involving an array containing all of your Triggers, passed at runtime.
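
A rough sketch of what that iteration might look like is shown below – the Trigger names are purely illustrative and supplied at runtime as an additional script argument (e.g. -triggerNames 'Trigger1','Trigger2'):

param($rgName, $dfName, [string[]]$triggerNames)

#Attempt to stop every Trigger in the supplied list, skipping any that do not yet exist
foreach($triggerName in $triggerNames)
{
    Try
    {
        Write-Host "Attempting to stop $triggerName Data Factory Trigger..."
        Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName $triggerName -ErrorAction Stop
        Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName $triggerName -Force
        Write-Host -ForegroundColor Green "$triggerName stopped successfully!"
    }
    Catch
    {
        if($_.Exception.Message -like '*NotFound*')
        {
            Write-Host -ForegroundColor Yellow "$triggerName does not exist yet. Skipping..."
        }
        else
        {
            throw "An error occurred whilst stopping the $triggerName trigger."
        }
    }
}

Write-Host "Script has finished executing."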

For completeness, you will also want to ensure that you restart the affected Triggers after any ARM Template deployment. The following PowerShell script will achieve this outcome:

param($rgName, $dfName)

Try
{
    Write-Host "Attempting to start MyDataFactoryTrigger Data Factory Trigger..."
    Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -Force -ErrorAction Stop
    Write-Host -ForegroundColor Green "Trigger started successfully!"
}
Catch
{
    throw "An error occurred whilst starting the MyDataFactoryTrigger trigger."
}

Write-Host "Script has finished executing."

The Azure Data Factory V2 offering has no doubt come leaps and bounds in a short space of time…

…but you can’t shake the feeling that there is a lot that still needs to be done. The current release, granted, feels very stable and production-ready, but I think there is a whole range of enhancements that could be introduced to allow better feature parity when compared with SSIS DTSX packages. With this in place, and when taking into account the very significant cost differences between both offerings, I think it would make Azure Data Factory V2 a no-brainer option for almost every data integration scenario. The future looks very bright indeed 🙂

The very nature of how businesses and organisations operate means that the sheer volume of sensitive or confidential data they accumulate over time presents a genuine challenge from a management and security point of view. Tools and applications like cloud storage, email and other information storage services can do great things; but on occasions where these are abused, such as when an employee emails out a list of business contacts to a personal email address, the penalties can extend well beyond a loss of reputation. Even more so with the introduction of GDPR earlier this year, there is now a clear and present danger that such actions could result in unwelcome scrutiny and, for larger organisations, a fine for simply not putting the appropriate technical safeguards in place. Being able to proactively – and straightforwardly – identify & classify information types and enforce some level of control over their dissemination, while not a silver bullet in any respect, does at least demonstrate an adherence to the “appropriate technical controls” principle that GDPR in particular likes to focus on.

Azure Information Protection (AIP) seeks to address these challenges in the modern era by providing system administrators with a toolbox to enforce good internal governance and controls over documents, based on a clear classification system. The solution integrates nicely with Azure and also with any on-premises environment, meaning that you don’t necessarily have to migrate your existing workloads into Office 365 to take full advantage of the service. It also offers:

  • Users the ability to track any sent document(s), find out where (and when) they have been read and revoke access at any time.
  • Full integration with the current range of Office 365 desktop clients.
  • The capability to protect non-Office documents, such as PDFs, requiring users to open them via a dedicated client that checks their relevant permissions before granting access.
  • Automation capabilities via PowerShell to bulk label existing document stores, based on parameters such as file names or the contents of the data (for example, mark as highly sensitive any document which contains a National Insurance Number) – see the sketch after this list.
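
As a quick flavour of that last capability, the AIP client installs an AzureInformationProtection PowerShell module whose cmdlets can be chained together along the following lines. The file share path and label GUID below are purely hypothetical placeholders for values from your own tenant and AIP policy:

#Illustrative sketch only: bulk-apply a label to every document in a file share
#Replace the path and LabelId GUID with values from your own environment/AIP policy
Get-ChildItem -Path '\\fileserver\HRDocuments' -File -Recurse |
    ForEach-Object {
        #Set-AIPFileLabel applies the specified label (identified by its GUID) to the file
        Set-AIPFileLabel -Path $_.FullName -LabelId '00000000-0000-0000-0000-000000000000'
    }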

Overall, I have found AIP to be a sensible and highly intuitive solution, but one that requires careful planning and configuration to realise its benefits fully. Even taking this into account, there is no reason why any business/organisation cannot utilise the service successfully.

However, if you are a small to medium size organisation, you may find that the Azure Information Protection offering has traditionally lacked some critical functionality.

You can see what I mean by performing a direct comparison between two flavours of Office 365 deployments – Office Business Premium (“BizPrem”) and Office Professional Plus (“ProPlus”). For those who are unaware of the differences:

  • Office Business Premium is the version of Office apps that you can download with a…you guessed it…Office Business Premium subscription. This product/SKU represents the optimal choice if you are dealing with a deployment that contains less than 250 users and you want to fully utilise the whole range of features included on Office 365.
  • Office Professional Plus is the edition bundled with the following, generally more expensive subscriptions:
    • Office 365 Education A1*
    • Office 365 Enterprise E3
    • Office 365 Government G3
    • Office 365 Enterprise E4
    • Office 365 Education A4*
    • Office 365 Government G4
    • Office 365 Enterprise E5
    • Office 365 Education A5*

    For the extra price, you get a range of additional features that may be useful for large-scale IT deployments. This includes, but is not limited to, Shared Computer Activation, support for virtualised environments, group policy support and – as has traditionally been the case – an enhanced experience while using AIP.

* In fact, these subscriptions will generally be the cheapest going on Office 365, but with the very notable caveat being that you have to be a qualifying education institute to purchase them. So no cheating I’m afraid 🙂

The salient point is that both of these Office versions support the AIP Client, the desktop application that provides the following additional button within your Office applications:

The Protect button will appear on the ribbon of Word, Excel and Outlook.

The above example, taken from an Office Business Premium deployment, differs when compared to Office Professional Plus:

Have you spotted it yet?

As mentioned in the introduction, the ability for users to explicitly set permissions on a per-document basis can be incredibly useful, but it has been missing entirely from Office Business Premium and other non-ProPlus subscriptions. This limitation means that users have lacked the ability to:

  • Specify (and override) the access permissions for the document – Viewer, Reviewer, Co-Author etc.
  • Assign permissions to individual users, domains or distribution/security groups.
  • Define a specified date when all access permissions will expire.

You will still be able to define organisation-level policies that determine how documents can be shared, based on a user-defined label, but you lose a high degree of personal autonomy that the solution can offer users, which – arguably – can be an essential factor in ensuring the success of the AIP deployment.

Well, the good news is that all of this is about to change, thanks to the September 2018 General Availability wave for Azure Information Protection.

This “by design” behaviour has, understandably, been a source of frustration for many, but, thanks to a UserVoice suggestion, is now no longer going to be a concern:

In the coming AIP September GA we will update the Office client requirement with the following:

“Office 365 with Office 2016 apps (minimum version 1805, build 9330.2078) when the user is assigned a license for Azure Rights Management (also known as Azure Information Protection for Office 365)”

This will allow the support of the AIP client to use protection labels in other Office subscriptions which are not ProPlus. This will require the use of Office clients which are newer then the version mentioned above and the end user should be assigned with the proper licence.

The introduction of this change was confirmed by the release of version 1.37.19.0 of the AIP client on Monday this week and is a very welcome addition to the client. Be aware, though, of the requirement to be using the May 2018 build (or later) of the Office 2016 apps to take advantage of this new functionality. Once that hurdle is overcome, this change suddenly makes the AIP offering a lot more powerful for existing small business users and a much easier sell for those who are contemplating adopting the product but cannot tolerate the cost burden associated with an Enterprise subscription. Microsoft is continually endeavouring to ensure a consistent “feedback loop” is provided for all users and customers to offer product improvement suggestions, and it is great to see this working in practice with AIP as our example. Now’s as good a time as any to evaluate AIP if you haven’t before.

Is it just me or is British Summer Time (BST), AKA Daylight Saving Time (DST), an utterly pointless endeavour these days? Admittedly, on its introduction in 1916, it fulfilled a sensible objective – to provide more daylight hours during the summer. For agricultural, construction or other services reliant on sufficient light to carry out their work, this was a godsend. In today’s modern world, with the proliferation of electricity and lighting services in almost every corner of the UK, the whole concept now appears to be a curious relic of the western world. No major Asian, African or South American country adopts the practice and, given the increasingly important role that these continents now play on the global stage, it wouldn’t surprise me if the whole concept becomes consigned to the scrapheap within our lifetimes.

My fury concerning BST surfaces thanks to my experience working with IT systems and, in particular, Microsoft Azure. Typically, any service that has a Windows OS backend involved will do a pretty good job in determining the correct locale settings that need applying, including BST/DST. These settings will generally inherit into most applications installed on the machine, including SQL Server. You can verify this at any time by running the following query, kindly demonstrated by SQL Server legend Pinal Dave, on your SQL Server instance:
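
Reproduced here as a sketch (xp_regread is an undocumented extended stored procedure, so treat the exact form as illustrative), the query reads the TimeZoneKeyName value from the registry:

DECLARE @TimeZone VARCHAR(50);

EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE',
     'SYSTEM\CurrentControlSet\Control\TimeZoneInformation',
     'TimeZoneKeyName',
     @TimeZone OUTPUT;

SELECT @TimeZone AS ServerTimeZone;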

As you can see from the underlying query, it is explicitly checking a Registry Key Value on the Windows Server where SQL Server resides – which has been set up for British (UK) locale settings. The Registry Key folder will, additionally, include information to tell the machine when to factor in BST/DST.

This is all well and good if we are managing dedicated, on-premise instances of SQL Server, where we have full control over our server environments. But what happens on a Single Azure SQL database within the UK South region? The above code snippet is not compatible with Azure SQL, so we have no way of finding out ourselves. We must turn to the following blog post from Microsoft to clarify things for us:

Currently, the default time zone on Azure SQL DB is UTC. Unfortunately, there is not possible to change by server configuration or database configuration.

We can verify this by running a quick GETDATE() query and comparing it against the current time during BST:
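
Something along the lines of the following does the job:

SELECT GETDATE() AS CurrentDateTime, @@VERSION AS SQLVersion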

(@@VERSION returns the current edition/version of the SQL instance which, in this case, we can confirm is Azure SQL)

The result of all of this is that all date/time values in Azure SQL will be stored in UTC format, meaning that you will have to manage any conversions yourself between interfacing applications or remote instances. Fortunately, there is a way that you can resolve this issue without ever leaving Azure SQL.

On SQL Server 2016 onwards, and on Azure SQL, Microsoft provides a system view, sys.time_zone_info, that returns all time zones supported by the instance. Using this view, we can determine the correct time zone name to use for BST by listing all time zones:

SELECT *
FROM sys.time_zone_info
ORDER BY [name]

As highlighted above, for BST, GMT Standard Time is our time zone of choice, and we can see the correct offset. The additional is_currently_dst flag field indicates whether daylight saving is currently being applied.

With the name value in hand, we have everything we need to start working with a T-SQL clause that I was overjoyed to discover recently – AT TIME ZONE. When applied to a date field type (datetime, datetime2 etc.), it appends the appropriate time zone offset to the end of the date value. So, with some tinkering to our earlier GETDATE() query, we get the following output:

SELECT GETDATE() AT TIME ZONE 'GMT Standard Time', @@VERSION

While this is useful, in most cases, we would want any offset to be automatically applied against our Date/Time value. With some further refinement to the query via the DATEADD function, this requirement becomes achievable, and we can also view each value separately to verify everything is working as intended:

SELECT GETDATE() AS CurrentUTCDateTime,
       GETDATE() AT TIME ZONE 'GMT Standard Time' AS CurrentUTCDateTimeWithGMTOffset,
       DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'GMT Standard Time'), GETDATE()) AS CurrentGMTDateTime,
       @@VERSION AS SQLVersion

Even better, the above works regardless of whether the offset is an increase or decrease relative to UTC. We can verify this by adjusting the query to instead convert into Haiti Standard Time which, at the time of writing, is currently observing DST and has a UTC offset of -4 hours:
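
The only change needed is the time zone name:

SELECT GETDATE() AS CurrentUTCDateTime,
       GETDATE() AT TIME ZONE 'Haiti Standard Time' AS CurrentUTCDateTimeWithHaitiOffset,
       DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'Haiti Standard Time'), GETDATE()) AS CurrentHaitiDateTime,
       @@VERSION AS SQLVersion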

So as we can see, a few lines of code mean that we can straightforwardly work with data in our desired locale. 🙂

It does seem somewhat counter-intuitive that a service, such as Azure SQL, hosted within a specific location, does not work in the correct date/time formats for that region. When you consider the global scale of the Azure network and the challenges this brings to the fore, the decision to revert to a single time zone for all systems does make sense and provides a level & consistent playing field for developers. One comment I would have is that this particular quirk does not appear to be well signposted for those who are just getting started with the service, an omission that could cause severe business or technical problems in the long term if not correctly detected. What is ultimately most fortuitous is the simple fact that no overly complex coding or client application changes are required to fix this quirk. Which is how all IT matters should be – easy and straightforward to resolve.

Once upon a time, there was a new cloud service known as Windows Azure. Over time, this cloud service developed with new features, became known more generally as just Azure, embraced the unthinkable from a technology standpoint and also went through a complete platform overhaul. Longstanding Azure users will remember the “classic” portal, with its very…distinctive…user interface:

Image courtesy of Microsoft

As the range of different services offered on Azure increased and the call for more efficient management tools became almost deafening, Microsoft announced the introduction of a new portal experience and Resource Group Management for Azure resources, both of which are now the de facto means of interacting with Azure today. The old style portal indicated above was officially discontinued earlier this year. In line with these changes, Microsoft introduced new, Resource Manager compatible versions of pretty much every major service available on the “classic” portal…with some notable exceptions. The following “classic” resources can still be created and worked with today using the new Azure portal:

This provides accommodation for those who are still operating compute resources dating back to the days of yore, allowing you to create and manage resources that may be needed to ensure the continued success of your existing application deployment. In most cases, you will not want to create these “classic” resources as part of new project work, as the equivalent Resource Manager options should be more than sufficient for your needs. The only question mark around this concerns Cloud Services. There is currently no equivalent Resource Manager resource available, with the recommended option for new deployments being Azure Service Fabric instead. Based on my research online, there appears to be quite a gap in functionality between the two offerings, with Azure Service Fabric arguably being overkill for more simplistic requirements. There also appears to be some uncertainty over whether Cloud Services are technically considered deprecated or not. I would highly recommend reading Andreas Helland’s blog post on the subject and forming your own opinion from there.

For both experiences, Microsoft provided a full set of automation tools in PowerShell to help developers carry out common tasks on the Azure Portal. These are split out into the standard Azure cmdlets for the “classic” experience and a set of AzureRM cmdlets for the new Resource Management approach. Although the “classic” Azure resource cmdlets are still available and supported, they very much operate in isolation – that is, if you have a requirement to interchangeably create “classic” and Resource Manager resources as part of the same script file, then you are going to encounter some major difficulties and errors. One example of this is that the ability to switch subscriptions that you have access, but not ownership, to becomes nigh on impossible to achieve. For this reason, I would recommend utilising AzureRM cmdlets solely if you ever have a requirement to create classic resources to maintain an existing deployment. To help accommodate this scenario, the New-AzureRmResource cmdlet really becomes your best friend. In a nutshell, it lets you create any Azure Resource of your choosing when executed. The catch around using it is that the exact syntax to utilise as part of the -ResourceType parameter can take some time to discover, particularly in the case of working with “classic” resources. What follows are some code snippets that, hopefully, provide you with a working set of cmdlets to create the “classic” resources highlighted in the screenshot above.

Before you begin…

To use any of the cmdlets that follow, make sure you have connected to Azure, selected your target subscription and have a Resource Group created to store your resources using the cmdlets below. You can obtain your Subscription ID by navigating to its properties within the Azure portal:

#Replace the parameter values below to suit your requirements

$subscriptionID = '36ef0d35-2775-40f7-b3a1-970a4c23eca2'
$rgName = 'MyResourceGroup'
$location = 'UK South'

Set-ExecutionPolicy Unrestricted
Login-AzureRmAccount
Set-AzureRmContext -SubscriptionId $subscriptionID
#Create Resource Group
New-AzureRMResourceGroup -Name $rgName -Location $location

With this done, you should hopefully encounter no problems executing the cmdlets that follow.

Cloud Services (classic)

#Create an empty Cloud Service (classic) resource in MyResourceGroup in the UK South region

New-AzureRmResource -ResourceName 'MyClassicCloudService' -ResourceGroupName $rgName `
                    -ResourceType 'Microsoft.ClassicCompute/domainNames' -Location $location -Force
                    

Disks (classic)

#Create a Disk (classic) resource using a Linux operating system in MyResourceGroup in the UK South region.
#Needs a valid VHD in a compatible storage account to work correctly

New-AzureRmResource -ResourceName 'MyClassicDisk' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicStorage/storageaccounts/disks' ` 
                    -Location $location `
                    -PropertyObject @{'DiskName'='MyClassicDisk' 
                    'Label'='My Classic Disk' 
                    'VhdUri'='https://mystorageaccount.blob.core.windows.net/mycontainer/myvhd.vhd'
                    'OperatingSystem' = 'Linux'
                    } -Force
                    

Network Security groups (classic)

#Create a Network Security Group (classic) resource in MyResourceGroup in the UK South region.

New-AzureRmResource -ResourceName 'MyNSG' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/networkSecurityGroups' `
                    -Location $location -Force
                    

Reserved IP Addresses (classic)

#Create a Reserved IP (classic) resource in MyResourceGroup in the UK South region.

New-AzureRmResource -ResourceName 'MyReservedIP' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/reservedIps' `
                    -Location $location -Force
                    

Storage Accounts (classic)

#Create a Storage Account (classic) resource in MyResourceGroup in the UK South region.
#Storage account will use Standard Locally Redundant Storage

New-AzureRmResource -ResourceName 'MyStorageAccount' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicStorage/StorageAccounts' ` 
                    -Location $location -PropertyObject @{'AccountType' = 'Standard-LRS'} -Force
                    

Virtual Networks (classic)

#Create a Virtual Network (classic) resource in MyResourceGroup in the UK South Region

New-AzureRmResource -ResourceName 'MyVNET' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/virtualNetworks' `
                    -Location $location `
                    -PropertyObject @{'AddressSpace' = @{'AddressPrefixes' = '10.0.0.0/16'}
                                      'Subnets' = @{'name' = 'MySubnet'
                                                    'AddressPrefix' = '10.0.0.0/24'
                                                    }
                                     }
                                     

VM Images (classic)

#Create a VM image (classic) resource in MyResourceGroup in the UK South region.
#Needs a valid VHD in a compatible storage account to work correctly

New-AzureRmResource -ResourceName 'MyVMImage' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicStorage/storageAccounts/vmImages' `
                    -Location $location `
                    -PropertyObject @{'Label' = 'MyVMImage Label'
                    'Description' = 'MyVMImage Description'
                    'OperatingSystemDisk' = @{'OsState' = 'Specialized'
                                              'Caching' = 'ReadOnly'
                                              'OperatingSystem' = 'Windows'
                                              'VhdUri' = 'https://mystorageaccount.blob.core.windows.net/mycontainer/myvhd.vhd'}
                    }

Conclusions or Wot I Think

The requirement to work with the cmdlets shown in this post should only really be a concern for those who are maintaining “classic” resources as part of an ongoing deployment. It is therefore important to emphasise that you should not use these cmdlets to create resources for new projects. Alongside the additional complexity involved in constructing the New-AzureRmResource cmdlet, there is an abundance of new, updated AzureRM cmdlets at your disposal that enable you to more intuitively create the correct types of resources. The key benefit that these examples provide is the ability to use a single Azure PowerShell module for the management of your entire Azure estate, as opposed to having to switch back and forth between different modules. It is perhaps a testament to how flexible Azure is that cmdlets like New-AzureRmResource exist in the first place, ultimately enabling anybody to fine-tune deployment and maintenance scripts to suit any conceivable situation.