Welcome to post number nine in my series designed to provide a revision tool for Microsoft Exam 70-778 and for those looking to increase their expertise in Power BI. The topics covered so far in the series have primarily involved Power BI Desktop, but we now move away from this as we evaluate how to Manage Custom Reporting Solutions with Power BI. This focus area for the exam measures the following skill areas:

Configure and access Microsoft Power BI Embedded; enable developers to create and edit reports through custom applications; enable developers to embed reports in applications; use the Power BI API to push data into a Power BI dataset; enable developers to create custom visuals.

Despite being, at first glance, a very technically focused area, the exam does not necessarily require you to know how to work with these features in depth. What this post will try to do, however, is fully explain what Power BI Embedded is (and how it can be tailored to your needs), the capabilities and benefits of the Power BI API and, finally, the options available for building custom visualizations that can then be used across any Power BI Report.

Power BI Embedded

A potential limitation of using Power BI as your Business Intelligence solution is that you must access your reporting solution through one of two interfaces, depending on how you have licensed the product:

  • Power BI Online (the cloud-hosted Power BI service), accessed via the browser; or
  • Power BI Report Server, hosted within your own infrastructure.

For strictly organisational access, this is all fine and dandy; but if you want to grant external users access to your reports, you would need to open up a door into a critical component of your IT infrastructure, often alongside any other systems your solution contains. For example, if you have developed a support portal for your customers to submit cases with and wish to provide them with a range of Power BI visualizations, you would need to grant and deploy access to two separate application instances – your support portal and Power BI Online/Report Server. This can lead to a jarring end-user experience and severely limit your capabilities in providing a unified, bespoke and branded solution.

Power BI Embedded seeks to address this scenario by providing application developers the means to embed Power BI reports and visualizations directly within their existing applications. The experience this offers is seamless, to the extent that end-users will not even need to know that you are using Power BI at all. Consequently, this potentially allows you to look exceptionally gifted when you begin to deploy engaging and highly interactive visualizations into your application quickly. Power BI Embedded is an Azure-based service with a pricing mechanism to suit: you only need to estimate your required compute capacity, deploy that capacity along with any corresponding reports, and then build the appropriate link to your Power BI content within your application.

To get started with using Power BI Embedded, you need to make sure you have the following at your disposal:

  • An Azure subscription, on which the Power BI Embedded capacity is provisioned and billed.
  • A Power BI Pro license for the account(s) that will author and publish the report content.
  • Your Power BI content published to a workspace on the Power BI service.
  • A bespoke application (and a developer!) into which the content will be embedded.

To get a feel for the capabilities on offer, you can go to the Power BI Embedded Playground, made available courtesy of Microsoft. This tool allows you to test how the various Power BI Embedded components render themselves, tweak their appearance and generate working code samples that are usable within your application. The screenshot below shows an example of how a single Report visual would look when embedded into a bespoke application:

As the screenshot indicates, there is no loss of core functionality when consuming Power BI in this manner. You can hover over individual areas to gain insights into your data; you can drill down through the data; data is sortable in the conventional manner; and, finally, you can view the underlying data behind the visualization or even export it out into an Excel/CSV document. You also have extensive options available to modify how a visual, report etc. is rendered on your application page, allowing you to ensure everything renders optimally for your application.
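
Behind the scenes, the samples that the Playground generates rely on two pieces of information from the Power BI REST API: the report’s embed URL and an embed token that authorises your application to render it. The snippet below is a minimal PowerShell sketch of retrieving both – the workspace and report GUIDs are placeholders, and $authHeader is assumed to be an Authorization header built from an Azure Active Directory access token, as covered in the next section:

#Placeholder workspace (group) and report identifiers - substitute your own
$groupId  = "00000000-0000-0000-0000-000000000000"
$reportId = "11111111-1111-1111-1111-111111111111"

#Retrieve the report metadata, which includes the embedUrl consumed by the client-side embedding code
$report = Invoke-RestMethod -Uri "https://api.powerbi.com/v1.0/myorg/groups/$groupId/reports/$reportId" -Headers $authHeader -Method GET

#Generate an embed token that grants view-only access to the report
$tokenBody = @{accessLevel = 'View'} | ConvertTo-Json
$embedToken = Invoke-RestMethod -Uri "https://api.powerbi.com/v1.0/myorg/groups/$groupId/reports/$reportId/GenerateToken" -Headers $authHeader -Method POST -Body $tokenBody -ContentType 'application/json'

#Both values are then handed to the client-side embedding code within your application
$report.embedUrl
$embedToken.token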

All in all, Power BI Embedded represents a significant boon for application developers, enabling them to very quickly leverage the extensive reporting capabilities Power BI provides, all of which is cloud-based, highly scalable and minutely tailorable. It is important to highlight that all of this goodness comes with a potentially high cost – namely, that of requiring a sufficiently proficient application developer (preferably .NET focused) to join all of the various bits together. But, if you are already in the position where you have developed an extensive range of Power BI reports for consumption by your customer base, Power BI Embedded is the natural progression point in turning your solution into a real piece of intellectual property.

The Power BI API

If you are finding your feet with Power BI Embedded and need to carry out more complex actions against Power BI content pulled through from a workspace, then the API is an area you will need to gain familiarity with too. Microsoft exposes a whole range of functionality as part of the Power BI API that can assist with a wide variety of tasks – such as automation, deployment and allowing a bespoke application to get further benefit out of a Power BI Embedded solution. Some examples of the types of things you can do with the API include:

  • Pushing rows of data into a Power BI dataset.
  • Deploying or cloning Reports and Dashboards across workspaces.
  • Managing data sources – for example, updating connection details or credentials.
  • Generating embed tokens for use within a Power BI Embedded application.

The API requires you to authenticate against the Power BI service, using a corresponding Application Registration in Azure Active Directory, which defines the list of privileges that can be carried out. This component can be straightforwardly created using the wizard provided by Microsoft, and a full tutorial is also available on how to generate an access token from your newly created Application Registration. The key thing throughout all of this is to ensure that your Application Registration is scoped to only the permissions you require (these can be changed in future if needed), rather than granting all permissions needlessly.
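
Purely as an illustration (the exact flow will depend on how your Application Registration is configured), the snippet below sketches out how an access token could be requested from Azure Active Directory using the resource owner password credential grant and then packaged into the $authHeader variable used by the examples that follow. The client ID, account name and password are all placeholder values:

#Placeholder values - replace with your Application Registration and Power BI service account details
$clientId = "22222222-2222-2222-2222-222222222222"
$pbiUser  = "serviceaccount@mytenant.onmicrosoft.com"
$pbiPass  = "MySecurePassword"

#Request a token for the Power BI service using the password credential grant
$tokenBody = @{
    grant_type = 'password'
    resource   = 'https://analysis.windows.net/powerbi/api'
    client_id  = $clientId
    username   = $pbiUser
    password   = $pbiPass
}
$tokenResponse = Invoke-RestMethod -Uri 'https://login.microsoftonline.com/common/oauth2/token' -Method POST -Body $tokenBody

#Build the Authorization header that subsequent API calls will use
$authHeader = @{'Authorization' = "Bearer $($tokenResponse.access_token)"}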

Because the API is a REST endpoint, there is a wide variety of programming languages and tools you can use to interact with it. PowerShell is an especially good candidate and, in the snippet below, you can see how it can be used to modify the username and password for a SQL Server data source deployed to Power BI Online:

#Credentials to apply to the data source
$sqlUser = "MyUserName"
$sqlPass = "P@ssw0rd!"

#The gateway and data source IDs form part of the request URI
$patchURI = "https://api.powerbi.com/v1.0/myorg/gateways/cfafbeb1-8037-4d0c-896e-a46fb27ff229/datasources/1e8176ec-b01c-4a34-afad-e001ce1a28dd/"

#Build the request body containing the updated Basic credentials
$body = @{
    credentialType   = 'Basic'
    basicCredentials = @{
        username = $sqlUser
        password = $sqlPass
    }
}
$bodyJSON = $body | ConvertTo-Json

#Make the request to patch the Data Source with the updated credentials
$response = Invoke-RestMethod -Uri $patchURI -Headers $authHeader -Method PATCH -Body $bodyJSON -ContentType 'application/json' -Verbose

This example deliberately excludes some of the previous steps needed to authenticate with Power BI and is, therefore, provided for strictly illustrative purposes only.
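
Another everyday use of the API, and one called out explicitly in the exam skill list, is pushing rows of data into a push dataset. Again, this is strictly an illustrative sketch: the dataset ID and table name below are placeholders, the dataset is assumed to already exist with a matching table schema, and $authHeader is the same header used above:

#Placeholder dataset identifier and table name
$datasetId = "33333333-3333-3333-3333-333333333333"
$tableName = "SalesFigures"

#Rows are supplied as an array of objects matching the table schema
$rows = @{
    rows = @(
        @{Region = 'UK South'; SalesTotal = 1024.50},
        @{Region = 'UK North'; SalesTotal = 2048.75}
    )
} | ConvertTo-Json -Depth 3

#POST the rows into the push dataset
Invoke-RestMethod -Uri "https://api.powerbi.com/v1.0/myorg/datasets/$datasetId/tables/$tableName/rows" -Headers $authHeader -Method POST -Body $rows -ContentType 'application/json'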

Creating Custom Visuals

Developers have primarily two options when it comes to building out bespoke visualizations, which can then be included in a Power BI Online, Embedded or Report Server report:

  • Developing the visual with the open-source custom visuals SDK, a Node.js-based toolset driven from the command line.
  • Building the visual using the R language, typically for more analytical or statistical visualization types.

Last week’s post discussed this topic in more detail from an exam standpoint which, in a nutshell, only requires you to have a general awareness of the options available here; no need to start extensively learning a new programming language, unless you really want to 🙂
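
For those curious about what the SDK route involves, the typical starting point is the powerbi-visuals-tools package, installed and driven from the command line. The outline below is a rough sketch of the workflow, run from a PowerShell prompt with Node.js already installed; the visual name is, of course, a placeholder:

#Install the Power BI custom visuals tooling globally via npm (requires Node.js)
npm install -g powerbi-visuals-tools

#Scaffold a new custom visual project
pbiviz new MyCustomVisual

#Run the visual locally for testing against the Power BI service (developer mode must be enabled on your account)
cd .\MyCustomVisual
pbiviz start

#Package the visual into a .pbiviz file, ready to import into a report or submit to AppSource
pbiviz package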

Key Takeaways

  • Power BI Embedded is an Azure-hosted offering that allows you to add Power BI Report content into bespoke applications. This deployment option can be incredibly useful if you wish to make your Power BI solution available to users outside of your organisation or if you have an existing, bespoke application that can benefit from utilising Power BI content. An Azure subscription is required to begin working with Power BI Embedded, and you are billed based on node size, not individual user licenses. All Power BI content requires publishing to the online service before it becomes available for Power BI Embedded to access; report developers will, therefore, need to be granted a Power BI Pro license to carry out these activities.
  • The Power BI API gives developers the ability to perform automation or administrative actions programmatically against the Power BI Online service. Because it is a REST API, developers can choose their preferred programming language to interact with it, allowing them to streamline the deployment of Reports or Dashboards to the Power BI service or leverage additional functionality when utilising Power BI Embedded. The API can also cater for specific data load requirements, although more complex needs in this area would require addressing via alternate means (SSIS, Azure Data Factory etc.)
  • Developers can add their own bespoke visualizations to a Power BI Report by either developing them using the Node.js-based custom visuals SDK or by using the R language. The first of these options facilitates a more streamlined deployment mechanism and allows developers to add their visualizations to AppSource, whereas the second may be more useful for complex visualization types with an analytical or statistical function.

Compared to the other exam topics, a general awareness of these concepts is more than likely sufficient from a learning perspective and is (arguably) useful knowledge in any event, as it allows you to understand how developers can further extend a Power BI solution to suit a particular business need. In next week’s post, we will move into the final subject area for the exam, as the focus shifts towards how to work with Power BI outside of the Desktop application and the various tools available to integrate on-premises data sources into Power BI Online.

Towards the back end of last year, I discovered the joys and simplicity of Visual Studio Team Services (VSTS)/Azure DevOps. Regardless of the type of development workload you face, the service provides a whole range of features that can speed up development, automate important build/release tasks and assist with any testing workloads you may have. Microsoft has devoted a lot of attention to ensuring that their traditional development tools/languages, as well as newer ones such as Azure Data Factory V2, are fully integrated with VSTS/Azure DevOps. And, even if you find yourself firmly outside the Microsoft ecosystem, there is a whole range of different connectors to enable you to, for example, leverage Jenkins automation tasks for an Amazon Web Services (AWS) deployment. As with a lot of things to do with Microsoft today, you could not have predicted such extensive third-party support for a core Microsoft application 10 years ago. 🙂

Automated release deployments are perhaps the most useful feature that you can leverage as part of VSTS/Azure DevOps. They address several business concerns that apply to organisations of any size:

  • Removes human intervention as part of repeatable business processes, reducing the risk of errors and streamlining internal processes.
  • Allows for clearly defined, auditable approval cycles for release approvals.
  • Provides developers with the flexibility for structuring deployments based on any predefined number of release environments.

You may be questioning at this stage just how complicated implementing such processes is, versus the expected benefits they can deliver. Fortunately, when it comes to Azure App Service deployments at least, there are predefined templates provided that should be suitable for most basic deployment scenarios:

This template will implement a single task to deploy to an Azure App Service resource. All you need to do is populate the required details for your subscription, App Service name etc. and you are ready to go! Optionally, if you are working with a Standard App Service Plan or above, you can take advantage of the slot functionality to stage your deployments before impacting any live instance. You can set this up by specifying the name of the slot you wish to deploy to as part of the Azure App Service Deploy task and then adding an Azure App Service Manage task to carry out the slot swap – with no coding required at any stage:

There may also be some additional options that need configuring as part of the Azure App Service Deploy task:

  • Publish using Web Deploy: I would recommend always leaving this enabled when deploying to Azure Web Apps, given the functionality afforded to us via the Kudu Web API.
  • Remove additional files at destination: Potentially not a safe option should you regularly need to interact with your App Service folders manually; in most cases, though, leaving this enabled ensures that your target environment remains consistent with your source project.
  • Exclude files from the App_Data folder: Depending on what your web application is doing, it’s possible that some required data that assists your website will exist in this folder. Enabling this prevents those files from being removed in tandem with the previous setting.
  • Take App Offline: Slightly misleading in that, instead of stopping your App Service, it places a temporary file in the root directory that tells all website visitors the application is offline. This temporary page uses one of the default templates that Azure provides for App Service error messages.

This last setting, in particular, can cause some problems when it comes to working with .NET Core web applications:

 

Note also that the problem does not occur when working with a standard .NET Framework MVC application, which suggests that it is a symptom specific to .NET Core. It also occurs when utilising slot deployments, as indicated above.

To get around this problem, we must address the issue that the Error Code points us toward – namely, that the web application files within the target location cannot be accessed or overwritten, because they are actively in use by the application in question. The cleanest way of fixing this is to take your application entirely offline by stopping the App Service instance, with the obvious trade-off being downtime for your application. This scenario is where the slot deployment functionality goes from being an optional, yet useful, feature to a wholly mandatory one. With this enabled (and the matching credit card limit to accommodate it), it is possible to implement the following deployment process:

  • Create a staging slot on Azure for your App Service, with a name of your choosing
  • Structure a deployment task that carries out the following steps, in order:
    • Stop the staging slot on your chosen App Service instance.
    • Deploy your web application to the staging slot.
    • Start the staging slot on your chosen App Service instance.
    • (Optionally) Execute any required steps/logic that is necessary for the application (e.g. compile MVC application views, execute a WebJob etc.)
    • Perform a Swap Slot, promoting the staging slot to your live, production instance.

The below screenshot, derived from VSTS/Azure DevOps, shows how this task list should be structured:

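If you would rather script the stop, start and swap steps yourself from a single Azure PowerShell task, rather than using the built-in Azure App Service Manage tasks, a rough sketch using the AzureRM cmdlets of the era and placeholder resource names would look something like the below (the deployment to the slot itself would still be handled by the Azure App Service Deploy task):

#Placeholder values for the App Service and its staging slot
$rgName   = "MyResourceGroup"
$appName  = "MyWebApp"
$slotName = "staging"

#Stop the staging slot before deployment, so that files are not locked by the running application
Stop-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -Slot $slotName

#...the Azure App Service Deploy task publishes the application to the staging slot at this point...

#Start the staging slot again once the deployment has completed
Start-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -Slot $slotName

#Finally, swap the staging slot with the production instance
Switch-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -SourceSlotName $slotName -DestinationSlotName "production"
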
Even with this in place, I would still recommend that you look at taking your web application “offline” in the minimal sense of the word. The Take App Offline option is by far the easiest way of achieving this; for more tailored options, you would need to look at a specific landing page that redirects users during any update/maintenance cycle and provides them with the necessary notification.

Setting up your first deployment pipeline(s) can throw up all sorts of issues that lead you down the rabbit hole. While this can be discouraging and lead to wasted time, the end result, after some perseverance, is a highly scalable solution that can avoid many sleepless nights when it comes to managing application updates. And, as this example so neatly demonstrates, solutions to these problems often do not require a detailed coding route to implement – merely some drag & drop wizardry and some fiddling about with deployment task settings. I don’t know about you, but I am pretty happy with that state of affairs. 🙂

The introduction of Azure Data Factory V2 represents the most opportune moment for data integration specialists to start investing their time in the product. Version 1 (V1) of the tool, which I started taking a look at last year, missed a lot of critical functionality that – in most typical circumstances – I could achieve in a matter of minutes via a SQL Server Integration Services (SSIS) DTSX package. The product had, to quote some specific examples:

  • No support for control flow logic (foreach loops, if statements etc.)
  • Support for only “basic” data movement activities, with a minimal capability to perform or call data transformation activities from within SQL Server.
  • Some support for the deployment of DTSX packages, but with incredibly complex deployment options.
  • Little or no support for Integrated Development Environments (IDEs), such as Visual Studio, or other typical DevOps scenarios.

In what seems like a short space of time, the product has come on leaps and bounds to address these limitations:

Supported now are Filter and Until conditions, alongside the expected ForEach and If conditionals.

When connecting to SQL Data destinations, we now have the ability to execute pre-copy scripts.

SSIS Integration Runtimes can now be set up from within the Azure Data Factory V2 GUI – no need to revert to PowerShell.

And finally, there is full support for storing all created resources within GitHub or Visual Studio Team Services/Azure DevOps.

The final one is a particularly nice touch, and means that you can straightforwardly incorporate Azure Data Factory V2 as part of your DevOps strategy with minimal effort – an ARM Resource Template deployment, containing all of your data factory components, will get your resources deployed out to new environments with ease. What’s even better is that this deployment template is intelligent enough not to recreate existing resources and only update Data Factory resources that have changed. Very nice.
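
As a rough illustration of what that deployment step looks like in practice, the snippet below uses placeholder names throughout; the template and parameter files are the ones generated when you publish the factory from the Git-integrated authoring experience (typically named ARMTemplateForFactory.json and ARMTemplateParametersForFactory.json):

#Placeholder resource group name and template file paths
$rgName = "MyResourceGroup"

#Deploy the published Data Factory ARM template incrementally, so existing resources are updated rather than recreated
New-AzureRmResourceGroupDeployment -Name "DataFactoryDeployment" -ResourceGroupName $rgName -TemplateFile ".\ARMTemplateForFactory.json" -TemplateParameterFile ".\ARMTemplateParametersForFactory.json" -Mode Incremental -Verbose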

Although a lot is provided for by Azure Data Factory V2 to assist with a typical DevOps cycle, there is one thing that the tool does not account for satisfactorily.

A critical aspect as part of any Azure Data Factory V2 deployment is the implementation of Triggers. These define the circumstances under which your pipelines will execute, typically either via an external event or based on a pre-defined schedule. Once activated, they effectively enter a “read-only” state, meaning that any changes made to them via a Resource Group Deployment will be blocked and the deployment will fail – as we can see below when running the New-AzureRmResourceGroupDeployment cmdlet directly from PowerShell:

It’s nice that the error is provided in JSON, as this can help to facilitate any error handling within your scripts.

The solution is simple – stop the Trigger programmatically as part of your DevOps execution cycle via the handy Stop-AzureRmDataFactoryV2Trigger cmdlet, a single line of PowerShell that is callable from an Azure PowerShell task. But what happens if you are deploying your Azure Data Factory V2 template for the first time?

I’m sorry, but your Trigger is another castle.

The best (and only) resolution to this little niggle is to construct a script that checks whether the Trigger exists before attempting to stop it, skipping the step if it does not yet exist. The following parameterised PowerShell script file achieves this by attempting to stop the Trigger called ‘MyDataFactoryTrigger’:

param($rgName, $dfName)

Try
{
    Write-Host "Attempting to stop MyDataFactoryTrigger Data Factory Trigger..."
    #Confirm the Trigger exists before attempting to stop it; -ErrorAction Stop pushes any failure into the Catch block
    Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -ErrorAction Stop
    Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -Force
    Write-Host -ForegroundColor Green "Trigger stopped successfully!"
}
Catch
{
    $errorMessage = $_.Exception.Message
    if ($errorMessage -like '*NotFound*')
    {
        Write-Host -ForegroundColor Yellow "Data Factory Trigger does not exist, probably because the script is being executed for the first time. Skipping..."
    }
    else
    {
        throw "An error occurred whilst retrieving the MyDataFactoryTrigger trigger."
    }
}

Write-Host "Script has finished executing."

To use the script successfully within Azure DevOps, be sure to provide values for the parameters in the Script Arguments field:

You can use pipeline Variables within arguments, which is useful if you reference the same value multiple times across your tasks.
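
For example, the Script Arguments value might look something like the below, with the variable names themselves being entirely hypothetical and dependent on what you have defined against your pipeline:

#Hypothetical Script Arguments value, referencing two pipeline Variables
-rgName "$(ResourceGroupName)" -dfName "$(DataFactoryName)"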

With some nifty copy + paste action, you can accommodate the stopping of multiple Triggers as well – although if you have more than 3-4, then it may be more sensible to perform some iteration involving an array containing all of your Triggers, passed at runtime.
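
A rough sketch of that iterative approach, passing the Trigger names in as an additional comma-separated parameter, might look like the following:

param($rgName, $dfName, $triggerNames)

#Split the comma-separated list of Trigger names supplied at runtime and process each in turn
foreach ($triggerName in $triggerNames -split ',')
{
    Try
    {
        Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName $triggerName -ErrorAction Stop
        Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName $triggerName -Force
        Write-Host -ForegroundColor Green "Trigger $triggerName stopped successfully!"
    }
    Catch
    {
        if ($_.Exception.Message -like '*NotFound*')
        {
            Write-Host -ForegroundColor Yellow "Trigger $triggerName does not exist yet. Skipping..."
        }
        else
        {
            throw "An error occurred whilst stopping the $triggerName trigger."
        }
    }
}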

For completeness, you will also want to ensure that you restart the affected Triggers after any ARM Template deployment. The following PowerShell script will achieve this outcome:

param($rgName, $dfName)

Try
{
    Write-Host "Attempting to start MyDataFactoryTrigger Data Factory Trigger..."
    Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -Force -ErrorAction Stop
    Write-Host -ForegroundColor Green "Trigger started successfully!"
}
Catch
{
    throw "An error occurred whilst starting the MyDataFactoryTrigger trigger."
}

Write-Host "Script has finished executing."

The Azure Data Factory V2 offering has no doubt come leaps and bounds in a short space of time…

…but you can’t shake the feeling that there is a lot that still needs to be done. The current release, granted, feels very stable and production-ready, but I think there is a whole range of enhancements that could be introduced to allow better feature parity when compared with SSIS DTSX packages. With this in place, and when taking into account the very significant cost differences between both offerings, I think it would make Azure Data Factory V2 a no-brainer option for almost every data integration scenario. The future looks very bright indeed 🙂

The very nature of how businesses and organisations operate means that the sheer volume of sensitive or confidential data that accumulates over time presents a genuine challenge from a management and security point of view. Tools and applications like cloud storage, email and other information storage services can do great things; but on occasions where these are abused, such as when an employee emails a list of business contacts to a personal email address, the penalties can extend well beyond a loss of reputation. Even more so with the introduction of GDPR earlier this year, there is now a clear and present danger that such actions could result in unwelcome scrutiny and, for larger organisations, a fine for simply not putting the appropriate technical safeguards in place. Being able to proactively – and straightforwardly – identify and classify information types and enforce some level of control over their dissemination, while not a silver bullet in any respect, does at least demonstrate an adherence to the “appropriate technical controls” principle that GDPR in particular likes to focus on.

Azure Information Protection (AIP) seeks to address these challenges in the modern era by providing system administrators with a toolbox to enforce good internal governance and controls over documents, based on a clear classification system. The solution integrates nicely with Azure and also with any on-premises environment, meaning that you don’t necessarily have to migrate your existing workloads into Office 365 to take full advantage of the service. It also offers:

  • Users the ability to track any sent document(s), find out where (and when) they have been read and revoke access at any time.
  • Full integration with the current range of Office 365 desktop clients.
  • The capability to protect non-Office documents, such as PDFs, by requiring users to open them via a dedicated client, which checks their relevant permissions before granting access.
  • Automation capabilities via PowerShell to bulk label existing document stores, based on parameters such as file names or the contents of the data (for example, marking as highly sensitive any document which contains a National Insurance Number) – a brief sketch of this follows below.
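
The sketch below assumes the AzureInformationProtection PowerShell module (installed alongside the AIP client) is available, and uses placeholder values for the file share and label ID:

#Placeholder file share and label identifier - substitute your own values
$targetPath = "\\MyFileServer\Finance"
$labelId    = "44444444-4444-4444-4444-444444444444"

#Review the current classification/protection status of the documents in the share
Get-AIPFileStatus -Path $targetPath

#Apply the chosen label to every file underneath the share
Get-ChildItem -Path $targetPath -File -Recurse | ForEach-Object {Set-AIPFileLabel -Path $_.FullName -LabelId $labelId}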

Overall, I have found AIP to be a sensible and highly intuitive solution, but one that requires careful planning and configuration to realise its benefits fully. Even with this taken for granted, there is no reason why any business/organisation cannot utilise the service successfully.

However, if you are a small to medium size organisation, you may find that the Azure Information Protection offering has traditionally lacked some critical functionality.

You can see what I mean by performing a direct comparison between two flavours of Office 365 deployment – Office Business Premium (“BizPrem”) and Office Professional Plus (“ProPlus”). For those who are unaware of the differences:

  • Office Business Premium is the version of the Office apps that you can download with a…you guessed it…Office Business Premium subscription. This product/SKU represents the optimal choice if you are dealing with a deployment that contains fewer than 250 users and you want to fully utilise the whole range of features included in Office 365.
  • Office Professional Plus is the edition bundled with the following, generally more expensive subscriptions:
    • Office 365 Education A1*
    • Office 365 Enterprise E3
    • Office 365 Government G3
    • Office 365 Enterprise E4
    • Office 365 Education A4*
    • Office 365 Government G4
    • Office 365 Enterprise E5
    • Office 365 Education A5*

    For the extra price, you get a range of additional features that may be useful for large-scale IT deployments. This includes, but is not limited to, Shared Computer Activation, support for virtualised environments, group policy support and – as has traditionally been the case – an enhanced experience while using AIP.

* In fact, these subscriptions will generally be the cheapest going on Office 365, but with the very notable caveat being that you have to be a qualifying education institute to purchase them. So no cheating I’m afraid 🙂

The salient point is that both of these Office versions support the AIP Client, the desktop application that provides the following additional button within your Office applications:

The Protect button will appear on the ribbon of Word, Excel and Outlook.

The above example, taken from an Office Business Premium deployment, differs when compared to Office Professional Plus:

Have you spotted it yet?

As mentioned in the introduction, the ability for users to explicitly set permissions on a per-document basis can be incredibly useful but has been missing entirely from non-ProPlus subscriptions. This limitation means that users have lacked the ability to:

  • Specify (and override) the access permissions for the document – Viewer, Reviewer, Co-Author etc.
  • Assign permissions to individual users, domains or distribution/security groups.
  • Define a specified date when all access permissions will expire.

You will still be able to define organisation-level policies that determine how documents can be shared, based on a user-defined label, but you lose a high degree of personal autonomy that the solution can offer users, which – arguably – can be an essential factor in ensuring the success of the AIP deployment.

Well, the good news is that all of this is about to change, thanks to the September 2018 General Availability wave for Azure Information Protection.

This “by design” behaviour has, understandably, been a source of frustration for many, but, thanks to a UserVoice suggestion, is now no longer going to be a concern:

In the coming AIP September GA we will update the Office client requirement with the following:

“Office 365 with Office 2016 apps (minimum version 1805, build 9330.2078) when the user is assigned a license for Azure Rights Management (also known as Azure Information Protection for Office 365)”

This will allow the support of the AIP client to use protection labels in other Office subscriptions which are not ProPlus. This will require the use of Office clients which are newer then the version mentioned above and the end user should be assigned with the proper licence.

The introduction of this change was confirmed by the release of version 1.37.19.0 of the AIP client on Monday this week and is a very welcome addition to the client. Be aware, though, of the requirement to be using the May 2018 build of the Office 2016 apps to take advantage of this new functionality. Once overcome, this change suddenly makes the AIP offering a lot more powerful for existing small business users and a much easier sell for those who are contemplating adopting the product but cannot tolerate the cost burden associated with an Enterprise subscription. Microsoft is continually endeavouring to ensure a consistent “feedback loop” is provided for all users and customers to offer product improvement suggestions, and it is great to see this working in practice with AIP as our example. Now’s as good a time as any to evaluate AIP if you haven’t before.

Is it just me or is British Summer Time (BST), AKA Daylight Saving Time (DST), an utterly pointless endeavour these days? Admittedly, on its introduction in 1916, it fulfilled a sensible objective – to provide more daylight hours during the summer. For agricultural, construction or other services reliant on sufficient light to carry out their work, this was a godsend. In today’s modern world, with the proliferation of electricity and lighting services in almost every corner of the UK, the whole concept now appears to be a curious relic of the western world. No major Asian, African or South American country adopts the practice and, given the increased importance that these continents now play on the global stage, it wouldn’t surprise me if the whole concept becomes consigned to the scrapheap within our lifetimes.

My fury concerning BST surfaces thanks to my experience working with IT systems and, in particular, Microsoft Azure. Typically, any service that has a Windows OS backend involved will do a pretty good job in determining the correct locale settings that need applying, including BST/DST. These settings will generally inherit into most applications installed on the machine, including SQL Server. You can verify this at any time by running the following query, kindly demonstrated by SQL Server legend Pinal Dave, on your SQL Server instance:

As you can see from the underlying query, it is explicitly checking a Registry Key Value on the Windows Server where SQL Server resides – which has been set up for British (UK) locale settings. The Registry Key folder will, additionally, include information to tell the machine when to factor in BST/DST.

This is all well and good if we are managing dedicated, on-premises instances of SQL Server, where we have full control over our server environments. But what happens with a Single Azure SQL database within the UK South region? The above code snippet is not compatible with Azure SQL, so we have no way of finding this out ourselves. We must turn to the following blog post from Microsoft to clarify things for us:

Currently, the default time zone on Azure SQL DB is UTC. Unfortunately, there is not possible to change by server configuration or database configuration.

We can verify this by running a quick GETDATE() query and comparing it against the current time during BST:

(@@VERSION returns the current edition/version of the SQL instance which, in this case, we can confirm is Azure SQL)

The result of all of this is that all date/time values in Azure SQL will be stored in UTC format, meaning that you will have to manage any conversions yourself between interfacing applications or remote instances. Fortunately, there is a way that you can resolve this issue without ever leaving Azure SQL.

On all versions of SQL Server, Microsoft provides a system view that returns every time zone supported by the instance. Using this view, we can determine the correct time zone name to use for BST:

SELECT *
FROM sys.time_zone_info
ORDER BY [name]

As highlighted above, GMT Standard Time is our time zone of choice for BST, and we can see the correct offset. An additional flag field is also included, indicating whether DST is currently in effect for each time zone.

With the name value in hand, we have everything we need to start working with a T-SQL clause that I was overjoyed to discover recently – AT TIME ZONE. When included as part of selecting a date field type (datetime, datetime2 etc.), it appends the time zone offset to the end of the date value. So, with some tinkering to our earlier GETDATE() query, we get the following output:

SELECT GETDATE() AT TIME ZONE 'GMT Standard Time', @@VERSION

While this is useful, in most cases, we would want any offset to be automatically applied against our Date/Time value. With some further refinement to the query via the DATEADD function, this requirement becomes achievable, and we can also view each value separately to verify everything is working as intended:

SELECT GETDATE() AS CurrentUTCDateTime,
       GETDATE() AT TIME ZONE 'GMT Standard Time' AS CurrentUTCDateTimeWithGMTOffset,
       DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'GMT Standard Time'), GETDATE()) AS CurrentGMTDateTime,
       @@VERSION AS SQLVersion

Even better, the above works regardless of whether the offset is ahead of or behind UTC. We can verify this fact by adjusting the query to convert into Haiti Standard Time instead which, at the time of writing, is observing DST and sits four hours behind UTC:

So, as we can see, a few lines of code mean that we can straightforwardly work with data in our desired locale. 🙂
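
Alternatively, if an interfacing application needs to perform the conversion itself, the .NET TimeZoneInfo class (usable from PowerShell or any other .NET language) handles the offset and the BST switchover for you. A quick, illustrative sketch:

#Take a UTC value, as it would be returned from Azure SQL
$utcValue = [datetime]::UtcNow

#Convert it into UK local time, honouring BST automatically where applicable
$ukTimeZone = [System.TimeZoneInfo]::FindSystemTimeZoneById('GMT Standard Time')
$ukValue    = [System.TimeZoneInfo]::ConvertTimeFromUtc($utcValue, $ukTimeZone)

"UTC value: $utcValue"
"UK value:  $ukValue"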

It does seem somewhat counter-intuitive that a service such as Azure SQL, hosted within a specific location, does not work in the correct date/time format for that region. When you consider the global scale of the Azure network and the challenges this brings to the fore, the decision to standardise on a single time zone for all systems does make sense and provides a level and consistent playing field for developers. One comment I would have is that this particular quirk does not appear to be well signposted for those who are just getting started with the service, an omission that could cause severe business or technical problems in the long term if not correctly detected. What is ultimately most fortuitous is the simple fact that no overly complex coding or client application changes are required to fix this quirk. Which is how all IT matters should be – easy and straightforward to resolve.