Earlier this month, a colleague escalated an issue to me involving Dynamics CRM/365 Customer Engagement (CRM/D365CE), specifically relating to email tracking. This feature is one of the most useful within the application – and one of the most unwieldy, if not configured correctly. In days of yore, the setup steps involved could be tedious to implement, particularly if you were operating within the confines of a hybrid environment (for example, CRM 2015 on-premises and Exchange Online). Or, you could have been one of the handful of unfortunate individuals who had to rely on the abomination that is the Email Router. We can be thankful today that Server-Side Synchronization is the sole method for pulling in emails from any manner of SMTP or POP3 mail servers; note, though, that only Exchange mailboxes support Appointment, Contact & Task synchronisation. Lucky though we are to be living in more enlightened times, careful attention to and management of Server-Side Synchronization deployments is still an ongoing requirement. This is primarily to ensure all mailboxes operate as intended and – most critically – that only the most relevant emails are tagged back into the application, rather than a deluge of unrelated correspondence.

Going back to the issue mentioned at the start of this post – the user in question was having a problem with certain emails not synchronising automatically back into the application, even though those emails had a corresponding Contact record within CRM/D365CE. We were also able to observe that other emails sent from the user to the Contact record(s) in question were being tagged back without issue. When first diagnosing problems like this, you can be forgiven for making an immediate beeline to the user’s Mailbox record within the application to verify that:

  • The Mailbox is enabled for Server-Side Synchronization for Incoming/Outgoing Email.
  • No processing errors are occurring that could be preventing emails from being successfully handled by the application.

These options can be accessed from the System Settings area of the application, on the Email tab, and define the default settings for all newly created users.

Likewise, these details are accessible from the Mailbox record for the user concerned.

Although these settings are unlikely, more often than not, to be the cause of any mail flow issues, it is worthwhile not to overcomplicate a technical issue at the first juncture by overlooking anything obvious. 🙂

As we can see in this example, there are no problems with the over-arching Server-Side Synchronization configuration, nor are there any problems with the individual mailbox. It is at this point that we must refer to a screen that all users in the application can access via the gear icon at the top of the application – the User Options screen:

The Track setting allows users to specify how CRM/D365CE handles automatic email tracking, based on four options:

  • All Email Messages: Does exactly what it says on the tin, and is not recommended as a default setting, for the reasons I alluded to earlier.
  • Email messages in response to Dynamics 365 Email: Only emails sent from within Dynamics 365 (or tracked accordingly via Outlook) will be stored in the application, alongside any replies that are received.
  • Email messages from Dynamics 365 Leads, Contacts and Accounts: Only emails which match back to the record types listed, based on email address, will be stored within the application.
  • Email messages from Dynamics 365 records that are email enabled: The same as the previous option, but expanded out to include all record types that are configured with the Sending email… option on the Entity configuration page.

For the user who was having email tracking issues, the default setting specified was Email messages in response to Dynamics 365 Email. So, to resolve the issue, the user needs to update their settings to either the third or fourth option.

Situations that involve detailed, technical configuration by end-users are generally ones that I like to avoid – for a few simple, business-relevant reasons:

  • IT/Technical teams should be the ones making configuration changes to applications, not end users who have not had training or experience on the steps they are being asked to follow.
  • End-users are busy, and it is always essential that we are conscious of their time, making any interaction short and positive as opposed to long and arduous.
  • If the above instructions are relayed over the telephone, as opposed to in-person, then the propensity for mistakes to occur rises significantly.

However, from what we have seen so far, it will be necessary to access the application as the user to make the change – either by taking control of their session or by (perish the thought) relaying user credentials to enable someone in IT support to make the configuration change. Don’t EVER do the latter, by the way! Fortunately, there is a better way of updating user profile settings, using a tool whose importance has been highlighted in no uncertain terms previously on the blog – the XrmToolBox. One of the handiest out-of-the-box tools that this provides is the User Settings Utility which…well…see for yourself:

As a consequence, application administrators can “magically” modify any of the settings contained within the User Options page, including – as we can see below – the Track email messages setting:

With a few clicks, the appropriate changes can be applied not just to a single user, but to everyone within the application – avoiding any potential end-user confusion and making our jobs easier. This simple fact is another reason why you should immediately launch the XrmToolBox whenever you find yourself with a CRM/D365CE issue that stumps you, and further proof that the community tools available for the application are top-notch.

The introduction of Azure Data Factory V2 represents the most opportune moment for data integration specialists to start investing their time in the product. Version 1 (V1) of the tool, which I started taking a look at last year, lacked a lot of critical functionality that – in most typical circumstances – I could implement in a matter of minutes via a SQL Server Integration Services (SSIS) DTSX package. To quote some specific examples, the product had:

  • No support for control flow logic (ForEach loops, If statements etc.)
  • Support for only “basic” data movement activities, with a minimal capability to perform or call data transformation activities from within SQL Server.
  • Some support for the deployment of DTSX packages, but with incredibly complex deployment options.
  • Little or no support for Integrated Development Environments (IDEs), such as Visual Studio, or for other typical DevOps scenarios.

In what seems like a short space of time, the product has come on in leaps and bounds to address these limitations:

Supported now are Filter and Until conditions, alongside the expected ForEach and If conditionals.

When connecting to SQL Data destinations, we now have the ability to execute pre-copy scripts.

SSIS Integration Runtimes can now be set up from within the Azure Data Factory V2 GUI – no need to revert to PowerShell.

And finally, there is full support for storing all created resources within GitHub or Visual Studio Team Services (now Azure DevOps).

The final point is a particularly nice touch and means that you can straightforwardly incorporate Azure Data Factory V2 into your DevOps strategy with minimal effort – an ARM Resource Template deployment, containing all of your Data Factory components, will get your resources deployed out to new environments with ease. What’s even better is that this deployment template is intelligent enough not to recreate existing resources and to only update Data Factory resources that have changed. Very nice.

Although a lot is provided for by Azure Data Factory V2 to assist with a typical DevOps cycle, there is one thing that the tool does not account for satisfactorily.

A critical aspect of any Azure Data Factory V2 deployment is the implementation of Triggers. These define the circumstances under which your pipelines will execute, typically either via an external event or based on a pre-defined schedule. Once activated, they effectively enter a “read-only” state, meaning that any changes made to them via a Resource Group Deployment will be blocked and the deployment will fail – as we can see below when running the New-AzureRmResourceGroupDeployment cmdlet directly from PowerShell:
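
For illustration, an invocation along the following lines is enough to reproduce the failure – the resource group and file names here are placeholders, not values from the original deployment:

New-AzureRmResourceGroupDeployment -ResourceGroupName 'MyResourceGroup' `
    -TemplateFile '.\MyDataFactoryTemplate.json' `
    -TemplateParameterFile '.\MyDataFactoryTemplate.parameters.json'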

It’s nice that the error is provided in JSON, as this can help to facilitate any error handling within your scripts.

The solution is simple – stop the Trigger programmatically as part of your DevOps execution cycle via the handy Stop-AzureRmDataFactoryV2Trigger cmdlet. This step involves just a single-line PowerShell command that is callable from an Azure PowerShell task. But what happens if you are deploying your Azure Data Factory V2 template for the first time?

I’m sorry, but your Trigger is in another castle.

The best (and only) resolution for this little niggle is to construct a script that checks whether the Trigger exists and skips the stop step if it doesn’t yet exist. The following parameterised PowerShell script file achieves these requirements by attempting to stop a Trigger called ‘MyDataFactoryTrigger’:

param($rgName, $dfName)

Try
{
    Write-Host "Attempting to stop MyDataFactoryTrigger Data Factory Trigger..."
    # Get-AzureRmDataFactoryV2Trigger throws if the Trigger does not exist,
    # which is then handled by the Catch block below
    Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -ErrorAction Stop
    Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -Force
    Write-Host -ForegroundColor Green "Trigger stopped successfully!"
}
Catch
{
    $errorMessage = $_.Exception.Message
    if ($errorMessage -like '*NotFound*')
    {
        Write-Host -ForegroundColor Yellow "Data Factory Trigger does not exist, probably because the script is being executed for the first time. Skipping..."
    }
    else
    {
        throw "An error occurred whilst retrieving the MyDataFactoryTrigger trigger."
    }
}

Write-Host "Script has finished executing."

To use successfully within Azure DevOps, be sure to provide values for the parameters in the Script Arguments field:

You can use pipeline Variables within arguments, which is useful if you reference the same value multiple times across your tasks.
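
As a sketch, and assuming two pipeline Variables named ResourceGroupName and DataFactoryName have been defined, the Script Arguments field would contain something like:

-rgName $(ResourceGroupName) -dfName $(DataFactoryName)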

With some nifty copy + paste action, you can accommodate the stopping of multiple Triggers as well – although if you have more than 3-4, then it may be more sensible to perform some iteration involving an array containing all of your Triggers, passed at runtime, along the lines of the sketch below.
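
As a rough sketch of what that iteration could look like – assuming that Get-AzureRmDataFactoryV2Trigger is used to enumerate every Trigger in the factory (rather than passing an array in), and that the returned objects expose Name and RuntimeState properties – the script might read as follows:

param($rgName, $dfName)

# Enumerate every Trigger in the Data Factory and stop any that are running,
# avoiding the need to hard-code individual Trigger names
$triggers = Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName

foreach ($trigger in $triggers)
{
    if ($trigger.RuntimeState -eq 'Started')
    {
        Write-Host "Stopping Trigger $($trigger.Name)..."
        Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName $trigger.Name -Force
    }
    else
    {
        Write-Host "Trigger $($trigger.Name) is not started. Skipping..."
    }
}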

For completeness, you will also want to ensure that you restart the affected Triggers after any ARM Template deployment. The following PowerShell script will achieve this outcome:

param($rgName, $dfName)

Try
{
    Write-Host "Attempting to start MyDataFactoryTrigger Data Factory Trigger..."
    Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName 'MyDataFactoryTrigger' -Force -ErrorAction Stop
    Write-Host -ForegroundColor Green "Trigger started successfully!"
}
Catch
{
    throw "An error occurred whilst starting the MyDataFactoryTrigger trigger."
}

Write-Host "Script has finished executing."

The Azure Data Factory V2 offering has no doubt come on in leaps and bounds in a short space of time…

…but you can’t shake the feeling that there is a lot that still needs to be done. The current release, granted, feels very stable and production-ready, but I think there is a whole range of enhancements that could be introduced to allow better feature parity when compared with SSIS DTSX packages. With this in place, and when taking into account the very significant cost differences between both offerings, I think it would make Azure Data Factory V2 a no-brainer option for almost every data integration scenario. The future looks very bright indeed 🙂

The very nature of how businesses or organisations operate means that the sheer volume of sensitive or confidential data that accumulates over time presents a genuine challenge from a management and security point of view. Tools and applications like cloud storage, email and other information storage services can do great things; but on occasions where these are abused, such as when an employee emails out a list of business contacts to a personal email address, the penalties can extend far beyond a loss of reputation. Even more so with the introduction of GDPR earlier this year, there is now a clear and present danger that such actions could result in unwelcome scrutiny and, for larger organisations, a fine for simply not putting the appropriate technical safeguards in place. Being able to proactively – and straightforwardly – identify & classify information types and enforce some level of control over their dissemination, while not a silver bullet in any respect, does at least demonstrate an adherence to the “appropriate technical controls” principle that GDPR in particular likes to focus on.

Azure Information Protection (AIP) seeks to address these challenges in the modern era by providing system administrators with a toolbox to enforce good internal governance and controls over documents, based on a clear classification system. The solution integrates nicely with Azure and also any on-premises environment, meaning that you don’t necessarily have to migrate your existing workloads into Office 365 to take full advantage of the service. It also offers:

  • Users the ability to track any sent document(s), find out where (and when) they have been read and revoke access at any time.
  • Full integration with the current range of Office 365 desktop clients.
  • The capability to protect non-Office documents, such as PDFs, requiring users to open them via a dedicated client, which checks their relevant permissions before granting access.
  • Automation capabilities via PowerShell to bulk label existing document stores, based on parameters such as file names or the contents of the data (for example, mark as highly sensitive any document which contains a National Insurance Number) – see the sketch straight after this list.
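
As a rough sketch of what that automation can look like – assuming the AzureInformationProtection PowerShell module that ships with the AIP client is installed, and using a placeholder file share path and label GUID – all Word documents beneath a share could be bulk labelled as follows:

# Sketch only: the share path and LabelId GUID below are placeholders
Import-Module AzureInformationProtection

# Apply the specified label to every Word document found beneath the share
Get-ChildItem -Path '\\FileServer\Documents' -Filter '*.docx' -Recurse |
    ForEach-Object {
        Set-AIPFileLabel -Path $_.FullName -LabelId 'd9f23ae3-0000-0000-0000-f515f824c57b'
    }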

Overall, I have found AIP to be a sensible and highly intuitive solution, but one that requires careful planning and configuration to realise its benefits fully. Even with this taken for granted, there is no reason why any business/organisation cannot utilise the service successfully.

However, if you are a small to medium size organisation, you may find that the Azure Information Protection offering has traditionally lacked some critical functionality.

You can see what I mean by performing a direct comparison between two flavours of Office 365 deployments – Office Business Premium (“BizPrem”) and Office Professional Plus (“ProPlus”). For those who are unaware of the differences:

  • Office Business Premium is the version of the Office apps that you can download with a…you guessed it…Office Business Premium subscription. This product/SKU represents the optimal choice if you are dealing with a deployment that contains fewer than 250 users and you want to fully utilise the whole range of features included with Office 365.
  • Office Professional Plus is the edition bundled with the following, generally more expensive subscriptions:
    • Office 365 Education A1*
    • Office 365 Enterprise E3
    • Office 365 Government G3
    • Office 365 Enterprise E4
    • Office 365 Education A4*
    • Office 365 Government G4
    • Office 365 Enterprise E5
    • Office 365 Education A5*

    For the extra price, you get a range of additional features that may be useful for large-scale IT deployments. This includes, but is not limited to, Shared Computer Activation, support for virtualised environments, Group Policy support and – as has traditionally been the case – an enhanced experience while using AIP.

* In fact, these subscriptions will generally be the cheapest going on Office 365, but with the very notable caveat being that you have to be a qualifying education institute to purchase them. So no cheating I’m afraid 🙂

The salient point is that both of these Office versions support the AIP Client, the desktop application that provides the following additional button within your Office applications:

The Protect button will appear on the ribbon of Word, Excel and Outlook.

The above example, taken from an Office Business Premium deployment, differs when compared to Office Professional Plus:

Have you spotted it yet?

As mentioned in the introduction, the ability for users to explicitly set permissions on a per-document basis can be incredibly useful, but it is one that has been missing entirely from non-ProPlus subscriptions, such as Office Business Premium. This limitation means that users have lacked the ability to:

  • Specify (and override) the access permissions for the document – Viewer, Reviewer, Co-Author etc.
  • Assign permissions to individual users, domains or distribution/security groups.
  • Define a specified date when all access permissions will expire.

You will still be able to define organisation-level policies that determine how documents can be shared, based on a user-defined label, but you lose the high degree of personal autonomy that the solution can offer users – which, arguably, can be an essential factor in ensuring the success of an AIP deployment.

Well, the good news is that all of this is about to change, thanks to the September 2018 General Availability wave for Azure Information Protection.

This “by design” behaviour has, understandably, been a source of frustration for many, but, thanks to a UserVoice suggestion, is now no longer going to be a concern:

In the coming AIP September GA we will update the Office client requirement with the following:

“Office 365 with Office 2016 apps (minimum version 1805, build 9330.2078) when the user is assigned a license for Azure Rights Management (also known as Azure Information Protection for Office 365)”

This will allow the support of the AIP client to use protection labels in other Office subscriptions which are not ProPlus. This will require the use of Office clients which are newer then the version mentioned above and the end user should be assigned with the proper licence.

The introduction of this change was confirmed by the release of version 1.37.19.0 of the AIP client on Monday this week and is a most welcome addition to the client. Be aware, though, of the requirement to be using the May 2018 build of the Office 2016 apps to take advantage of this new functionality. Once this hurdle is overcome, the change suddenly makes the AIP offering a lot more powerful for existing small business users and a much easier sell for those who are contemplating adopting the product but cannot tolerate the cost burden associated with an Enterprise subscription. Microsoft is continually endeavouring to ensure a consistent “feedback loop” is provided for all users and customers to offer product improvement suggestions, and it is great to see this working in practice with AIP as our example. Now’s as good a time as any to evaluate AIP if you haven’t before.

Is it just me or is British Summer Time (BST), AKA Daylight Saving Time (DST), an utterly pointless endeavour these days? Admittedly, on its introduction in 1916, it fulfilled a sensible objective – to provide more daylight hours during the summer. For agricultural, construction or other services reliant on sufficient light to carry out their work, this was a godsend. In today’s modern world, with the proliferation of electricity and lighting services in almost every corner of the UK, the whole concept now appears to be a curious relic of the western world. No major Asian, African or South American country adopts the practice and, given the increased importance that these continents now play on the global stage, it wouldn’t surprise me if the whole concept is consigned to the scrapheap within our lifetimes.

My fury concerning BST surfaces thanks to my experience working with IT systems and, in particular, Microsoft Azure. Typically, any service that has a Windows OS backend involved will do a pretty good job in determining the correct locale settings that need applying, including BST/DST. These settings will generally inherit into most applications installed on the machine, including SQL Server. You can verify this at any time by running the following query, kindly demonstrated by SQL Server legend Pinal Dave, on your SQL Server instance:
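
The query in question goes along these lines (adapted from his example):

-- Read the configured time zone name from the Windows registry
-- on the server that hosts the SQL Server instance
DECLARE @TimeZone VARCHAR(50);
EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE',
    'SYSTEM\CurrentControlSet\Control\TimeZoneInformation',
    'TimeZoneKeyName', @TimeZone OUTPUT;
SELECT @TimeZone AS TimeZoneKeyName;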

As you can see from the underlying query, it is explicitly checking a Registry Key Value on the Windows Server where SQL Server resides – which has been set up for British (UK) locale settings. The Registry Key folder will, additionally, include information to tell the machine when to factor in BST/DST.

This is all well and good if we are managing dedicated, on-premises instances of SQL Server, where we have full control over our server environments. But what happens with a single Azure SQL database within the UK South region? The above code snippet is not compatible with Azure SQL, so we have no way of finding out ourselves. We must instead turn to the following blog post from Microsoft to clarify things for us:

Currently, the default time zone on Azure SQL DB is UTC. Unfortunately, there is not possible to change by server configuration or database configuration.

We can verify this by running a quick GETDATE() query and comparing it against the current time during BST:
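
A query along these lines does the trick:

SELECT GETDATE() AS CurrentDateTime, @@VERSION AS SQLVersion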

(@@VERSION returns the current edition/version of the SQL instance which, in this case, we can confirm is Azure SQL)

The result of all of this is that all date/time values in Azure SQL will be stored in UTC format, meaning that you will have to manage any conversions yourself between interfacing applications or remote instances. Fortunately, there is a way that you can resolve this issue without ever leaving Azure SQL.

On recent versions of SQL Server (2016 onwards), and on Azure SQL, Microsoft provides a system view – sys.time_zone_info – that returns all of the time zones supported by the instance. Using this view, we can determine the correct time zone to use for BST by listing them all:

SELECT *
FROM sys.time_zone_info
ORDER BY [name]

As highlighted above, for BST, GMT Standard Time is our time zone of choice, and we can see the correct offset. An additional flag field, is_currently_dst, indicates whether the time zone is currently observing daylight saving time.
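
For example, to inspect just that time zone:

SELECT [name], current_utc_offset, is_currently_dst
FROM sys.time_zone_info
WHERE [name] = 'GMT Standard Time'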

With the name value in hand, we have everything we need to start working with a piece of T-SQL syntax that I was overjoyed to discover recently – AT TIME ZONE. When applied to a date field type (datetime, datetime2 etc.), it appends the appropriate time-zone offset to the end of the date value. So, with some tinkering to our earlier GETDATE() query, we get the following output:

SELECT GETDATE() AT TIME ZONE 'GMT Standard Time', @@VERSION

While this is useful, in most cases, we would want any offset to be automatically applied against our Date/Time value. With some further refinement to the query via the DATEADD function, this requirement becomes achievable, and we can also view each value separately to verify everything is working as intended:

SELECT GETDATE() AS CurrentUTCDateTime,
       GETDATE() AT TIME ZONE 'GMT Standard Time' AS CurrentUTCDateTimeWithGMTOffset,
       DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'GMT Standard Time'), GETDATE()) AS CurrentGMTDateTime,
       @@VERSION AS SQLVersion

Even better, the above works regardless of whether the offset is ahead of or behind UTC. We can verify this fact by adjusting the query to instead convert into Haiti Standard Time which, at the time of writing, is currently observing DST and sits 4 hours behind UTC:
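
The adjusted query reads something like this (using the Haiti Standard Time name as it appears in sys.time_zone_info):

SELECT GETDATE() AS CurrentUTCDateTime,
       GETDATE() AT TIME ZONE 'Haiti Standard Time' AS CurrentUTCDateTimeWithHaitiOffset,
       DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'Haiti Standard Time'), GETDATE()) AS CurrentHaitiDateTime,
       @@VERSION AS SQLVersion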

So, as we can see, a few lines of code mean that we can straightforwardly work with data in our desired locale. 🙂

It does seem somewhat counter-intuitive that a service such as Azure SQL, hosted within a specific location, does not work in the correct date/time format for that region. But when you consider the global scale of the Azure network and the challenges this brings to the fore, the decision to standardise on a single time zone for all systems does make sense and provides a level and consistent playing field for developers. One comment I would make is that this particular quirk does not appear to be well signposted for those who are just getting started with the service, an omission that could cause severe business or technical problems in the long term if not correctly detected. What is ultimately most fortuitous is the simple fact that no overly complex coding or client application changes are required to fix this quirk. Which is how all IT matters should be – easy and straightforward to resolve.

When it comes to handling large datasets in a formal, structured and highly scalable manner, nothing beats SQL Server. Having worked with the product for almost six years, I always look forward to the opportunity of putting together some SQL queries or building out a new database. I think of it as a nice little treat, a reward amidst the other, sometimes tedious, tasks that I have to deal with on a weekly basis. I rank knowledge of SQL Server pretty highly if you are aiming to become a serious Dynamics CRM/365 for Customer Engagement professional, and I credit my experience with SQL as one of the things that helped to streamline my journey into Dynamics.

It may be, however, that others are not as keen at the prospect of working with SQL databases, particularly when they are unable to accommodate some of the alternative data storage formats that are commonplace with application developers. A good example of this is JSON (JavaScript Object Notation), a format that is used widely today as a portable and “easy to read” mechanism for transferring data. For someone who is more used to working with SQL, getting your head around JSON can be a bit of a challenge at first and – traditionally – was not something that Microsoft factored into the design of SQL Server. A lot has changed with Microsoft – to the extent that services such as Microsoft Flow and Azure now use JSON extensively – and with SQL Server too, as a whole host of related functions were added in SQL Server 2016 to provide straightforward conversions into JSON. The FOR JSON clause is the primary gateway into this world and is a piece of functionality which I have slowly, but surely, started to get my head around. What I wanted to do in this week’s blog post was provide a “crash course” on how to use this nifty piece of functionality, hopefully with the aim of giving you everything you need to start utilising it in your environment(s).

Before we begin…

To best illustrate how the clause works in practice, it is necessary to create an appropriate set of inter-related tables within SQL Server that will be used for all of the examples that follow. Here’s one I (rather unimaginatively) made earlier:

The code snippets to create them can be found below:

CREATE TABLE dbo.[ParentTable]
(
	ParentTableID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

GO

CREATE TABLE dbo.[ChildTable1]
(
	ChildTable1ID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	ParentTableID INT FOREIGN KEY REFERENCES dbo.[ParentTable](ParentTableID) NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

CREATE TABLE dbo.[ChildTable2]
(
	ChildTable2ID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	ParentTableID INT FOREIGN KEY REFERENCES dbo.[ParentTable](ParentTableID) NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

GO

CREATE TABLE dbo.[GrandchildTable]
(
	GrandchildTableID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	ChildTable2ID INT FOREIGN KEY REFERENCES dbo.[ChildTable2](ChildTable2ID) NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

GO

The table structures are incredibly basic but note, in particular, the FOREIGN KEY relationships from the two child tables to the parent, and also the additional parent/child relationship between the GrandchildTable and ChildTable2. You will also need to insert some test data into the tables to follow the rest of this post through properly.
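
A minimal script along these lines will suffice – the values below are assumed, chosen simply to line up with the sample output shown later, and they rely on the IDENTITY values starting at 1 in a fresh database:

INSERT INTO dbo.ParentTable (Field1, Field2, Field3)
VALUES (1, 'Test', 'This is a test record');

INSERT INTO dbo.ChildTable1 (ParentTableID, Field1, Field2, Field3)
VALUES (1, 1, 'Test', 'This is a test record');

INSERT INTO dbo.ChildTable2 (ParentTableID, Field1, Field2, Field3)
VALUES (1, 0, 'Test', 'This is a test record');

INSERT INTO dbo.GrandchildTable (ChildTable2ID, Field1, Field2, Field3)
VALUES (1, 0, 'Test', 'This is a test record');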

With our environment prepped, let’s take a look at the different ways we can convert our dataset into JSON format, with minimal effort involved.

Example 1: FOR JSON AUTO

If we were to look at doing a straightforward SELECT * query on all our tables, our query and expected output might look something like this:

SELECT *
FROM dbo.ParentTable AS PT
 INNER JOIN dbo.ChildTable1 AS CT1
  ON PT.ParentTableID = CT1.ParentTableID
 INNER JOIN dbo.ChildTable2 AS CT2
  ON PT.ParentTableID = CT2.ParentTableID
 INNER JOIN dbo.GrandchildTable AS GT
  ON CT2.ChildTable2ID = GT.ChildTable2ID

Our main issue with this query is that, because of how joins work in T-SQL, the ParentTable field values are repeated for every child and grandchild record returned. For a client application, this can be somewhat cumbersome to handle. FOR JSON AUTO can be straightforwardly added to the end of the above query to convert the query output accordingly:

SELECT *
FROM dbo.ParentTable AS PT
 INNER JOIN dbo.ChildTable1 AS CT1
  ON PT.ParentTableID = CT1.ParentTableID
 INNER JOIN dbo.ChildTable2 AS CT2
  ON PT.ParentTableID = CT2.ParentTableID
 INNER JOIN dbo.GrandchildTable AS GT
  ON CT2.ChildTable2ID = GT.ChildTable2ID
FOR JSON AUTO

//Example output of the first 25 lines below:

[{
    "ParentTableID": 1,
    "Field1": true,
    "Field2": "Test      ",
    "Field3": "This is a test record",
    "CT1": [{
        "ChildTable1ID": 1,
        "ParentTableID": 1,
        "Field1": true,
        "Field2": "Test      ",
        "Field3": "This is a test record",
        "CT2": [{
            "ChildTable2ID": 1,
            "ParentTableID": 1,
            "Field1": false,
            "Field2": "Test      ",
            "Field3": "This is a test record",
            "GT": [{
                "GrandchildTableID": 1,
                "ChildTable2ID": 1,
                "Field1": false,
                "Field2": "Test      ",
                "Field3": "This is a test record"
            }]
       ...

This output provides a much more sensible structure, with no record duplication and proper nesting of child records.

Example 2: FOR JSON PATH, ROOT

With some modifications to the above query, it is also possible to specify names for each root element. This can be tailored depending on your specific requirements. For example, let’s say we had to provide the following root element names for each of the example tables:

  • dbo.ParentTable -> Parent
  • dbo.ChildTable1 -> FirstChildTable
  • dbo.ChildTable2 -> SecondChildTable

The following query would achieve these requirements, in addition to adding a master root element name of MyTestSQLJSONObject:

SELECT PT.ParentTableID AS [Parent.ParentTableID], PT.Field1 AS [Parent.Field1], PT.Field2 AS [Parent.Field2], PT.Field3 AS [Parent.Field3],
       CT1.ChildTable1ID AS [FirstChildTable.ChildTable1ID], CT1.Field1 AS [FirstChildTable.Field1], CT1.Field2 AS [FirstChildTable.Field2], CT1.Field3 AS [FirstChildTable.Field3],
       CT2.ChildTable2ID AS [SecondChildTable.ChildTable2ID], CT2.Field1 AS [SecondChildTable.Field1], CT2.Field2 AS [SecondChildTable.Field2], CT2.Field3 AS [SecondChildTable.Field3],
       GT.GrandchildTableID AS [GrandchildTable.GrandchildTableID], GT.Field1 AS [GrandchildTable.Field1], GT.Field2 AS [GrandchildTable.Field2], GT.Field3 AS [GrandchildTable.Field3]
FROM dbo.ParentTable AS PT
 INNER JOIN dbo.ChildTable1 AS CT1
  ON PT.ParentTableID = CT1.ParentTableID
 INNER JOIN dbo.ChildTable2 AS CT2
  ON PT.ParentTableID = CT2.ParentTableID
 INNER JOIN dbo.GrandchildTable AS GT
  ON CT2.ChildTable2ID = GT.ChildTable2ID
FOR JSON PATH, ROOT('MyTestSQLJSONObject')

//Example of first 25 lines below:

{
    "MyTestSQLJSONObject": [{
        "Parent": {
            "ParentTableID": 1,
            "Field1": true,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        },
        "FirstChildTable": {
            "ChildTable1ID": 1,
            "Field1": true,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        },
        "SecondChildTable": {
            "ChildTable1ID": 1,
            "Field1": false,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        },
        "GrandchildTable": {
            "GrandchildTableID": 1,
            "Field1": false,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        ...

Example 3: NULL Field Values

One thing worth bearing in mind when working with the FOR JSON clause is how NULL field values behave. Take a look at the following example, where Field2 on our first dbo.ParentTable record has been set to NULL:

When attempting to query this single record using the FOR JSON AUTO clause, we get the following output:

//Example output below. Notice that no field name/value is returned for Field2 now

[{
    "ParentTableID": 1,
    "Field1": true,
    "Field3": "This is a test record"
}]

If you have a requirement to always return every field, even when its value is NULL, then you can use the INCLUDE_NULL_VALUES option to get around this:

SELECT *
FROM dbo.ParentTable AS PT
WHERE PT.ParentTableID = 1
FOR JSON AUTO, INCLUDE_NULL_VALUES
//Notice now that Field2 returns as expected

[{
    "ParentTableID": 1,
    "Field1": true,
    "Field2": null,
    "Field3": "This is a test record"
}]

Even with this option specified, there may still be issues with outputting the field with a literal null value. In these scenarios, on the T-SQL side, you would generally use the ISNULL function to replace NULL values with an empty string. Further, because the field type in this example is a CHAR(10) data type, there are ten characters of whitespace that need removing from the output string. The following query will fix both of these problems:

SELECT ParentTableID, Field1, LTRIM(ISNULL(Field2, '')) AS Field2, Field3
FROM dbo.ParentTable AS PT
WHERE PT.ParentTableID = 1
FOR JSON AUTO

Example 4: Using sub-queries to return child records as JSON objects

In most cases involving parent and child records, returning all of the data as JSON may not be required. Instead, it may be necessary to return the fields from the parent record only, with all child records included as a single JSON object field on the parent record. Using subqueries, we can accommodate this scenario as follows:

SELECT PT.ParentTableID, PT.Field1, PT.Field2, PT.Field3,
(
	SELECT *
	FROM dbo.ChildTable1 AS CT1
	WHERE CT1.ParentTableID = PT.ParentTableID
	FOR JSON AUTO
) AS ChildTable1,
(
	SELECT *
	FROM dbo.ChildTable2 AS CT2
	 INNER JOIN dbo.GrandchildTable AS GT
	  ON CT2.ChildTable2ID = GT.ChildTable2ID
	WHERE CT2.ParentTableID = PT.ParentTableID
	FOR JSON AUTO
) AS ChildTable2
FROM dbo.ParentTable AS PT

Example 5: Storing FOR JSON Query Output in Parameters

In most scenarios, you will generally provide functions or Stored Procedures for developers to interface with when connecting to the database. It is in this situation that the ability to store the output of any query – including those that use the FOR JSON clause – within a parameter becomes very useful. The following snippet stores the output of a FOR JSON query in a parameter called @JSON, which is then retrievable at any time via a SELECT query:

DECLARE @JSON NVARCHAR(MAX) = (SELECT * FROM dbo.ChildTable2 AS CT2 INNER JOIN dbo.GrandchildTable AS GT ON CT2.ChildTable2ID = GT.ChildTable2ID FOR JSON AUTO)

SELECT @JSON

Wrapping Up: A Few Things to Bear in Mind

  • The FOR JSON clause is not compatible with Common Table Expressions (CTEs).
  • When storing the output of a FOR JSON query in a parameter, you have no native capability to query the inner contents of the JSON object via a SELECT…WHERE query.
  • Because there is no way of specifying the name of the single column that returns as part of a FOR JSON query, you cannot create a view that uses this clause.

I hope that this post has given you all the information and guidance needed to start using the FOR JSON clause in your queries. The ability to straightforwardly convert SQL tabular data into a format that most developers would readily scream for is a convenient feature to have at our disposal and is indicative of how far the SQL Server product has moved forward in recent years. Hopefully, this feature will save you some hassle in the future and will mean that you can make friends with your developer colleagues 🙂