Is it just me or is British Summer Time (BST), AKA Daylight Saving Time (DST), an utterly pointless endeavour these days? Admittedly, on its introduction in 1916, it fulfilled a sensible objective – to provide more daylight hours during the summer. For agricultural, construction or other services that are reliant on sufficient light to carry out their work, this was a godsend. In today’s modern world, with the proliferation of electricity and lighting services in almost every corner of the UK, the whole concept now appears to be a curious relic of the western world. No major Asian, African or South American country adopts the practice and, given the increased importance that these continents now play on the global stage, it wouldn’t surprise me if the whole concept became consigned to the scrapheap within our lifetimes.

My fury concerning BST surfaces thanks to my experience working with IT systems and, in particular, Microsoft Azure. Typically, any service that has a Windows OS backend involved will do a pretty good job of determining the correct locale settings that need applying, including BST/DST. These settings will generally be inherited by most applications installed on the machine, including SQL Server. You can verify this at any time by running the following query, kindly demonstrated by SQL Server legend Pinal Dave, on your SQL Server instance:
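
The query follows the pattern below – a rough sketch, which assumes the standard TimeZoneInformation registry location on a Windows Server build:

DECLARE @TimeZone NVARCHAR(255)
--Read the time zone name that Windows has been configured with
EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE',
	'SYSTEM\CurrentControlSet\Control\TimeZoneInformation',
	'TimeZoneKeyName', @TimeZone OUTPUT
SELECT @TimeZone AS ServerTimeZone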

As you can see from the underlying query, it is explicitly checking a Registry Key Value on the Windows Server where SQL Server resides – which has been set up for British (UK) locale settings. The Registry Key folder will, additionally, include information to tell the machine when to factor in BST/DST.

This is all well and good if we are managing dedicated, on-premises instances of SQL Server, where we have full control over our server environments. But what happens on a single Azure SQL database within the UK South region? The above code snippet is not compatible with Azure SQL, so we have no way of finding out for ourselves. We must turn to the following blog post from Microsoft to clarify things for us:

Currently, the default time zone on Azure SQL DB is UTC. Unfortunately, there is not possible to change by server configuration or database configuration.

We can verify this by running a quick GETDATE() query and comparing it against the current time during BST:
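
Something along these lines is all that is needed (the column aliases are my own):

SELECT GETDATE() AS CurrentDateTime,
	   @@VERSION AS SQLVersion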

(@@VERSION returns the current edition/version of the SQL instance which, in this case, we can confirm is Azure SQL)

The result of all of this is that all date/time values in Azure SQL will be stored in UTC format, meaning that you will have to manage any conversions yourself between interfacing applications or remote instances. Fortunately, there is a way that you can resolve this issue without ever leaving Azure SQL.

From SQL Server 2016 onwards (and on Azure SQL), Microsoft provides a system view – sys.time_zone_info – that returns all time zones supported by the instance. Using the query below to list every supported time zone, we can pick out the correct entry to use for BST:

SELECT *
FROM sys.time_zone_info
ORDER BY [name]

As highlighted above, for BST, GMT Standard Time is our time zone of choice, and the current_utc_offset column shows the correct offset. An additional flag field, is_currently_dst, indicates whether daylight saving is currently in effect for that time zone.
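
If you prefer not to scroll through the full list, a filtered version of the query (a quick sketch) returns just the row of interest:

SELECT [name], current_utc_offset, is_currently_dst
FROM sys.time_zone_info
WHERE [name] = 'GMT Standard Time'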

With the name value in hand, we have everything we need to start working with a piece of T-SQL syntax that I was overjoyed to discover recently – AT TIME ZONE. When included as part of selecting a date/time field (datetime, datetime2 etc.), it appends the time zone’s offset value to the end of the date value. So, with some tinkering to our earlier GETDATE() query, we get the following output:

SELECT GETDATE() AT TIME ZONE 'GMT Standard Time', @@VERSION

While this is useful, in most cases, we would want any offset to be automatically applied against our Date/Time value. With some further refinement to the query via the DATEADD function, this requirement becomes achievable, and we can also view each value separately to verify everything is working as intended:

SELECT GETDATE() AS CurrentUTCDateTime,
	   GETDATE() AT TIME ZONE 'GMT Standard Time' AS CurrentUTCDateTimeWithGMTOffset,
	   DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'GMT Standard Time'), GETDATE()) AS CurrentGMTDateTime,
	   @@VERSION AS SQLVersion
	   
	   

Even better, the above works regardless of whether the offset is an increase or decrease to UTC. We can verify this fact by adjusting the query to instead convert into Haiti Standard Time which, at the time of writing, is currently observing DST and has a UTC offset of -4 hours:
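
The only change needed is the time zone name – a sketch of the amended query is below:

SELECT GETDATE() AS CurrentUTCDateTime,
	   GETDATE() AT TIME ZONE 'Haiti Standard Time' AS CurrentUTCDateTimeWithHaitiOffset,
	   DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'Haiti Standard Time'), GETDATE()) AS CurrentHaitiDateTime,
	   @@VERSION AS SQLVersion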

So, as we can see, a few lines of code mean that we can straightforwardly work with data in our desired locale. 🙂

It does seem somewhat counter-intuitive that a service, such as Azure SQL, hosted within a specific location, does not work in the correct date/time formats for that region. When you consider the global scale of the Azure network and the challenges this brings to the fore, the decision to revert to a single time zone for all systems does make sense and provides a level & consistent playing field for developers. One comment I would have is that this particular quirk does not appear to be well signposted for those who are just getting started with the service, an omission that could cause severe business or technical problems in the long term if not correctly detected. What is ultimately most fortuitous is the simple fact that no overly complex coding or client application changes are required to fix this quirk. Which is how all IT matters should be – easy and straightforward to resolve.

When it comes to handling large datasets in a formal, structured and highly scalable manner, nothing beats SQL Server. Having worked with the product for almost six years, I always look forward to the opportunity to put together some SQL queries or to build out a new database. I think of it as a nice little treat, a reward amidst the other, sometimes tedious, tasks that I have to deal with on a weekly basis. I rank knowledge of SQL Server pretty highly if you are aiming to become a serious Dynamics CRM/365 for Customer Engagement professional, and I credit my experience with SQL as one of the things that helped to streamline my journey into Dynamics.

It may be, however, that others are not as keen on the prospect of working with SQL databases, particularly when they are unable to accommodate some of the alternative data storage mechanisms that are commonplace with application developers. A good example of this is JSON (JavaScript Object Notation), a format that is used widely today as a portable and “easy to read” mechanism of transferring data. For someone who is more used to working with SQL, getting your head around JSON can be a bit of a challenge at first and – traditionally – was not something that Microsoft factored into the design of SQL Server. A lot has changed with Microsoft – to the extent that services such as Microsoft Flow and Azure now use JSON extensively – and with SQL Server, as a whole host of related functions were added to SQL Server 2016 to provide straightforward conversions into JSON. The FOR JSON clause is the primary gateway into this world and is a piece of functionality which I have slowly, but surely, started to get my head around. What I wanted to do in this week’s blog post was provide a “crash course” on how to use this nifty piece of functionality, hopefully with the aim of giving you everything you need to start utilising it in your environment(s).

Before we begin…

To best illustrate how the clause works in practice, it is necessary to create an appropriate set of inter-related tables within SQL Server, which will be used for all the examples that follow. Here’s one I (rather unimaginatively) made earlier:

The code snippets to create them can be found below:

CREATE TABLE dbo.[ParentTable]
(
	ParentTableID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

GO

CREATE TABLE dbo.[ChildTable1]
(
	ChildTable1ID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	ParentTableID INT FOREIGN KEY REFERENCES dbo.[ParentTable](ParentTableID) NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

CREATE TABLE dbo.[ChildTable2]
(
	ChildTable2ID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	ParentTableID INT FOREIGN KEY REFERENCES dbo.[ParentTable](ParentTableID) NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

GO

CREATE TABLE dbo.[GrandchildTable]
(
	GrandchildTableID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	ChildTable2ID INT FOREIGN KEY REFERENCES dbo.[ChildTable2](ChildTable2ID) NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

GO

The table structures are incredibly basic but note, in particular, the FOREIGN KEY relationships from the 2 Child Tables to the Parent and also the additional parent/child relationship between the GrandchildTable and ChildTable2. You will also need to insert some test data into the tables to properly follow through the rest of this post; a rough example of this is sketched below.
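
The exact values do not matter, but something along the following lines (the values are assumptions based on the example output shown later) will do the job; repeat the inserts with different values if you want more than one record in each table:

INSERT INTO dbo.[ParentTable] (Field1, Field2, Field3)
VALUES (1, 'Test', 'This is a test record')

INSERT INTO dbo.[ChildTable1] (ParentTableID, Field1, Field2, Field3)
VALUES (1, 1, 'Test', 'This is a test record')

INSERT INTO dbo.[ChildTable2] (ParentTableID, Field1, Field2, Field3)
VALUES (1, 0, 'Test', 'This is a test record')

INSERT INTO dbo.[GrandchildTable] (ChildTable2ID, Field1, Field2, Field3)
VALUES (1, 0, 'Test', 'This is a test record')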

With our environment prepped, let’s take a look at the different ways we can convert our dataset into JSON format, with minimal effort involved.

Example 1: FOR JSON AUTO

If we were to look at doing a straightforward SELECT * query on all our tables, our query and expected output might look something like this:

SELECT *
FROM dbo.ParentTable AS PT
 INNER JOIN dbo.ChildTable1 AS CT1
  ON PT.ParentTableID = CT1.ParentTableID
 INNER JOIN dbo.ChildTable2 AS CT2
  ON PT.ParentTableID = CT2.ParentTableID
 INNER JOIN dbo.GrandchildTable AS GT
  ON CT2.ChildTable2ID = GT.ChildTable2ID

Our main issue with this query is that, because of how joins work in T-SQL, each ParentTable record is returned again for every related child and grandchild record. For a client application, this can be somewhat cumbersome to handle. FOR JSON AUTO can be straightforwardly added to the end of the above query to convert the query output accordingly:

SELECT *
FROM dbo.ParentTable AS PT
 INNER JOIN dbo.ChildTable1 AS CT1
  ON PT.ParentTableID = CT1.ParentTableID
 INNER JOIN dbo.ChildTable2 AS CT2
  ON PT.ParentTableID = CT2.ParentTableID
 INNER JOIN dbo.GrandchildTable AS GT
  ON CT2.ChildTable2ID = GT.ChildTable2ID
FOR JSON AUTO

//Example output of the first 25 lines below:

[{
    "ParentTableID": 1,
    "Field1": true,
    "Field2": "Test      ",
    "Field3": "This is a test record",
    "CT1": [{
        "ChildTable1ID": 1,
        "ParentTableID": 1,
        "Field1": true,
        "Field2": "Test      ",
        "Field3": "This is a test record",
        "CT2": [{
            "ChildTable2ID": 1,
            "ParentTableID": 1,
            "Field1": false,
            "Field2": "Test      ",
            "Field3": "This is a test record",
            "GT": [{
                "GrandchildTableID": 1,
                "ChildTable2ID": 1,
                "Field1": false,
                "Field2": "Test      ",
                "Field3": "This is a test record"
            }]
       ...

This output provides a much more sensible structure, with no record duplication and proper nesting of child records.

Example 2: FOR JSON PATH, ROOT

With some modifications to the above query, it is also possible to specify names for each table’s element in the output. These can be tailored depending on your specific requirements. For example, let’s say we had to provide the following element names for each of the example tables:

  • dbo.ParentTable -> Parent
  • dbo.ChildTable1 -> FirstChildTable
  • dbo.ChildTable2 -> SecondChildTable

The following query would achieve these requirements, in addition to adding a master root element name of MyTestSQLJSONObject:

SELECT PT.ParentTableID AS [Parent.ParentTableID], PT.Field1 AS [Parent.Field1], PT.Field2 AS [Parent.Field2], PT.Field3 AS [Parent.Field3],
	   CT1.ChildTable1ID AS [FirstChildTable.ChildTable1ID], CT1.Field1 AS [FirstChildTable.Field1], CT1.Field2 AS [FirstChildTable.Field2], CT1.Field3 AS [FirstChildTable.Field3],
	   CT2.ChildTable2ID AS [SecondChildTable.ChildTable2ID], CT2.Field1 AS [SecondChildTable.Field1], CT2.Field2 AS [SecondChildTable.Field2], CT2.Field3 AS [SecondChildTable.Field3],
	   GT.GrandchildTableID AS [GrandchildTable.GrandchildTableID], GT.Field1 AS [GrandchildTable.Field1], GT.Field2 AS [GrandchildTable.Field2], GT.Field3 AS [GrandchildTable.Field3]
FROM dbo.ParentTable AS PT
 INNER JOIN dbo.ChildTable1 AS CT1
  ON PT.ParentTableID = CT1.ParentTableID
 INNER JOIN dbo.ChildTable2 AS CT2
  ON PT.ParentTableID = CT2.ParentTableID
 INNER JOIN dbo.GrandchildTable AS GT
  ON CT2.ChildTable2ID = GT.ChildTable2ID
FOR JSON PATH, ROOT('MyTestSQLJSONObject')

//Example of first 25 lines below:

{
    "MyTestSQLJSONObject": [{
        "Parent": {
            "ParentTableID": 1,
            "Field1": true,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        },
        "FirstChildTable": {
            "ChildTable1ID": 1,
            "Field1": true,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        },
        "SecondChildTable": {
            "ChildTable2ID": 1,
            "Field1": false,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        },
        "GrandchildTable": {
            "GrandchildTableID": 1,
            "Field1": false,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        ...

Example 3: NULL Field Values

One thing worth bearing in mind when working with the FOR JSON clause is how NULL field values behave. Take a look at the following example query output from dbo.ParentTable:
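
For illustration purposes, assume a record where Field2 has been left as NULL – a hypothetical snippet along these lines would recreate that state and show the output in question:

UPDATE dbo.ParentTable
SET Field2 = NULL
WHERE ParentTableID = 1

SELECT *
FROM dbo.ParentTable
WHERE ParentTableID = 1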

When attempting to query this single record using the FOR JSON AUTO clause, we get the following output:

//Example output below. Notice that no field name/value is returned for Field2 now

[{
    "ParentTableID": 1,
    "Field1": true,
    "Field3": "This is a test record"
}]

If you have a requirement always to return a value for every NULL field, then you can use the INCLUDE_NULL_VALUES option to get around this:

SELECT *
FROM dbo.ParentTable AS PT
WHERE PT.ParentTableID = 1
FOR JSON AUTO, INCLUDE_NULL_VALUES
//Notice now that Field2 returns as expected

[{
    "ParentTableID": 1,
    "Field1": true,
    "Field2": null,
    "Field3": "This is a test record"
}]

Even with this option specified, a consuming application may still have issues with a field that is output with a literal null value. In these scenarios, on the T-SQL side, you would generally use the ISNULL function to replace NULL values with an empty string. Further, because the field in this example is a CHAR(10) data type, the empty string is padded out to ten characters of whitespace, which also needs trimming from the output string. The following query fixes both of these problems:

SELECT ParentTableID, Field1, LTRIM(ISNULL(Field2, '')) AS Field2, Field3
FROM dbo.ParentTable AS PT
WHERE PT.ParentTableID = 1
FOR JSON AUTO

Example 4: Using sub-queries to return child records as JSON objects

In most cases involving parent and child records, returning all the data as JSON may not be required. Instead, it may be necessary to return the fields from the parent record only, and all child records as a single JSON object field on the parent record. Using subqueries, we can accommodate this scenario as follows:

SELECT PT.ParentTableID, PT.Field1, PT.Field2, PT.Field3,
(
	SELECT *
	FROM dbo.ChildTable1 AS CT1
	WHERE CT1.ParentTableID = PT.ParentTableID
	FOR JSON AUTO
) AS ChildTable1,
(
	SELECT *
	FROM dbo.ChildTable2 AS CT2
	 INNER JOIN dbo.GrandchildTable AS GT
	  ON CT2.ChildTable2ID = GT.ChildTable2ID
	WHERE CT2.ParentTableID = PT.ParentTableID
	FOR JSON AUTO
) AS ChildTable2
FROM dbo.ParentTable AS PT

Example 5: Storing FOR JSON Query Output in Parameters

In most scenarios, you will generally provide functions or Stored Procedures for developers to interface with when connecting to the database. It is in this situation where the ability to store the output of any query – including those that use the FOR JSON clause – within a parameter becomes very useful. The following snippet will store the output of a FOR JSON query into a parameter called @JSON, which is then retrievable at any time via a SELECT query:

DECLARE @JSON NVARCHAR(MAX) = (SELECT * FROM dbo.ChildTable2 AS CT2 INNER JOIN dbo.GrandchildTable AS GT ON CT2.ChildTable2ID = GT.ChildTable2ID FOR JSON AUTO)

SELECT @JSON

Wrapping Up: A Few Things to Bear in Mind

  • The FOR JSON clause is not compatible with Common Table Expressions (CTEs).
  • When storing the output of a FOR JSON query in a parameter, you have no native capability to query the inner contents of the JSON object via a standard SELECT…WHERE query; you would need to parse the parameter first, for example via OPENJSON.
  • Because there is no way of specifying the name of the single column that returns as part of a FOR JSON query, you cannot create a view that uses this clause.

I hope that this post has given you all the information and guidance needed to start working with FOR JSON clauses in your queries. The ability to straightforwardly convert SQL tabular data into a format that most developers would readily scream for is a convenient feature to have at our disposal and is indicative of how far the SQL Server product has moved forward in recent years. Hopefully, this feature will save you some hassle in the future and will mean that you can make friends with your developer colleagues 🙂

When considering whether or not to shift your existing SQL workloads to a single database offering on Azure SQL, one of the major pros is the breadth of capabilities the service can offer when compared with other vendors or with SQL Server running on an Azure Virtual Machine. A list of these may include:

  • High feature parity with the latest on-premises SQL Server offering.
  • Built-in support for Enterprise product features, such as Transparent Data Encryption.
  • Security management features, such as firewalls and (optional) integration with Azure SQL Database Threat Detection for proactive monitoring.
  • Ability to quickly scale a database from a 2GB database with low CPU consumption to a mammoth 4TB database, with a significant pool of CPU/memory resources to match.

It is the last one of these that makes Azure SQL database a particularly good fit for web application deployments that have unpredictable user loads at the time of deployment or, as we have seen previously on the blog, when you want to deploy a LOB reporting database that houses Dynamics 365 Customer Engagement instance data. Administrators can very straightforwardly scale or downscale a database at any time within the portal or, if you are feeling particularly clever, you can look to implement automatic scaling based on Database Throughput Unit (DTU) consumption. This can help to keep your query execution times as speedy as possible.

Database scaling, I have found, is very straightforward to get your head around and works like a charm for the most part…except, of course, when you get rather cryptic error messages like the one demonstrated below:

I got this error recently when attempting to scale an S0 5GB database down to Basic 2GB tier. To cut a long story short, I had temporarily scaled up the database to give me increased DTU capacity for a particularly intensive query, and wanted to scale it back to its original pricing tier. You can perhaps understand my confusion about why this error was occurring. After further research and escalation to Microsoft, it turns out that the database was still consuming unused disk space on the platform, thereby violating any size limits imposed by moving to a lower price tier. To resolve the issue, there are some tasks that need to be performed on the database to get it into a “downscale-ready state”. These consist of a series of T-SQL scripts, which I would caution against using if the database is currently in use, due to potential performance impacts. If you have found yourself in the same boat as me and are happy to proceed, the steps involved are as follows:

  1. To begin with, the script below will execute the DBCC SHRINKFILE command against the database file, shrinking it towards the target size specified in the @DesiredFileSize parameter. The script is written to perform the shrinking in “chunks” based on the value of the @ShrinkChunkSize parameter, which may be useful in managing DTU consumption:
SET NOCOUNT ON

DECLARE @CurrentFileSize INT, @DesiredFileSize INT, @ShrinkChunkSize INT, @ActualSizeMB INT,
		@ErrorIndication INT, @dbFileID INT = 1, @LastSize INT, @SqlCMD NVARCHAR(MAX),
		@msg NVARCHAR(100)

/*Set these values for the current operation, size is in MB*/
SET @DesiredFileSize = 2000  /* filesize is in MB */
SET @ShrinkChunkSize = 50 /* chunk size is in MB */

SELECT @CurrentFileSize = size/128 FROM sysfiles WHERE fileid = @dbFileID

SELECT @ActualSizeMB = (SUM(total_pages) / 128) FROM sys.allocation_units

SET @msg = 'Current File Size: ' + CAST(@CurrentFileSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET  @msg = 'Actual used Size: ' + CAST(@ActualSizeMB AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @msg = 'Desired File Size: ' + CAST(@DesiredFileSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @msg = 'Iteration shrink size: ' + CAST(@ShrinkChunkSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @ErrorIndication = CASE
							WHEN @DesiredFileSize > @CurrentFileSize THEN 1
							WHEN @ActualSizeMB > @DesiredFileSize THEN 2
							ELSE 0 END

IF @ErrorIndication = 1  
	RAISERROR('[Error] Desired size bigger than current size',0,0) WITH NOWAIT
IF @ErrorIndication = 2  
	RAISERROR('[Error] Actual size is bigger than desired size',0,0) WITH NOWAIT
IF @ErrorIndication = 0 
	RAISERROR('Desired Size check - OK',0,0) WITH NOWAIT

SET @LastSize = @CurrentFileSize + 1

WHILE @CurrentFileSize > @DesiredFileSize /*check if we got the desired size*/ AND @LastSize>@CurrentFileSize /* check if there is progress*/ AND @ErrorIndication=0
BEGIN
	SET @msg = CAST(GETDATE() AS VARCHAR(100)) + ' - Iteration starting'
	RAISERROR(@msg,0,0) WITH NOWAIT
	SELECT @LastSize = size/128 FROM sysfiles WHERE fileid = @dbFileID
	SET @sqlCMD = 'DBCC SHRINKFILE('+ CAST(@dbFileID AS VARCHAR(7)) + ',' + CAST(@CurrentFileSize-@ShrinkChunkSize AS VARCHAR(7)) + ') WITH NO_INFOMSGS;'
	EXEC (@sqlCMD)
	SELECT @CurrentFileSize = size/128 FROM sysfiles WHERE fileid  =@dbFileID
	SET @msg = CAST(getdate() AS VARCHAR(100)) + ' - Iteration completed. current size is: ' + CAST(@CurrentFileSize AS VARCHAR(10))
	RAISERROR(@msg,0,0) WITH NOWAIT
END
PRINT 'Done' 
  2. With the database successfully shrunk, verify that the size of the database does not exceed your target @DesiredFileSize value by running the following queries:
SELECT * FROM sys.database_files
 
SELECT (SUM(reserved_page_count) * 8192) / 1024 / 1024 AS DbSizeInMB
FROM    sys.dm_db_partition_stats
  3. Although, by this stage, the database file size should be underneath 2GB, the maximum size of the database is still set to match the pricing tier level. To fix this, execute the following script, substituting the name of your database where appropriate:
ALTER DATABASE MyDatabase MODIFY (MAXSIZE=2GB) 

 You can confirm that this command has been executed successfully by then running the following query and reviewing the output:

SELECT CAST(DATABASEPROPERTYEX ('MyDatabase', 'MaxSizeInBytes') AS FLOAT)/1024.00/1024.00/1024.00 AS 'DB Size in GB'
  4. With the above commands executed, you are now in a position to scale down your database without issue. There are a few ways this can be done but, as you likely already have SQL Server Management Studio or similar open to run the above queries, you can modify the tier of your database via this handy script:
--Scaling down to Basic is easy, as there is only one Max Size/Service Level Objective
--Therefore, just specify Edition

ALTER DATABASE MyDatabase MODIFY (EDITION = 'Basic');

--For other tiers, specify the size of the DB.
--In this example, we are scaling down from Premium P1 1TB to Standard S2 250GB tier

ALTER DATABASE MyDatabase MODIFY (EDITION = 'Standard', MAXSIZE = 250 GB, SERVICE_OBJECTIVE = 'S2');

Although the script will likely execute immediately and indicate as such in any output, the actual scaling operation on the backend Azure platform can take some time to complete – usually about 5-10 minutes for lower sized databases.
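
If you want to keep an eye on progress, one option (a sketch, and it assumes you are connected to the master database of your Azure SQL logical server) is to query the sys.dm_operation_status management view, which tracks operations such as service tier changes:

--Run against the master database of the logical server
SELECT *
FROM sys.dm_operation_status
ORDER BY start_time DESC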

Whilst I was relieved that a workaround was available to get the database scaled down correctly, it would have been useful if the above error message was signposted better or if there was some kind of online support article that detailed that this could be a potential issue when moving a database between various pricing/sizing tiers. Hopefully, by sharing the above steps, others who are in the same boat can very quickly diagnose and resolve the issue without hammering their credit card with increased database usage charges in the process. 🙂

Microsoft Flow is a tool that I increasingly have to bring front and centre when considering how to straightforwardly accommodate certain business requirements. The problem I have had with it, at times, is that there are often some notable caveats when attempting to achieve something that looks relatively simple from the outset. A good example of this is the SQL Server connector which, on a headline reading, enables you to trigger workflows when rows are added or changed within a database. Triggering an email from a database record update, creating a document on OneDrive or even posting a Tweet based on a modified database record are all things that instantly have a high degree of applicability for any number of different scenarios. When you read the fine print behind this, however, there are a few things which you have to bear in mind:

Limitations

The triggers do have the following limitations:

  • It does not work for on-premises SQL Server
  • Table must have an IDENTITY column for the new row trigger
  • Table must have a ROWVERSION (a.k.a. TIMESTAMP) column for the modified row trigger

A slightly frustrating side to this is that Microsoft Flow doesn’t intuitively tell you when your table is incompatible with the requirements – contrary to what is stated in the above post. Whilst readers of this post may be correct in chanting “RTFM!”, it still would be nice to be informed of any potential incompatibilities within Flow itself. Certainly, this can help in preventing any needless head banging early on 🙂

Getting around these restrictions is fairly straightforward if you have the ability to modify the table you want to interact with using Flow. For example, executing the following script against the MyTable table will get it fully prepped for the service:

ALTER TABLE dbo.MyTable
ADD	[FlowID] INT IDENTITY(1,1) NOT NULL,
	[RowVersion] ROWVERSION
	

That being said, there may be certain situations where this is not the best option to implement:

  • The database/tables you are interacting with form part of a proprietary application, therefore making it impractical and potentially dangerous to modify table objects.
  • The table in question could contain sensitive information. Keep in mind the fact that the Microsoft Flow service would require service account access with full SELECT privileges against your target table. This could expose a risk to your environment, should the credentials or the service itself be compromised in future.
  • If your target table already contains an inordinately large number of columns and/or rows, then the introduction of additional columns and processing via an IDENTITY/ROWVERSION seed could start to tip your application over the edge.
  • Your target database does not use an integer field and IDENTITY seed to uniquely identify rows, meaning that such a column needs to be (arguably unnecessarily) added.

An alternative approach to consider would be to configure a “gateway” table for Microsoft Flow to access – one which contains only the fields that Flow needs to process with, is linked back to the source table via a foreign key relationship and which involves the use of a database trigger to automate the creation of the “gateway” record. Note that this approach only works if you have a unique row identifier in your source table in the first place; if your table is recording important, row-specific information and this is not in place, then you should probably re-evaluate your table design 😉

Let’s see how the above example would work in practice, using the following example table:

CREATE TABLE [dbo].[SourceTable]
(
	[SourceTableUID] UNIQUEIDENTIFIER PRIMARY KEY NOT NULL,
	[SourceTableCol1] VARCHAR(50) NULL,
	[SourceTableCol2] VARCHAR(150) NULL,
	[SourceTableCol3] DATETIME NULL
)

In this scenario, the table object is using the UNIQUEIDENTIFIER column type to ensure that each row can be…well…uniquely identified!

The next step would be to create our “gateway” table. Based on the table script above, this would be built out via the following script:

CREATE TABLE [dbo].[SourceTableLog]
(
	[SourceTableLogID] INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
	[SourceTableUID] UNIQUEIDENTIFIER NOT NULL,
	CONSTRAINT FK_SourceTable_SourceTableLog FOREIGN KEY ([SourceTableUID])
		REFERENCES [dbo].[SourceTable] ([SourceTableUID])
		ON DELETE CASCADE,
	[TimeStamp] ROWVERSION
)

The use of a FOREIGN KEY here will help to ensure that the “gateway” table stays tidy in the event that any related record is deleted from the source table. This is handled automatically, thanks to the ON DELETE CASCADE option.

The final step would be to implement a trigger on the dbo.SourceTable object that fires every time a record is INSERTed into the table:

CREATE TRIGGER [trInsertNewSourceTableToLog]
ON [dbo].[SourceTable]
AFTER INSERT
AS
BEGIN
	INSERT INTO [dbo].[SourceTableLog] ([SourceTableUID])
	SELECT [SourceTableUID]
	FROM inserted
END

For those unfamiliar with how triggers work, the inserted table is a special object exposed during runtime that allows you to access the values that have been…OK, let’s move on!
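
If you want to satisfy yourself that everything is wired up correctly, a quick (hypothetical) smoke test along these lines will confirm both the trigger and the ON DELETE CASCADE behaviour:

--Insert a test record; the trigger should create a matching row in the log table
DECLARE @TestUID UNIQUEIDENTIFIER = NEWID()
INSERT INTO [dbo].[SourceTable] ([SourceTableUID], [SourceTableCol1])
VALUES (@TestUID, 'Trigger test')

SELECT * FROM [dbo].[SourceTableLog] WHERE [SourceTableUID] = @TestUID

--Delete the test record; the cascade should remove the log row as well
DELETE FROM [dbo].[SourceTable] WHERE [SourceTableUID] = @TestUID

SELECT * FROM [dbo].[SourceTableLog] WHERE [SourceTableUID] = @TestUID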

With all of the above in place, you can now implement a service account for Microsoft Flow to use when connecting to your database that is sufficiently curtailed in its permissions. This can either be a database user associated with a server level login:

CREATE USER [mydatabase-flow] FOR LOGIN [mydatabase-flow]
	WITH DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

Or a contained database user account (this would be my recommended option):

CREATE USER [mydatabase-flow] WITH PASSWORD = 'P@ssw0rd1',
	DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

From there, the world is your oyster – you can start to implement whatever action, conditions etc. that you require for your particular requirement(s). There are a few additional tips I would recommend when working with SQL Server and Azure:

  • If you need to retrieve specific data from SQL, avoid querying tables directly and instead encapsulate your logic into Stored Procedures (a simple example is sketched after this list).
  • In line with the ethos above, ensure that you always use a dedicated service account for authentication and scope the permissions to only those that are required.
  • If working with Azure SQL, you will need to ensure that you have ticked the Allow access to Azure services option on the Firewall rules page of your server.
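
To illustrate the first tip, a minimal (entirely hypothetical) procedure over the dbo.SourceTableLog table might look like the below, with EXECUTE rights granted to the Flow service account:

CREATE PROCEDURE [dbo].[uspGetSourceTableLogRecords]
AS
BEGIN
	SET NOCOUNT ON;
	--Return only the columns Flow actually needs to work with
	SELECT [SourceTableLogID], [SourceTableUID]
	FROM [dbo].[SourceTableLog]
END

GO

GRANT EXECUTE ON [dbo].[uspGetSourceTableLogRecords] TO [mydatabase-flow]

GO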

Despite some of the challenges you may face in getting your databases up to spec to work with Microsoft Flow, this does not take away from the fact that the tool is incredibly effective in its ability to integrate disparate services together, once you have overcome some initial hurdles at the starting pistol.

The world of database security and protection can be a difficult path to tread at times. I often find myself having to adopt a “tin-foil hat” approach, obsessing over the smallest potential vulnerability that a database could be compromised with. This thought process can be considered easy compared with any protective steps that need to be implemented in practice, as these can often prove to be mind-bogglingly convoluted. This is one of the reasons why I like working with Microsoft Azure and features such as Azure SQL Database Firewall Rules. They present a familiar means of securing your databases to specific IP address endpoints and are not inordinately complex in how they need to be approached; just provide a name, Start/End IP range and hey presto! Your client/ application can communicate with your database. The nicest thing about them is that the feature is enabled by default, meaning you don’t have to worry about designing and implementing a solution to restrict your database from unauthorised access at the outset.

As alluded to above, Database Firewall Rules are added via T-SQL code (unlike Server Rules, which can be specified via the Azure portal), using syntax that most SQL developers should feel comfortable using. If you traditionally prefer to design and build your databases using a Visual Studio SQL Database project, however, you may encounter a problem when looking to add a Database Firewall rule to your project. There is no dedicated template item that can be used to add this to the database. In this eventuality, you would have to look at setting up a Post-Deployment Script or Pre-Deployment Script to handle the creation of any requisite rules you require. Yet this can present the following problems:

  • Visual Studio will be unable to provide you with the basic syntax to create the rules.
  • Related to the above, Intellisense support will be limited, so you may struggle to identify errors in your code until it is deployed.
  • When deploying changes out to your database, the project will be unable to successfully detect (and remove) any rules that are deleted from your project.

The last one could prove to be particularly cumbersome if you are tightly managing the security of your Azure SQL database. Putting aside the obvious risk of someone forgetting to remove a rule as part of a deployment process, you would then have to manually remove the rules by connecting to your database and executing the following T-SQL statement:

EXECUTE sp_delete_database_firewall_rule 'MyDBFirewallRule'

Not the end of the world by any stretch, but if you are using Visual Studio as your deployment method for managing changes to your database, then having to do this step seems a little counter-intuitive. Fortunately, with a bit of creative thinking and utilisation of more complex T-SQL functionality, we can get around the issue by developing a script that carries out the following steps in order:

  • Retrieve a list of all current Database Firewall Rules.
  • Iterate through the list of rules and remove them all from the database.
  • Proceed to re-create the required Database Firewall Rules from scratch.

The second step involves the use of a T-SQL construct that I have traditionally steered away from using – Cursors. This is not because they are bad in any way, but because a) I have previously struggled to understand how they work and b) I had never found a good scenario in which to use them. The best way of understanding them is to put on your C# hat for a few moments and consider the following code snippet:

string[] array = new string[] { "Test1", "Test2", "Test3" };

foreach (string s in array)
{
    Console.WriteLine(s);
}

To summarise how the above works, we take our collection of values – Test1, Test2 and Test3 – and carry out a particular action against each; in this case, print out their value into the console. This, in a nutshell, is how Cursors work, and you have a great deal of versatility on what action you take during each iteration of the “loop”.

With a clear understanding of how Cursors work, the below script, which accomplishes the aims set out above, should hopefully be a lot clearer:

DECLARE @FirewallRule NVARCHAR(128)

DECLARE REMOVEFWRULES_CURSOR CURSOR
	LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT DISTINCT [name]
FROM sys.database_firewall_rules

OPEN REMOVEFWRULES_CURSOR
FETCH NEXT FROM REMOVEFWRULES_CURSOR INTO @FirewallRule
WHILE @@FETCH_STATUS = 0
BEGIN
	EXECUTE sp_delete_database_firewall_rule @FirewallRule
	PRINT 'Firewall rule ' + @FirewallRule + ' has been successfully deleted.'
	FETCH NEXT FROM REMOVEFWRULES_CURSOR INTO @FirewallRule
END
CLOSE REMOVEFWRULES_CURSOR
DEALLOCATE REMOVEFWRULES_CURSOR

GO

EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule1',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';

EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule2',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';
		

To integrate this as part of your existing database project, add a new Post-Deployment Script file and modify the above to reflect your requirements. As the name indicates, the script will run after all other aspects of your solution deployment have been completed. Now, the key caveat to bear in mind with this solution is that, during deployment, there will be a brief period of time where all Database Firewall Rules are removed from the database. This could potentially cause existing database connections to drop or new connections to fail altogether. You should take care when using the above code snippet within a production environment, and I would recommend you look at an alternative solution if your application/system cannot tolerate even a second of downtime.