Software deployments and updates are always painful events for me. This feeling hasn’t subsided over time, even though pretty much every development project I am involved with these days utilises Azure DevOps Release Pipelines to automate the whole process for our team. The key thing to stress about automation is that it does not mean all of your software deployments suddenly become successful, just because you have removed the human error aspect from the equation. In most cases, all you have done is reduce the number of times that you will have to stick your nose in to figure out what’s gone wrong šŸ™‚

Database upgrades, typically performed via a Data-tier Application Package (DACPAC) deployment, can be the most nerve-racking of all. Careful consideration needs to go into the settings you define as part of your publish profile XML; otherwise, you may find that a) specific database changes are blocked entirely, due to dependency issues or because intended data loss will occur, or b) the types of changes you make result in unintended data loss. This last one is a particularly salient concern and one which is best understood by deploying against staging or pre-production environments for your business systems first. Despite the forethought required before you can take advantage of DACPAC deployments, they represent the natural option of choice for managing your database upgrades more formally, particularly when there is a need to manage deployments into Azure SQL databases. This is mostly thanks to the dedicated task that handles it all for us within Azure DevOps:

What this task doesn’t provide, however, is any means of generating a snapshot of the database before the deployment begins, a step which is both a desirable and, often, a necessary business requirement for software deployments. Now, I should point out that Azure SQL includes many built-in recovery and point-in-time restore options. These are pretty extensive and enable you, depending on the database size tier you have opted for, to restore your database to any single point in time within a 30-day period. The question that therefore arises is fairly obvious – why go to the extra bother (and cost) of creating a separate database backup? Consider the following:

  • The recovery time for a point-in-time restore can vary greatly, depending on several factors relating to your database size, current pricing tier and any transactions that may be running on the database itself. In situations where a short release window constrains you and your release must satisfy a strict success/fail condition, having to go through the restore process after a database upgrade could leave your application down longer than is mandated within your organisation. Having a previous version of the database already available means you can very quickly update your application connection strings to return the system to operational use if required.
  • Having a replica copy of the database available directly after an upgrade can be incredibly useful if you need to reference data from the old version of the database post-upgrade. For example, a column may have been removed from one table and added to another, with a need to copy all of this data across accordingly. Although a point-in-time restore can be performed to expose this information, having a backup of the old version of the database available straight after the upgrade helps to expedite this work.
  • Although Microsoft promise and provide an SLA for point-in-time restore, it’s sometimes best to err on the side of caution. šŸ™‚ By taking a snapshot of the database yourself, you have full control over its hosting location and the peace of mind of knowing that the database is instantly accessible in case of an issue further down the line.

If any of the above conditions apply, then you can generate a copy of your database before any DACPAC deployment takes place via an Azure PowerShell script task. The example script below shows how to achieve this requirement and is designed to mirror a specific business process: a readily accessible backup is generated before any upgrade takes place, as a copy of the database within the same Azure SQL Server instance but with the current date value appended onto its name. When a new deployment is triggered in future, the script will delete the previously backed-up database:

#Define parameters for the Azure SQL Server name, resource group and target database

$servername = 'mysqlservername'
$rg = 'myresourcegroup'
$db = 'mydb'

#Get any previous backed up databases and remove these from the SQL Server instance

$sqldbs = Get-AzureRmSqlDatabase -ResourceGroupName $rg -ServerName $servername | select DatabaseName | Where-Object {$_.DatabaseName -like $db + '_Backup*'}

if (($sqldbs | Measure-Object).Count -gt 0)
{
	Remove-AzureRmSqlDatabase -ResourceGroupName $rg -ServerName $servername -DatabaseName $sqldbs[0].DatabaseName
}

#Get the current date and convert it into a string, with format DD_MM_YYYY

$date = (Get-Date).ToString('dd_MM_yyyy')

#Create the name of the new database

$copydbname = $db + '_Backup_' + $date

#Actually create the copy of the database

New-AzureRmSqlDatabaseCopy -CopyDatabaseName $copydbname -DatabaseName $db -ResourceGroupName $rg -ServerName $servername -CopyServerName $servername

Simply add this on as a pipeline task before any database deployment task, connect up to your Azure subscription and away you go!

Backups are an enduring aspect of any significant piece of IT delivery work and one which cloud service providers, such as Microsoft, have proactively tried to implement as part of their Platform-as-a-Service (PaaS) product lines. Azure SQL is no different in this regard and, you could argue, the point-in-time restore options listed above provide sufficient assurance in the event of a software deployment failure or a disaster-recovery scenario, meaning that no extra steps are necessary to protect yourself. Consider your particular needs carefully when looking to implement a solution like the one described in this post: although it does afford you the ability to recover quickly from any failed software deployment, it introduces additional complexity into your deployment procedures, overall technical architecture and – perhaps most importantly – cost.

Is it just me or is British Summer Time (BST), AKA Daylight Saving Time (DST), an utterly pointless endeavour these days? Admittedly, on its introduction in 1916, it fulfilled a sensible objective – to provide more daylight hours during the summer. For agricultural, construction or other services reliant on sufficient light to carry out their work, this was a godsend. In today’s modern world, with the proliferation of electricity and lighting services in almost every corner of the UK, the whole concept now appears to be a curious relic of the western world. No major Asian, African or South American country adopts the practice and, given the increased role that these continents now play on the global stage, it wouldn’t surprise me if the whole concept is consigned to the scrapheap within our lifetimes.

My fury concerning BST surfaces thanks to my experience working with IT systems and, in particular, Microsoft Azure. Typically, any service with a Windows OS backend will do a pretty good job of determining the correct locale settings that need applying, including BST/DST. These settings will generally be inherited by most applications installed on the machine, including SQL Server. You can verify this at any time by running a query kindly demonstrated by SQL Server legend Pinal Dave against your SQL Server instance.
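The query relies on the undocumented xp_regread extended stored procedure and boils down to something along these lines:

DECLARE @TimeZone VARCHAR(50)
EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE',
	'SYSTEM\CurrentControlSet\Control\TimeZoneInformation',
	'TimeZoneKeyName', @TimeZone OUT
SELECT @TimeZone AS ServerTimeZone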

As you can see from the underlying query, it is explicitly checking a Registry Key Value on the Windows Server where SQL Server resides – which has been set up for British (UK) locale settings. The Registry Key folder will, additionally, include information to tell the machine when to factor in BST/DST.

This is all well and good if we are managing dedicated, on-premise instances of SQL Server, where we have full control over our server environments. But what happens on a Single Azure SQL database within the UK South region? The above code snippet is not compatible with Azure SQL, so we have no way of finding out ourselves. We must turn to the following blog post from Microsoft to clarify things for us:

Currently, the default time zone on Azure SQL DB is UTC. Unfortunately, there is not possible to change by server configuration or database configuration.

We can verify this by running a quick GETDATE() query and comparing the result against the current time during BST.
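A query along these lines is all that’s needed:

SELECT GETDATE() AS CurrentDateTime, @@VERSION AS SQLVersion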

(@@VERSION returns the current edition/version of the SQL instance which, in this case, we can confirm is Azure SQL)

The result of all of this is that all date/time values in Azure SQL will be stored in UTC format, meaning that you will have to manage any conversions yourself between interfacing applications or remote instances. Fortunately, there is a way that you can resolve this issue without ever leaving Azure SQL.

On all versions of SQL Server, Microsoft provides a system view that returns every time zone supported by the instance. Using this view, we can determine the correct time zone name to use for BST by listing them all out:

SELECT *
FROM sys.time_zone_info
ORDER BY [name]

As highlighted above, for BST, GMT Standard Time is our time zone of choice, and we can see the correct offset against it. An additional flag field, is_currently_dst, is included to indicate whether the offset currently incorporates daylight saving time or not.

With the name value in hand, we have everything we need to start working with a clause I was overjoyed to discover recently – AT TIME ZONE. When included as part of selecting a date field type (datetime, datetime2 etc.), it adds the time zone offset value to the end of the date value. So, with some tinkering to our earlier GETDATE() query, we get the following output:

SELECT GETDATE() AT TIME ZONE 'GMT Standard Time', @@VERSION

While this is useful, in most cases, we would want any offset to be automatically applied against our Date/Time value. With some further refinement to the query via the DATEADD function, this requirement becomes achievable, and we can also view each value separately to verify everything is working as intended:

SELECT GETDATE() AS CurrentUTCDateTime,
	   GETDATE() AT TIME ZONE 'GMT Standard Time' AS CurrentUTCDateTimeWithGMTOffset,
	   DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'GMT Standard Time'), GETDATE()) AS CurrentGMTDateTime,
	   @@VERSION AS SQLVersion
	   
	   

Even better, the above works regardless of whether the offset is an increase or decrease against UTC. We can verify this fact by adjusting the query to instead convert into Haiti Standard Time which, at the time of writing, is observing DST and therefore sits at a UTC offset of -4 hours.
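The adjusted query simply swaps in the new time zone name, along these lines:

SELECT GETDATE() AS CurrentUTCDateTime,
	   GETDATE() AT TIME ZONE 'Haiti Standard Time' AS CurrentUTCDateTimeWithHSTOffset,
	   DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'Haiti Standard Time'), GETDATE()) AS CurrentHSTDateTime,
	   @@VERSION AS SQLVersion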

So as we can see, a few lines of code mean that we can straightforwardly work with data in our desired locale. šŸ™‚

It does seem somewhat counter-intuitive that a service, such as Azure SQL, hosted within a specific location, does not work in the correct date/time formats for that region. When you consider the global scale of the Azure network and the challenges this brings to the fore, the decision to revert to a single time zone for all systems does make sense and provides a level & consistent playing field for developers. One comment I would have is that this particular quirk does not appear to be well signposted for those who are just getting started with the service, an omission that could cause severe business or technical problems in the long term if not correctly detected. What is ultimately most fortuitous is the simple fact that no overly complex coding or client application changes are required to fix this quirk. Which is how all IT matters should be – easy and straightforward to resolve.

When considering whether or not to shift your existing SQL workloads to a single database offering on Azure SQL, one of the major pros is the breadth of capabilities the service can offer when compared with other vendors or with SQL Server running on an Azure Virtual Machine. These include:

  • High feature parity with the latest on-premise SQL Server offering.
  • Built-in support for Enterprise product features, such as Transparent Data Encryption.
  • Security management features, such as firewalls and (optional) integration with Azure SQL Database Threat Detection for proactive monitoring.
  • Ability to quickly scale a database from a 2GB database with low CPU consumption to a mammoth 4TB database, with a significant pool of CPU/memory resources to match.

It is the last of these that makes Azure SQL database a particularly good fit for web application deployments that have unpredictable user loads at the time of deployment or, as we have seen previously on the blog, when you want to deploy a LOB reporting database that houses Dynamics 365 Customer Engagement instance data. Administrators can very straightforwardly scale or downscale a database at any time within the portal or, if you are feeling particularly clever, you can look to implement automatic scaling based on Database Throughput Unit (DTU) consumption. This can help keep your query execution times as speedy as possible.
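As a quick aside, if you want to gauge how close the database is sitting to its current DTU limits before deciding whether to scale, a query along these lines against the sys.dm_db_resource_stats view (which records resource usage for roughly the last hour) gives you the underlying percentages to work from:

SELECT TOP 10 end_time,
	   avg_cpu_percent,
	   avg_data_io_percent,
	   avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC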

Database scaling, I have found, is very straightforward to get your head around and works like a charm for the most part…except, of course, when you get rather cryptic error messages like the one demonstrated below:

I got this error recently when attempting to scale an S0 5GB database down to the Basic 2GB tier. To cut a long story short, I had temporarily scaled up the database to give me increased DTU capacity for a particularly intensive query and wanted to scale it back to its original pricing tier. You can perhaps understand my confusion about why this error was occurring. After further research and escalation to Microsoft, it turns out that the database was still consuming unused disk space on the platform, thereby violating the size limits imposed by moving to a lower pricing tier. To resolve the issue, there are some tasks that need to be performed on the database to get it into a ā€œdownscale-ready stateā€. These consist of a series of T-SQL scripts, which I would caution against using if the database is currently in use, due to potential performance impacts. If you have found yourself in the same boat as me and are happy to proceed, the steps involved are as follows:

  1. To begin with, the script below will execute the DBCC SHRINKFILE command against the database file, shrinking it in ā€œchunksā€ based on the value of the @ShrinkChunkSize parameter until the size specified in the @DesiredFileSize parameter is reached. Performing the shrink in increments like this may be useful in managing DTU consumption:
SET NOCOUNT ON

DECLARE @CurrentFileSize INT, @DesiredFileSize INT, @ShrinkChunkSize INT, @ActualSizeMB INT,
		@ErrorIndication INT, @dbFileID INT = 1, @LastSize INT, @SqlCMD NVARCHAR(MAX),
		@msg NVARCHAR(100)

/*Set these values for the current operation, size is in MB*/
SET @DesiredFileSize = 2000  /* filesize is in MB */
SET @ShrinkChunkSize = 50 /* chunk size is in MB */

SELECT @CurrentFileSize = size/128 FROM sysfiles WHERE fileid = @dbFileID

SELECT @ActualSizeMB = (SUM(total_pages) / 128) FROM sys.allocation_units

SET @msg = 'Current File Size: ' + CAST(@CurrentFileSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET  @msg = 'Actual used Size: ' + CAST(@ActualSizeMB AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @msg = 'Desired File Size: ' + CAST(@DesiredFileSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @msg = 'Iteration shrink size: ' + CAST(@ShrinkChunkSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @ErrorIndication = CASE
							WHEN @DesiredFileSize > @CurrentFileSize THEN 1
							WHEN @ActualSizeMB > @DesiredFileSize THEN 2
							ELSE 0 END

IF @ErrorIndication = 1  
	RAISERROR('[Error] Desired size bigger than current size',0,0) WITH NOWAIT
IF @ErrorIndication = 2  
	RAISERROR('[Error] Actual size is bigger than desired size',0,0) WITH NOWAIT
IF @ErrorIndication = 0 
	RAISERROR('Desired Size check - OK',0,0) WITH NOWAIT

SET @LastSize = @CurrentFileSize + 1

WHILE @CurrentFileSize > @DesiredFileSize /*check if we got the desired size*/ AND @LastSize>@CurrentFileSize /* check if there is progress*/ AND @ErrorIndication=0
BEGIN
	SET @msg = CAST(GETDATE() AS VARCHAR(100)) + ' - Iteration starting'
	RAISERROR(@msg,0,0) WITH NOWAIT
	SELECT @LastSize = size/128 FROM sysfiles WHERE fileid = @dbFileID
	SET @sqlCMD = 'DBCC SHRINKFILE('+ CAST(@dbFileID AS VARCHAR(7)) + ',' + CAST(@CurrentFileSize-@ShrinkChunkSize AS VARCHAR(7)) + ') WITH NO_INFOMSGS;'
	EXEC (@sqlCMD)
	SELECT @CurrentFileSize = size/128 FROM sysfiles WHERE fileid  =@dbFileID
	SET @msg = CAST(getdate() AS VARCHAR(100)) + ' - Iteration completed. current size is: ' + CAST(@CurrentFileSize AS VARCHAR(10))
	RAISERROR(@msg,0,0) WITH NOWAIT
END
PRINT 'Done' 
  2. With the database successfully shrunk, verify that the size of the database does not exceed your target @DesiredFileSize value by running the following queries:
SELECT * FROM sys.database_files
 
SELECT (SUM(reserved_page_count) * 8192) / 1024 / 1024 AS DbSizeInMB
FROM    sys.dm_db_partition_stats
  3. Although, by this stage, the database file size should be under 2GB, the maximum size of the database is still set to match the existing pricing tier. To fix this, execute the following script, substituting the name of your database where appropriate:
ALTER DATABASE MyDatabase MODIFY (MAXSIZE=2GB) 

You can confirm that this command has executed successfully by running the following query and reviewing the output:

SELECT CAST(DATABASEPROPERTYEX ('MyDatabase', 'MaxSizeInBytes') AS FLOAT)/1024.00/1024.00/1024.00 AS 'DB Size in GB'
  4. With the above commands executed, you are now in a position to scale down your database without issue. There are a few ways this can be done but, as you likely already have SQL Server Management Studio or similar open to run the above queries, you can modify the tier of your database via this handy script:
--Scaling down to Basic is easy, as there is only one Max Size/Service Level Objective
--Therefore, just specify Edition

ALTER DATABASE MyDatabase MODIFY (EDITION = 'Basic');

--For other tiers, specify the size of the DB.
--In this example, we are scaling down from Premium P1 1TB to Standard S2 250GB tier

ALTER DATABASE MyDatabase MODIFY (EDITION = 'Standard', MAXSIZE = 250 GB, SERVICE_OBJECTIVE = 'S2');

Although the script will likely execute immediately and indicate as such in any output, the actual scaling operation on the backend Azure platform can take some time to complete – usually about 5-10 minutes for smaller databases.
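If you want to keep an eye on the scaling operation while it runs, one option is to query the sys.dm_operation_status DMV against the master database of your Azure SQL server, which should list the operation alongside a percentage complete figure:

SELECT major_resource_id AS DatabaseName,
	   operation,
	   state_desc,
	   percent_complete,
	   start_time,
	   last_modify_time
FROM sys.dm_operation_status
ORDER BY start_time DESC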

Whilst I was relieved that a workaround was available to get the database scaled down correctly, it would have been useful if the above error message was signposted better or if there was some kind of online support article detailing that this could be a potential issue when moving a database between the various pricing/sizing tiers. Hopefully, by sharing the above steps, others who are in the same boat can very quickly diagnose and resolve the issue without hammering their credit card with increased database usage charges in the process. šŸ™‚

If you are looking for an easy-to-use and highly expandable mail relay service, SendGrid represents the most developer-friendly solution out in the market today. What’s even better is that it’s available on Azure, making it the ideal choice if you are developing an existing solution on the Azure stack. The best thing I like about the service is the extensive documentation covering every aspect of its Web API, structured to provide a clear explanation of endpoint methods, required properties, and example outputs – exactly the right way that all technical documentation should be laid out.

I recently had a requirement to integrate with the SendGrid API to extract email statistic information into a SQL database. My initial thoughts were that I would need to resort to a bespoke C# solution to achieve these requirements. However, keenly remembering my commitment this year to find opportunities to utilise the service more, I decided to investigate whether Microsoft Flow could streamline this process. Suffice to say, I was pleasantly surprised, and what I wanted to do as part of this week’s blog post was demonstrate how I was able to take advantage of Microsoft Flow to deliver my requirements. In the process, I hope to get you thinking about how you approach integration requirements in the future, and to challenge some of the preconceptions around this.

Before we get into creating the Flow itself…

…you will need to create a table within your SQL database to store the requisite data. This script should do the trick:

CREATE TABLE [dbo].[SendGridStatistic]
(
	[SendGridStatisticUID] [uniqueidentifier] NULL DEFAULT NEWID(),
	[Date] DATE NOT NULL,
	[CategoryName] VARCHAR(50) NULL,
	[Blocks] FLOAT NOT NULL,
	[BounceDrops] FLOAT NOT NULL,
	[Bounces] FLOAT NOT NULL,
	[Clicks] FLOAT NOT NULL,
	[Deferred] FLOAT NOT NULL,
	[Delivered] FLOAT NOT NULL,
	[InvalidEmail] FLOAT NOT NULL,
	[Opens] FLOAT NOT NULL,
	[Processed] FLOAT NOT NULL,
	[SpamReportDrops] FLOAT NOT NULL,
	[SpamReports] FLOAT NOT NULL,
	[UniqueClicks] FLOAT NOT NULL,
	[UniqueOpens] FLOAT NOT NULL,
	[UnsubscribeDrops] FLOAT NOT NULL,
	[Unsubscribes] FLOAT NOT NULL
)

A few things to point out with the above:

  • The CategoryName field is only required if you wish to return statistic information grouped by category from the API. The example that follows primarily covers this scenario, but I will also demonstrate how to return consolidated statistic information as well, should you wish to exclude this column.
  • Microsoft Flow will only be able to map the individual statistic count values to FLOAT fields. If you attempt to use an INT, BIGINT etc. data type, then the option to map these fields will not appear. Kind of annoying, given that FLOATs are effectively ā€œdirtyā€, imprecise numbers, but since we are not working with decimal values, this shouldn’t cause any real problems.
  • The SendGridStatisticUID is technically optional and could be replaced by an INT/IDENTITY seed instead, or removed entirely. Remember, though, that it is always good practice to have a unique column value for each table, to aid in individual record operations.

In addition, you will also need to ensure you have generated an API key for SendGrid that has sufficient privileges to access the Statistic information for your account.

With everything ready, we can now ā€œflowā€ quite nicely into building things out. The screenshot below demonstrates how the completed Flow should look from start to finish. The sections that follow will discuss what is required for each constituent element.

Recurrence

The major boon when working with Flow is the diverse options you have for triggering flows – either based on certain conditions within an application or simply off a recurring schedule. For this example, as we will be extracting statistic information for an entire 24-hour period, you should ensure that the Flow executes at least once daily. The precise timing of this is up to you, but for this example, I have suggested 2 AM local time each day. The configured recurrence settings should resemble the below if done correctly:

You should be aware that when your Flow is first activated, it will execute straightaway, regardless of what settings you have configured above.

HTTP

As the SendGrid Web API is an HTTP endpoint, we can utilise the built-in HTTP connector to retrieve the information we need. This is done via a GET operation, with authentication achieved via a Raw header value containing the API key generated earlier. The tricky bit comes in building the URI so that the Flow retrieves exactly what we want – namely, all statistic information covering the previous day. There is also the (optional) requirement of ensuring that statistic information is grouped by category when retrieved. Fortunately, we can get around this problem by using a bit of Expression trickery to build a dynamic URI value each time the Flow is executed. The expression code to use will depend on whether or not you require category grouping. I have provided both examples below, so simply choose the one that meets your specific requirement:

Retrieve Consolidated Statistics
concat('https://api.sendgrid.com/v3/stats?start_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&end_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')))
Retrieve Statistics Grouped By Category
concat('https://api.sendgrid.com/v3/categories/stats?start_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&end_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&categories=cat1&categories=cat2')

Note: For this example, statistic information would be returned only for the categories that equal cat1 & cat2. These should be updated to suit your requirements, and you can add on additional categories by extending the URI value like so: &categories=cat3&categories=cat4 etc.

Your completed HTTP component should resemble the below if done correctly. Note in particular the requirement to have Bearer and a space before specifying your API key:

Parse JSON

A successful 200 response from the Web API endpoint will return a JSON object, listing all statistic information grouped by date (and category, if used). I always struggle when it comes to working with JSON – a symptom of working too long with relational databases, I think – and serialised result sets like these are always challenging for me to work with. Once again, Flow comes to the rescue by providing a Parse JSON component. This was introduced with what appears to be little fanfare last year, but it really proves its capabilities in this scenario. The only bit you will need to worry about is providing a sample schema so that the service can properly interpret your data. The Use sample payload to generate schema option is the surest way of achieving this, and you can use the example payloads provided on the SendGrid website to facilitate this:

Retrieve Consolidated Statistics: https://sendgrid.com/docs/API_Reference/Web_API_v3/Stats/global.html

Retrieve Statistics Grouped By Category: https://sendgrid.com/docs/API_Reference/Web_API_v3/Stats/categories.html

An example screenshot is provided below in case you get stuck with this:

Getting the records into the database

Here’s where things get confusing…at least for me when I was building out this Flow for the first time. When you attempt to add an Insert row step to the Flow and specify your input from the Parse JSON step, Microsoft Flow will automatically add two Apply to each steps to properly handle the input. I can understand why this is the case, given that we are working with a nested JSON response, but it does provide an ample opportunity to revisit an internet meme of old…

Just pretend Xzibit is Microsoft Flow…

With the above ready and primed, you can begin to populate your Insert row step. Your first step here will more than likely be to configure your database connection settings using the + Add New Connection option:

The nicest thing about this is that you can utilise the on-premises data gateway to connect to a non-cloud database if required. Usual rules apply, regardless of where your database is located – use a minimum-privileged account, configure any required IP whitelisting etc.

With your connection configured, all that’s left is to provide the name of your table and then perform a field mapping exercise from the JSON response. If you are utilising the SendGridStatisticUID field, then this should be left blank to ensure that the default constraint kicks in correctly on the database side:

The Proof is in the Pudding: Testing your Flow

All that’s left now is to test your Flow. As highlighted earlier in the post, your Flow will automatically execute after being enabled, meaning that you will be in a position to determine very quickly if things are working or not. Assuming everything executes OK, you can verify that your database table resembles the below example:

This example output utilises the CategoryName value, which will result in multiple data rows for each date, depending on the number of categories you are working with. This is why the SendGridStatisticUID is so handy for this scenario šŸ™‚
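If you want to double-check the raw rows yourself, a simple query along these lines will do the job:

SELECT TOP 10 *
FROM [dbo].[SendGridStatistic]
ORDER BY [Date] DESC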

Conclusions or Wot I Think

When it came to delivering the requirements set out in this post’s introduction, I cannot overemphasise how much Microsoft Flow saved my bacon. My initial scoping exercise strongly led me towards having to develop a fully bespoke solution in code, with additional work then required to deploy it out to a dedicated environment for continuous execution. This would surely have led to:

  • Increased deployment time
  • Additional cost for the provisioning of a dedicated execution environment
  • Wasted time/effort due to bug-fixing or unforeseen errors
  • Long-term problems resulting from maintaining a custom code base and ensuring that other colleagues within the business could interpret the code correctly.

Instead, I was able to design, build and fully test the above solution in less than 2 hours, utilising a platform that has a wide array of documentation and online support and which, for our particular scenario, did not result in any additional cost. And this, I feel, best summarises the value that Microsoft Flow can bring to the table. It overturns many of the assumptions that you generally have to make when implementing complex integration requirements, allowing you to instead focus on delivering an effective solution quickly and neatly. And, for those who need a bit more punch due to highly specific business requirements, Azure Logic Apps act as the perfect meeting ground for both sides of the spectrum. The next time you find yourself staring down the abyss of a seemingly impossible integration requirement, take a look at what Microsoft Flow can offer. You just might surprise yourself.

Microsoft Flow is a tool that I increasingly have to bring front and centre when considering how to straightforwardly accommodate certain business requirements. The problem I have had with it, at times, is that there are often some notable caveats when attempting to achieve something that looks relatively simple from the outset. A good example of this is the SQL Server connector which, on a headline reading, enables you to trigger workflows when rows are added or changed within a database. Having the ability to trigger an email based on a database record update, create a document on OneDrive or even post a Tweet based on a modified database record are all things that instantly have a high degree of applicability for any number of different scenarios. When you read the fine print behind this, however, there are a few things which you have to bear in mind:

Limitations

The triggers do have the following limitations:

  • It does not work for on-premises SQL Server
  • Table must have an IDENTITY column for the new row trigger
  • Table must have a ROWVERSION (a.k.a. TIMESTAMP) column for the modified row trigger

A slightly frustrating side to this is that Microsoft Flow doesn’t intuitively tell you when your table is incompatible with these requirements – contrary to what is stated in the above post. Whilst readers of this post may be correct in chanting ā€œRTFM!ā€, it would still be nice to be informed of any potential incompatibilities within Flow itself. Certainly, this can help in preventing any needless head-banging early on šŸ™‚

Getting around these restrictions is fairly straightforward if you have the ability to modify the table you want to interact with using Flow. For example, executing the following script against the MyTable table will get it fully prepped for the service:

ALTER TABLE dbo.MyTable
ADD	[FlowID] INT IDENTITY(1,1) NOT NULL,
	[RowVersion] ROWVERSION
	

Even accepting this, there may be certain situations where this is not the best option to implement:

  • The database/tables you are interacting with form part of a proprietary application, therefore making it impractical and potentially dangerous to modify table objects.
  • The table in question could contain sensitive information. Keep in mind the fact that the Microsoft Flow service would require service account access with full SELECT privileges against your target table. This could expose a risk to your environment, should the credentials or the service itself be compromised in future.
  • If your target table already contains an inordinately large number of columns and/or rows, then the introduction of additional columns and processing via an IDENTITY/ROWVERSION seed could start to tip your application over the edge.
  • Your target database does not use an integer field and IDENTITY seed to uniquely identify rows, meaning that such a column needs to be (arguably unnecessarily) added.

An alternative approach to consider would be to configure a ā€œgatewayā€ table for Microsoft Flow to access – one which contains only the fields that Flow needs to work with, is linked back to the source table via a foreign key relationship and which uses a database trigger to automate the creation of the ā€œgatewayā€ record. Note that this approach only works if you have a unique row identifier in your source table in the first place; if your table is recording important, row-specific information and this is not in place, then you should probably re-evaluate your table design šŸ˜‰

Let’s see how the above example would work in practice, using the following example table:

CREATE TABLE [dbo].[SourceTable]
(
	[SourceTableUID] UNIQUEIDENTIFIER PRIMARY KEY NOT NULL,
	[SourceTableCol1] VARCHAR(50) NULL,
	[SourceTableCol2] VARCHAR(150) NULL,
	[SourceTableCol3] DATETIME NULL
)

In this scenario, the table object is using the UNIQUEIDENTIFIER column type to ensure that each row can be…well…uniquely identified!

The next step would be to create our “gateway” table. Based on the table script above, this would be built out via the following script:

CREATE TABLE [dbo].[SourceTableLog]
(
	[SourceTableLogID] INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
	[SourceTableUID] UNIQUEIDENTIFIER NOT NULL,
	CONSTRAINT FK_SourceTable_SourceTableLog FOREIGN KEY ([SourceTableUID])
		REFERENCES [dbo].[SourceTable] ([SourceTableUID])
		ON DELETE CASCADE,
	[TimeStamp] ROWVERSION
)

The use of a FOREIGN KEY here will help to ensure that the “gateway” table stays tidy in the event that any related record is deleted from the source table. This is handled automatically, thanks to the ON DELETE CASCADE option.

The final step would be to implement a trigger on the dbo.SourceTable object that fires every time a record is INSERTed into the table:

CREATE TRIGGER [trInsertNewSourceTableToLog]
ON [dbo].[SourceTable]
AFTER INSERT
AS
BEGIN
	INSERT INTO [dbo].[SourceTableLog] ([SourceTableUID])
	SELECT [SourceTableUID]
	FROM inserted
END

For those unfamiliar with how triggers work, the inserted table is a special object exposed during runtime that allows you to access the values that have been…OK, let’s move on!

With all of the above in place, you can now implement a service account for Microsoft Flow to use when connecting to your database, one that is sufficiently curtailed in its permissions. This can either be a database user associated with a server-level login:

CREATE USER [mydatabase-flow] FOR LOGIN [mydatabase-flow]
	WITH DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

Or a contained database user account (this would be my recommended option):

CREATE USER [mydatabase-flow] WITH PASSWORD = 'P@ssw0rd1',
	DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

From there, the world is your oyster – you can start to implement whatever actions, conditions etc. you require for your particular requirement(s). There are a few additional tips I would recommend when working with SQL Server and Azure:

  • If you need to retrieve specific data from SQL, avoid querying tables directly and instead encapsulate your logic in Stored Procedures (see the example sketch after this list).
  • In line with the ethos above, ensure that you always use a dedicated service account for authentication and scope the permissions to only those that are required.
  • If working with Azure SQL, you will need to ensure that you have ticked the Allow access to Azure services option on the Firewall rules page of your server.
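To illustrate the first point, a minimal sketch of a Stored Procedure for the earlier example might look like the following (the procedure name is entirely hypothetical, so adapt it to your own conventions):

CREATE PROCEDURE [dbo].[uspGetSourceTableLogRecords]
AS
BEGIN
	SET NOCOUNT ON
	--Return only the columns that Flow actually needs to work with
	SELECT [SourceTableLogID], [SourceTableUID]
	FROM [dbo].[SourceTableLog]
END
GO

--Grant the Flow service account rights to execute the procedure only
GRANT EXECUTE ON [dbo].[uspGetSourceTableLogRecords] TO [mydatabase-flow]
GO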

Despite some of the challenges you may face in getting your databases up to spec to work with Microsoft Flow, this does not take away from the fact that the tool is incredibly effective in its ability to integrate disparate services, once you have overcome a few initial hurdles out of the starting blocks.