Microsoft Flow is a tool that I increasingly have to bring front and centre when considering how to straightforwardly accommodate certain business requirements. The problem I have had with it, at times, is that there are often notable caveats when attempting something that looks relatively simple at the outset. A good example of this is the SQL Server connector which, on a headline reading, enables you to trigger workflows when rows are added or changed within a database. Being able to send an email, create a document on OneDrive or even post a Tweet whenever a database record is added or modified is something that instantly has a high degree of applicability across any number of scenarios. When you read the fine print behind this, however, there are a few things which you have to bear in mind:
The triggers do have the following limitations:
- The triggers do not work for on-premises SQL Server
- Table must have an IDENTITY column for the new row trigger
- Table must have a ROWVERSION (a.k.a. TIMESTAMP) column for the modified row trigger
A slightly frustrating side to this is that Microsoft Flow doesn’t intuitively tell you when your table is incompatible with these requirements - contrary to what is stated in the above post. Whilst readers of this post may be correct in chanting “RTFM!”, it would still be nice to be informed of any potential incompatibilities within Flow itself. Certainly, this can help prevent some needless head banging early on :)
Getting around these restrictions is fairly straightforward, provided you have the ability to modify the table you want Flow to interact with. For example, executing the following script against the MyTable table will get it fully prepped for the service:
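The original script isn’t reproduced here, but a minimal version might look something like this (the column names FlowID and FlowRowVersion are illustrative):

```sql
-- Adds an IDENTITY column, required by the "new row" trigger.
ALTER TABLE dbo.MyTable
ADD FlowID INT IDENTITY(1,1) NOT NULL;

-- Adds a ROWVERSION (a.k.a. TIMESTAMP) column, required by the
-- "modified row" trigger. Its value is maintained automatically.
ALTER TABLE dbo.MyTable
ADD FlowRowVersion ROWVERSION;
```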
Accepting this fact, there may be certain situations when this is not the best option to implement:
- The database/tables you are interacting with form part of a proprietary application, making it impractical and potentially dangerous to modify table objects.
- The table in question could contain sensitive information. Keep in mind the fact that the Microsoft Flow service would require service account access with full SELECT privileges against your target table. This could expose a risk to your environment, should the credentials or the service itself be compromised in future.
- If your target table already contains an inordinately large number of columns and/or rows, then the introduction of additional columns and the overhead of maintaining IDENTITY/ROWVERSION values could start to tip your application over the edge.
- Your target table does not use an integer field with an IDENTITY seed to uniquely identify rows, meaning that such a column needs to be (arguably unnecessarily) added.
An alternative approach to consider would be to configure a “gateway” table for Microsoft Flow to access - one which contains only the fields that Flow needs to work with, is linked back to the source table via a foreign key relationship and which uses a database trigger to automate the creation of each “gateway” record. Note that this approach only works if you have a unique row identifier in your source table in the first place; if your table is recording important, row-specific information and this is not in place, then you should probably re-evaluate your table design ;)
Let’s see how the above example would work in practice, using the following example table:
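The original table definition isn’t shown here, but a minimal sketch consistent with what follows (the non-key column names are assumptions) could be:

```sql
-- Illustrative source table: a UNIQUEIDENTIFIER primary key
-- ensures every row can be uniquely identified.
CREATE TABLE dbo.SourceTable
(
    SourceTableID UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT PK_SourceTable PRIMARY KEY
        CONSTRAINT DF_SourceTable_ID DEFAULT NEWID(),
    RecordName    NVARCHAR(100) NOT NULL,
    CreatedOn     DATETIME2 NOT NULL
        CONSTRAINT DF_SourceTable_CreatedOn DEFAULT SYSUTCDATETIME()
);
```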
In this scenario, the table object is using the UNIQUEIDENTIFIER column type to ensure that each row can be…well…uniquely identified!
The next step would be to create our “gateway” table. Based on the table script above, this would be built out via the following script:
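A sketch of such a “gateway” table, assuming the dbo.SourceTable definition above (the GatewayTable name is illustrative), would be:

```sql
-- Minimal table for Flow to monitor: an IDENTITY column for the
-- "new row" trigger, a ROWVERSION column for the "modified row"
-- trigger, and a foreign key back to the source table.
CREATE TABLE dbo.GatewayTable
(
    GatewayTableID INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_GatewayTable PRIMARY KEY,
    SourceTableID  UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT FK_GatewayTable_SourceTable
        FOREIGN KEY REFERENCES dbo.SourceTable (SourceTableID)
        ON DELETE CASCADE,
    GatewayRowVersion ROWVERSION
);
```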
The use of a FOREIGN KEY here will help to ensure that the “gateway” table stays tidy in the event that any related record is deleted from the source table. This is handled automatically, thanks to the ON DELETE CASCADE option.
The final step would be to implement a trigger on the dbo.SourceTable object that fires every time a record is INSERTed into the table:
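A minimal version of such a trigger, assuming the illustrative table names used above, might look like this:

```sql
-- Fires after each INSERT on dbo.SourceTable and copies the new
-- row identifier(s) into the gateway table for Flow to pick up.
CREATE TRIGGER dbo.TR_SourceTable_Insert
ON dbo.SourceTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- "inserted" is the pseudo-table exposing the newly added rows;
    -- a set-based INSERT handles multi-row inserts correctly.
    INSERT INTO dbo.GatewayTable (SourceTableID)
    SELECT SourceTableID
    FROM inserted;
END;
```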
For those unfamiliar with how triggers work, the inserted table is a special object exposed at runtime that allows you to access the values of the rows that have just been…OK, let’s move on!
With all of the above in place, you can now implement a service account for Microsoft Flow to use when connecting to your database that is sufficiently curtailed in its permissions. This can either be a database user associated with a server level login:
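For example (login name, password and database name are placeholders):

```sql
-- Server-level login, then a database user mapped to it.
CREATE LOGIN FlowService WITH PASSWORD = '<StrongPasswordHere>';
GO

USE MyDatabase;
GO

CREATE USER FlowService FOR LOGIN FlowService;

-- Scope permissions to the gateway table only.
GRANT SELECT ON dbo.GatewayTable TO FlowService;
```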
Or a contained database user account (this would be my recommended option):
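A sketch of the contained user equivalent (again, names and password are placeholders):

```sql
-- Contained database user: no server-level login required, which
-- keeps the account scoped to this one database - handy for
-- Azure SQL in particular.
CREATE USER FlowService WITH PASSWORD = '<StrongPasswordHere>';

GRANT SELECT ON dbo.GatewayTable TO FlowService;
```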
From there, the world is your oyster - you can start to implement whatever action, conditions etc. that you require for your particular requirement(s). There are a few additional tips I would recommend when working with SQL Server and Azure:
- If you need to retrieve specific data from SQL, avoid querying tables directly and instead encapsulate your logic into Stored Procedures.
- In line with the ethos above, ensure that you always use a dedicated service account for authentication and scope the permissions to only those that are required.
- If working with Azure SQL, you will need to ensure that you have ticked the Allow access to Azure services option on the Firewall rules page of your server.
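To illustrate the first tip, a hypothetical stored procedure that Flow could call instead of querying the table directly might be:

```sql
-- Hypothetical procedure wrapping the read logic, so the service
-- account can be granted EXECUTE on this alone rather than broad
-- SELECT rights across tables.
CREATE PROCEDURE dbo.usp_GetGatewayRecord
    @GatewayTableID INT
AS
BEGIN
    SET NOCOUNT ON;

    SELECT GatewayTableID, SourceTableID
    FROM dbo.GatewayTable
    WHERE GatewayTableID = @GatewayTableID;
END;
```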
Despite some of the challenges you may face in getting your databases up to spec to work with Microsoft Flow, this does not take away from the fact that the tool is incredibly effective at integrating disparate services together, once you have cleared a few initial hurdles.