When considering whether to shift your existing SQL workloads to a single database offering on Azure SQL, one of the major pros is the breadth of capabilities the service offers when compared with other vendors or with SQL Server running on an Azure Virtual Machine. These include:

  • High feature parity with the latest on-premises SQL Server offering.
  • Built-in support for Enterprise product features, such as Transparent Data Encryption.
  • Security management features, such as firewalls and (optional) integration with Azure SQL Database Threat Detection for proactive monitoring.
  • Ability to quickly scale a database from a 2GB database with low CPU consumption to a mammoth 4TB database, with a significant pool of CPU/memory resources to match.

It is the last of these that makes Azure SQL Database a particularly good fit for web application deployments with unpredictable user loads at the time of deployment or, as we have seen previously on the blog, when you want to deploy a LOB reporting database that houses Dynamics 365 Customer Engagement instance data. Administrators can very straightforwardly scale or downscale a database at any time within the portal or, if you are feeling particularly clever, you can implement automatic scaling based on Database Throughput Unit (DTU) consumption. This can help keep your query execution times as speedy as possible.
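As a rough illustration of what driving that decision from T-SQL might look like, the sketch below checks recent DTU-related consumption via the sys.dm_db_resource_stats view (which holds roughly the last hour of usage, in 15 second intervals) and then applies a tier change using the same ALTER DATABASE syntax that appears later in this post. The S3 service objective and the MyDatabase name are purely example values, and a genuinely automatic solution would need something external – such as a scheduled job or Azure Automation runbook – to run the check and issue the change:

--Review peak resource consumption over the window covered by sys.dm_db_resource_stats
SELECT MAX(avg_cpu_percent) AS [Peak CPU %],
	MAX(avg_data_io_percent) AS [Peak Data IO %],
	MAX(avg_log_write_percent) AS [Peak Log Write %]
FROM sys.dm_db_resource_stats;

--If consumption is consistently high, scale up to a larger service objective.
--S3 is used here purely as an example target tier.
ALTER DATABASE MyDatabase MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');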

Database scaling, I have found, is very straightforward to get your head around and works like a charm for the most part…except, of course, when you get rather cryptic error messages like the one demonstrated below:

I got this error recently when attempting to scale an S0 5GB database down to the Basic 2GB tier. To cut a long story short, I had temporarily scaled up the database to give me increased DTU capacity for a particularly intensive query, and wanted to scale it back to its original pricing tier. You can perhaps understand my confusion about why this error was occurring. After further research and escalation to Microsoft, it turned out that the database files still had unused space allocated to them on the platform, thereby violating the size limit imposed by moving to a lower pricing tier. To resolve the issue, some tasks need to be performed on the database to get it into a “downscale-ready state”. These consist of a series of T-SQL scripts, which I would caution against running while the database is in use, due to potential performance impacts. If you have found yourself in the same boat as me and are happy to proceed, the steps involved are as follows:

  1. To begin with, the script below will execute the DBCC SHRINKFILE command against the database data file, shrinking it down towards the value specified in the @DesiredFileSize parameter. The script is written to perform the shrinking in “chunks”, based on the value of the @ShrinkChunkSize parameter, which may be useful in managing DTU consumption:
SET NOCOUNT ON

DECLARE @CurrentFileSize INT, @DesiredFileSize INT, @ShrinkChunkSize INT, @ActualSizeMB INT,
		@ErrorIndication INT, @dbFileID INT = 1, @LastSize INT, @SqlCMD NVARCHAR(MAX),
		@msg NVARCHAR(100)

/*Set these values for the current operation, size is in MB*/
SET @DesiredFileSize = 2000  /* filesize is in MB */
SET @ShrinkChunkSize = 50 /* chunk size is in MB */

SELECT @CurrentFileSize = size/128 FROM sysfiles WHERE fileid = @dbFileID

SELECT @ActualSizeMB = (SUM(total_pages) / 128) FROM sys.allocation_units

SET @msg = 'Current File Size: ' + CAST(@CurrentFileSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET  @msg = 'Actual used Size: ' + CAST(@ActualSizeMB AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @msg = 'Desired File Size: ' + CAST(@DesiredFileSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @msg = 'Iteration shrink size: ' + CAST(@ShrinkChunkSize AS VARCHAR(10)) + 'MB'
RAISERROR(@msg,0,0) WITH NOWAIT

SET @ErrorIndication = CASE
							WHEN @DesiredFileSize > @CurrentFileSize THEN 1
							WHEN @ActualSizeMB > @DesiredFileSize THEN 2
							ELSE 0 END

IF @ErrorIndication = 1  
	RAISERROR('[Error] Desired size bigger than current size',0,0) WITH NOWAIT
IF @ErrorIndication = 2  
	RAISERROR('[Error] Actual size is bigger than desired size',0,0) WITH NOWAIT
IF @ErrorIndication = 0 
	RAISERROR('Desired Size check - OK',0,0) WITH NOWAIT

SET @LastSize = @CurrentFileSize + 1

WHILE @CurrentFileSize > @DesiredFileSize /*check if we got the desired size*/ AND @LastSize>@CurrentFileSize /* check if there is progress*/ AND @ErrorIndication=0
BEGIN
	SET @msg = CAST(GETDATE() AS VARCHAR(100)) + ' - Iteration starting'
	RAISERROR(@msg,0,0) WITH NOWAIT
	SELECT @LastSize = size/128 FROM sysfiles WHERE fileid = @dbFileID
	SET @sqlCMD = 'DBCC SHRINKFILE('+ CAST(@dbFileID AS VARCHAR(7)) + ',' + CAST(@CurrentFileSize-@ShrinkChunkSize AS VARCHAR(7)) + ') WITH NO_INFOMSGS;'
	EXEC (@sqlCMD)
	SELECT @CurrentFileSize = size/128 FROM sysfiles WHERE fileid  =@dbFileID
	SET @msg = CAST(getdate() AS VARCHAR(100)) + ' - Iteration completed. current size is: ' + CAST(@CurrentFileSize AS VARCHAR(10))
	RAISERROR(@msg,0,0) WITH NOWAIT
END
PRINT 'Done' 
  2. With the database successfully shrunk, verify that the size of the database does not exceed your target @DesiredFileSize value by running the following queries:
SELECT * FROM sys.database_files
 
SELECT (SUM(reserved_page_count) * 8192) / 1024 / 1024 AS DbSizeInMB
FROM    sys.dm_db_partition_stats
  3. Although by this stage the database file sizes should be underneath 2GB, the maximum size of the database is still set to match the pricing tier level. To fix this, execute the following script, substituting the name of your database where appropriate:
ALTER DATABASE MyDatabase MODIFY (MAXSIZE=2GB) 

 You can confirm that this command has been executed successfully by then running the following query and reviewing the output:

SELECT CAST(DATABASEPROPERTYEX ('MyDatabase', 'MaxSizeInBytes') AS FLOAT)/1024.00/1024.00/1024.00 AS 'DB Size in GB'
  4. With the above commands executed, you are now in a position to scale down your database without issue. There are a few ways this can be done but, as you likely already have SQL Server Management Studio or similar open to run the above queries, you can modify the tier of your database via this handy script:
--Scaling down to Basic is easy, as there is only one Max Size/Service Level Objective
--Therefore, just specify Edition

ALTER DATABASE MyDatabase MODIFY (EDITION = 'Basic');

--For other tiers, specify the size of the DB.
--In this example, we are scaling down from Premium P1 1TB to Standard S2 250GB tier

ALTER DATABASE MyDatabase MODIFY (EDITION = 'Standard', MAXSIZE = 250 GB, SERVICE_OBJECTIVE = 'S2');

Although the script will likely execute immediately and indicate as such in any output, the actual scaling operation on the backend Azure platform can take some time to complete – usually about 5-10 minutes for smaller databases.
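If you would rather confirm the outcome from T-SQL instead of the portal, a minimal check is to read the edition and service objective back via DATABASEPROPERTYEX (the same function used earlier to verify MAXSIZE), substituting your own database name:

SELECT DATABASEPROPERTYEX('MyDatabase', 'Edition') AS [Current Edition],
	DATABASEPROPERTYEX('MyDatabase', 'ServiceObjective') AS [Current Service Objective]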

Whilst I was relieved that a workaround was available to get the database scaled down correctly, it would have been useful if the above error message was signposted better, or if there was some kind of online support article detailing that this could be a potential issue when moving a database between various pricing/sizing tiers. Hopefully, by sharing the above steps, others who are in the same boat can very quickly diagnose and resolve the issue without hammering their credit card with increased database usage charges in the process. 🙂

Microsoft Flow is a tool that I increasingly have to bring front and centre when considering how to straightforwardly accommodate certain business requirements. The problem I have had with it, at times, is that there are often some notable caveats when attempting to achieve something that looks relatively simple from the outset. A good example of this is the SQL Server connector which, based on a headline reading, enables you to trigger workflows when rows are added or changed within a database. Having the ability to trigger an email based on a database record update, create a document on OneDrive or even post a Tweet based on a modified database record are all things that instantly have a high degree of applicability for any number of different scenarios. When you read the fine print behind this, however, there are a few things which you have to bear in mind:

Limitations

The triggers do have the following limitations:

  • It does not work for on-premises SQL Server
  • Table must have an IDENTITY column for the new row trigger
  • Table must have a ROWVERSION (a.k.a. TIMESTAMP) column for the modified row trigger

A slightly frustrating side to this is that Microsoft Flow doesn’t intuitively tell you when your table is incompatible with the requirements – contrary to what is stated in the above post. Whilst readers of this post may be correct in chanting “RTFM!”, it still would be nice to be informed of any potential incompatibilities within Flow itself. Certainly, this can help in preventing any needless head banging early on 🙂
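If you want to check in advance whether a particular table satisfies these requirements, a quick query against the system catalog views will confirm whether an IDENTITY and a ROWVERSION column are present. The sketch below uses the same dbo.MyTable example table referenced in the script further down, so substitute your own object name accordingly:

--List any IDENTITY or ROWVERSION columns on the target table.
--Note that the ROWVERSION data type is reported as 'timestamp' in sys.types.
SELECT c.[name] AS ColumnName,
	t.[name] AS DataType,
	c.is_identity
FROM sys.columns AS c
 INNER JOIN sys.types AS t
  ON c.user_type_id = t.user_type_id
WHERE c.[object_id] = OBJECT_ID('dbo.MyTable')
 AND (c.is_identity = 1 OR t.[name] = 'timestamp')

If the query returns neither type of column, the table will need modifying before the triggers can work against it.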

Getting around these restrictions is fairly straightforward if you have the ability to modify the table you want to interact with using Flow. For example, executing the following script against the MyTable table will get it fully prepped for the service:

ALTER TABLE dbo.MyTable
ADD	[FlowID] INT IDENTITY(1,1) NOT NULL,
	[RowVersion] ROWVERSION
	

That said, there may be certain situations in which this is not the best option to implement:

  • The database/tables you are interacting with form part of a proprietary application, therefore making it impractical and potentially dangerous to modify table objects.
  • The table in question could contain sensitive information. Keep in mind the fact that the Microsoft Flow service would require service account access with full SELECT privileges against your target table. This could expose a risk to your environment, should the credentials or the service itself be compromised in future.
  • If your target table already contains an inordinately large number of columns and/or rows, then the introduction of additional columns and processing via an IDENTITY/ROWVERSION seed could start to tip your application over the edge.
  • Your target database does not use an integer field and IDENTITY seed to uniquely identify rows, meaning that such a column needs to be (arguably unnecessarily) added.

An alternative approach to consider would be to configure a “gateway” table for Microsoft Flow to access – one which contains only the fields that Flow needs to process with, is linked back to the source table via a foreign key relationship and which involves the use of a database trigger to automate the creation of the “gateway” record. Note that this approach only works if you have a unique row identifier in your source table in the first place; if your table is recording important, row-specific information and this is not in place, then you should probably re-evaluate your table design 😉

Let’s see how the above example would work in practice, using the following example table:

CREATE TABLE [dbo].[SourceTable]
(
	[SourceTableUID] UNIQUEIDENTIFIER PRIMARY KEY NOT NULL,
	[SourceTableCol1] VARCHAR(50) NULL,
	[SourceTableCol2] VARCHAR(150) NULL,
	[SourceTableCol3] DATETIME NULL
)

In this scenario, the table object is using the UNIQUEIDENTIFIER column type to ensure that each row can be…well…uniquely identified!

The next step would be to create our “gateway” table. Based on the table script above, this would be built out via the following script:

CREATE TABLE [dbo].[SourceTableLog]
(
	[SourceTableLogID] INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
	[SourceTableUID] UNIQUEIDENTIFIER NOT NULL,
	CONSTRAINT FK_SourceTable_SourceTableLog FOREIGN KEY ([SourceTableUID])
		REFERENCES [dbo].[SourceTable] ([SourceTableUID])
		ON DELETE CASCADE,
	[TimeStamp] ROWVERSION
)

The use of a FOREIGN KEY here will help to ensure that the “gateway” table stays tidy in the event that any related record is deleted from the source table. This is handled automatically, thanks to the ON DELETE CASCADE option.

The final step would be to implement a trigger on the dbo.SourceTable object that fires every time a record is INSERTed into the table:

CREATE TRIGGER [trInsertNewSourceTableToLog]
ON [dbo].[SourceTable]
AFTER INSERT
AS
BEGIN
	INSERT INTO [dbo].[SourceTableLog] ([SourceTableUID])
	SELECT [SourceTableUID]
	FROM inserted
END

For those unfamiliar with how triggers work, the inserted table is a special object exposed during runtime that allows you to access the values that have been…OK, let’s move on!
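To verify that everything is hooked up correctly, a quick smoke test along the following lines can be run against the objects created above (the column values are purely illustrative):

--Insert a test record into the source table...
DECLARE @TestUID UNIQUEIDENTIFIER = NEWID();

INSERT INTO [dbo].[SourceTable] ([SourceTableUID], [SourceTableCol1], [SourceTableCol2], [SourceTableCol3])
VALUES (@TestUID, 'Test Value 1', 'Test Value 2', GETDATE());

--...confirm that the trigger has created a corresponding "gateway" record...
SELECT * FROM [dbo].[SourceTableLog] WHERE [SourceTableUID] = @TestUID;

--...then confirm that deleting the source record also removes the log record via ON DELETE CASCADE.
DELETE FROM [dbo].[SourceTable] WHERE [SourceTableUID] = @TestUID;

SELECT * FROM [dbo].[SourceTableLog] WHERE [SourceTableUID] = @TestUID;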

With all of the above in place, you can now implement a service account for Microsoft Flow to use when connecting to your database that is sufficiently curtailed in its permissions. This can either be a database user associated with a server level login:

CREATE USER [mydatabase-flow] FOR LOGIN [mydatabase-flow]
	WITH DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

Or a contained database user account (this would be my recommended option):

CREATE USER [mydatabase-flow] WITH PASSWORD = 'P@ssw0rd1',
	DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

From there, the world is your oyster – you can start to implement whatever actions, conditions etc. you require for your particular scenario(s). There are a few additional tips I would recommend when working with SQL Server and Azure:

  • If you need to retrieve specific data from SQL, avoid querying tables directly and instead encapsulate your logic into Stored Procedures (see the sketch after this list).
  • In line with the ethos above, ensure that you always use a dedicated service account for authentication and scope the permissions to only those that are required.
  • If working with Azure SQL, you will need to ensure that you have ticked the Allow access to Azure services option on the Firewall rules page of your server.
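Tying the first two tips together, below is a minimal sketch of how read access could be surfaced for the example tables used earlier; the procedure name and the columns returned are purely illustrative, and the GRANT assumes the mydatabase-flow account created above:

--Encapsulate the data that needs to be read back out of SQL behind a stored procedure,
--rather than querying the underlying tables directly.
CREATE PROCEDURE [dbo].[uspGetSourceTableLogRecords]
AS
BEGIN
	SET NOCOUNT ON;

	SELECT stl.[SourceTableLogID], stl.[SourceTableUID], st.[SourceTableCol1]
	FROM [dbo].[SourceTableLog] AS stl
	 INNER JOIN [dbo].[SourceTable] AS st
	  ON stl.[SourceTableUID] = st.[SourceTableUID];
END

GO

--Scope the service account down to executing the procedure only.
GRANT EXECUTE ON [dbo].[uspGetSourceTableLogRecords] TO [mydatabase-flow]

GO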

Despite some of the challenges you may face in getting your databases up to spec to work with Microsoft Flow, this does not take away from the fact that the tool is incredibly effective in its ability to integrate disparate services together, once you have overcome the initial hurdles.

The world of database security and protection can be a difficult path to tread at times. I often find myself having to adopt a “tin-foil hat” approach, obsessing over the smallest potential vulnerability that a database could be compromised with. This thought process can be considered easy compared with any protective steps that need to be implemented in practice, as these can often prove to be mind-bogglingly convoluted. This is one of the reasons why I like working with Microsoft Azure and features such as Azure SQL Database Firewall Rules. They present a familiar means of securing your databases to specific IP address endpoints and are not inordinately complex in how they need to be approached; just provide a name, Start/End IP range and hey presto! Your client/ application can communicate with your database. The nicest thing about them is that the feature is enabled by default, meaning you don’t have to worry about designing and implementing a solution to restrict your database from unauthorised access at the outset.

As alluded to above, Database Firewall Rules are added via T-SQL code (unlike Server Rules, which can be specified via the Azure portal), using syntax that most SQL developers should feel comfortable with. If you traditionally prefer to design and build your databases using a Visual Studio SQL Database project, however, you may encounter a problem when looking to add a Database Firewall Rule to your project. There is no dedicated template item that can be used to add this to the database. In this eventuality, you would have to look at setting up a Post-Deployment Script or Pre-Deployment Script to handle the creation of any requisite rules you require. Yet this can present the following problems:

  • Visual Studio will be unable to provide you with the basic syntax to create the rules.
  • Related to the above, Intellisense support will be limited, so you may struggle to identify errors in your code until it is deployed.
  • When deploying changes out to your database, the project will be unable to successfully detect (and remove) any rules that are deleted from your project.

The last one could prove to be particularly cumbersome if you are tightly managing the security of your Azure SQL database. Putting aside the obvious risk of someone forgetting to remove a rule as part of a deployment process, you would then have to manually remove the rules by connecting to your database and executing the following T-SQL statement:

EXECUTE sp_delete_database_firewall_rule 'MyDBFirewallRule'

Not the end of the world by any stretch, but if you are using Visual Studio as your deployment method for managing changes to your database, then having to do this step seems a little counter-intuitive. Fortunately, with a bit of creative thinking and utilisation of more complex T-SQL functionality, we can get around the issue by developing a script that carries out the following steps in order:

  • Retrieve a list of all current Database Firewall Rules.
  • Iterate through the list of rules and remove them all from the database.
  • Proceed to re-create the required Database Firewall Rules from scratch.

The second step involves the use of a T-SQL feature that I have traditionally steered away from using – Cursors. This is not because they are bad in any way, but because a) I have previously struggled to understand how they work and b) I had never found a good scenario in which to use them. The best way of understanding them is to put on your C# hat for a few moments and consider the following code snippet:

string[] array = new string[] { "Test1", "Test2", "Test3" };

foreach (string s in array)
{
    Console.WriteLine(s);
}

To summarise how the above works, we take our collection of values – Test1, Test2 and Test3 – and carry out a particular action against each; in this case, print out their value into the console. This, in a nutshell, is how Cursors work, and you have a great deal of versatility on what action you take during each iteration of the “loop”.

With a clear understanding of how Cursors work, the below script that accomplishes the aims set out above should hopefully be a lot clearer:

DECLARE @FirewallRule NVARCHAR(128)

DECLARE REMOVEFWRULES_CURSOR CURSOR
	LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT DISTINCT [name]
FROM sys.database_firewall_rules

OPEN REMOVEFWRULES_CURSOR
FETCH NEXT FROM REMOVEFWRULES_CURSOR INTO @FirewallRule
WHILE @@FETCH_STATUS = 0
BEGIN
	EXECUTE sp_delete_database_firewall_rule @FirewallRule
	PRINT 'Firewall rule ' + @FirewallRule + ' has been successfully deleted.'
	FETCH NEXT FROM REMOVEFWRULES_CURSOR INTO @FirewallRule
END
CLOSE REMOVEFWRULES_CURSOR
DEALLOCATE REMOVEFWRULES_CURSOR

GO

EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule1',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';

EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule2',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';
		

To integrate as part of your existing database project, add a new Post-Deployment Script file and modify the above to reflect your requirements. As the name indicates, the script will run after all other aspects of your solution deployment have been completed. Now, the key caveat to bear in mind with this solution is that, during deployment, there will be a brief period of time where all Database Firewall Rules are removed from the database. This could potentially cause any current database connections to drop, or prevent new connections altogether. You should take care when using the above code snippet within a production environment, and I would recommend you look at an alternative solution if your application/system cannot tolerate even a second of downtime.
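As one possible sketch of such an alternative (using the same hypothetical rule names as above), you could instead delete only the rules that are no longer required and rely on the fact that sp_set_database_firewall_rule updates a rule in place if it already exists, meaning that the rules you want to keep never disappear during the deployment:

DECLARE @FirewallRule NVARCHAR(128)

--Table variable listing the rules that should exist after deployment.
DECLARE @DesiredRules TABLE ([name] NVARCHAR(128) NOT NULL)

INSERT INTO @DesiredRules ([name])
VALUES (N'MyDBFirewallRule1'), (N'MyDBFirewallRule2')

--Remove only the rules that are not in the desired list.
DECLARE REMOVEOLDFWRULES_CURSOR CURSOR
	LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT DISTINCT [name]
FROM sys.database_firewall_rules
WHERE [name] NOT IN (SELECT [name] FROM @DesiredRules)

OPEN REMOVEOLDFWRULES_CURSOR
FETCH NEXT FROM REMOVEOLDFWRULES_CURSOR INTO @FirewallRule
WHILE @@FETCH_STATUS = 0
BEGIN
	EXECUTE sp_delete_database_firewall_rule @FirewallRule
	PRINT 'Firewall rule ' + @FirewallRule + ' has been successfully deleted.'
	FETCH NEXT FROM REMOVEOLDFWRULES_CURSOR INTO @FirewallRule
END
CLOSE REMOVEOLDFWRULES_CURSOR
DEALLOCATE REMOVEOLDFWRULES_CURSOR

--Create or update the desired rules (sp_set_database_firewall_rule acts as an "upsert").
EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule1',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';

EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule2',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';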

When you have spent any length of time working with Dynamics CRM Online/Dynamics 365 for Enterprise (D365E) data programmatically, you become accustomed to how Option Set, State and Status Reason values are presented to you in code. To explain, the application does not store your Option Set value display names within the SQL Server Entity tables; rather, the Option Set Value that has been specified alongside your Label is what is stored as an integer value. That is why you are always mandatorily prompted to provide both values within the application:

The following benefits are realised as a result of how this is set up:

That being said, when working with these field types in code, you do always have to have the application window open or a list of all Labels/Values to hand so that you don’t get too confused… 🙂

I have previously extolled the virtues of the Data Export Service on the blog, and why you should consider it if you have basic integration requirements for your CRM/D365E deployment. One area in which it differs from other products on the market is how it handles the field types discussed above. For example, when exporting data to a SQL database via Scribe Online, new columns are created alongside that contain the “Display Name” (i.e. label value) that correspond to each Option, Status and Status Reason Label. So by running the following query against a Scribe export database:

SELECT DISTINCT statecode, statecode_displayname
FROM dbo.account

We get the best of both worlds – our underlying statecode values and their display names – all in 2 lines of code:

This is a big help, particularly when you are then using the data as part of a report, as no additional transformation steps are required and your underlying SQL query can be kept as compact as possible.

The Data Export Service differs from the above in an understandable way, as display name values for Status, Status Reason and Option Set column values are instead segregated out into their own separate table objects in your Azure SQL database:

  • OptionSetMetadata
  • GlobalOptionSetMetadata
  • StateMetadata
  • StatusMetadata

Why understandable? If you consider how the application can support multiple languages, then you realise that this can also apply to metadata objects across the application – such as field names, view names and – wouldn’t you have guessed it – Labels too. So when we inspect the OptionSetMetadata table, we can see that the table structure accommodates the storing of labels in multiple languages via the LocalizedLabelLanguageCode field:
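For example, a quick way of seeing this in action is to pick out a single Option Set field and return every label stored for it; the query below assumes the industrycode field on the account entity purely as an illustration, and uses the same column names referenced in the functions later in this post:

SELECT OptionSetName, [Option], LocalizedLabel, LocalizedLabelLanguageCode
FROM dbo.OptionSetMetadata
WHERE EntityName = 'account'
 AND OptionSetName = 'industrycode'
ORDER BY [Option], LocalizedLabelLanguageCode

If additional languages are provisioned on the CRM/D365E instance, you will see one row per language for each Option value.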

Unlike the Scribe Online route above (which I assume only retrieves the Labels that correspond to the user account that authenticates with CRM/D365E), the Data Export Service becomes instantly more desirable if you are required to build multi-language reports referencing CRM/D365E application data.

The issue that you have to reconcile yourself with is that your SQL queries, if being expressed as natively as possible, instantly become a whole lot more complex. For example, to achieve the same results as the query above, it would have to be adapted as follows for the Data Export Service:

SELECT DISTINCT statecode, LocalizedLabel
FROM dbo.account
 LEFT JOIN dbo.StateMetadata
  ON 'account' = EntityName
  AND statecode = [State]
  AND '1033' = LocalizedLabelLanguageCode

The above is a very basic example, but if your query is complex – and involves multiple Option Set values – then you would have to resort to using Common Table Expressions (CTEs) to accommodate each potential JOIN required to get the information you want.
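To illustrate the point, a rough sketch of how that might look when returning both the State and Status Reason labels for the account entity is shown below; the table and column names are the same ones used elsewhere in this post, and 1033 (English) is again assumed as the language code:

WITH StateLabels AS
(
	SELECT [State], LocalizedLabel
	FROM dbo.StateMetadata
	WHERE EntityName = 'account'
	 AND LocalizedLabelLanguageCode = 1033
),
StatusLabels AS
(
	SELECT [Status], LocalizedLabel
	FROM dbo.StatusMetadata
	WHERE EntityName = 'account'
	 AND LocalizedLabelLanguageCode = 1033
)
SELECT DISTINCT a.statecode, sl.LocalizedLabel AS statecode_displayname,
	a.statuscode, stl.LocalizedLabel AS statuscode_displayname
FROM dbo.account AS a
 LEFT JOIN StateLabels AS sl
  ON a.statecode = sl.[State]
 LEFT JOIN StatusLabels AS stl
  ON a.statuscode = stl.[Status]

Multiply this out across several Option Set fields and the query quickly becomes unwieldy.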

In these situations, we can look at some of the wider functionality provided as part of SQL Server to develop a solution that keeps things as simple as possible and, in this particular instance, a user-defined function is an excellent candidate to consider. These enable you to perform complex operations against the database platform and encapsulate them within very simply expressed objects that can also accept parameters. The good thing about functions is that they can be used to return either table objects or scalar (i.e. single) values.

Using a scalar function, we can, therefore, remove some of the complexity behind returning Option Set, Status and Status Reason labels by creating a function that returns the correct label, based on input parameters received by the function. You could look at creating a “master” function that, based on the input parameters, queries the correct Metadata table for the information you need; but in this example, we are going to look at creating a function for each type of field – Status, Status Reason, Option Set and Global Option Set.

To do this, connect up to your Data Export Service database and open up a new query window, ensuring that the context is set to the correct database. Paste the following code in the window and then hit Execute:

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

--Create Function to return Global Option Set Labels

CREATE FUNCTION [dbo].[fnGetGlobalOptionSetLabel]
(
	@GlobalOptionSetName NVARCHAR(64), --The logical name of the Global Option Set
	@Option INT, --The option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.GlobalOptionSetMetadata WHERE OptionSetName = @GlobalOptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.GlobalOptionSetMetadata WHERE OptionSetName = @GlobalOptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain label for Global Option Set field ' + @GlobalOptionSetName AS INT);
	RETURN @Label;

END

GO

--Create Function to return Option Set Labels

CREATE FUNCTION [dbo].[fnGetOptionSetLabel]
(
	@EntityName NVARCHAR(64), --The Entity logical name that contains the Option Set field
	@OptionSetName NVARCHAR(64), --The logical name of the Option Set field
	@Option INT, --The option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.OptionSetMetadata WHERE EntityName = @EntityName AND OptionSetName = @OptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.OptionSetMetadata WHERE EntityName = @EntityName AND OptionSetName = @OptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain label for Option Set field ' + @OptionSetName AS INT);
	RETURN @Label;

END

GO

--Create Function to return Status Labels

CREATE FUNCTION [dbo].[fnGetStateLabel]
(
	@EntityName NVARCHAR(64), --The Entity logical name that contains the Status field
	@State INT, --The Status option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.StateMetadata WHERE EntityName = @EntityName AND [State] = @State AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.StateMetadata WHERE EntityName = @EntityName AND [State] = @State AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain State label for entity ' + @EntityName AS INT);
	RETURN @Label;

END

GO

--Create Function to return Status Reason Labels

CREATE FUNCTION [dbo].[fnGetStatusLabel]
(
	@EntityName NVARCHAR(64), --The Entity logical name that contains the Status Reason field
	@Status INT, --The Status Reason option value to retrieve
	@LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of supported languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html
)
RETURNS NVARCHAR(256)
AS
BEGIN

	DECLARE @Label NVARCHAR(256);
	DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.StatusMetadata WHERE EntityName = @EntityName AND [Status] = @Status AND LocalizedLabelLanguageCode = @LanguageCode);
	IF @RecordCount = 1
		SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.StatusMetadata WHERE EntityName = @EntityName AND [Status] = @Status AND LocalizedLabelLanguageCode = @LanguageCode);
	ELSE
		SET @Label = CAST('An error has occurred. Could not obtain Status label for entity ' + @EntityName AS INT);
	RETURN @Label;

END

GO

This will then go off and create the functions listed in code, which should then show up under the Programmability folder on your SQL database:

For those who are unsure what the SQL code is doing, it first attempts to determine whether exactly one Label can be found for the appropriate field type, based on the parameters provided. If it is successful, then the label is returned; otherwise, the CAST function is designed to force an error back to the caller, indicating that either no Option Set value or more than one was found. In most cases, this would indicate a typo in the parameters you have specified.

As with anything, the best way to see how something works is in practice! So if we again look at our previous examples shown in this post, we would utilise the dbo.fnGetStateLabel function as follows to return the correct label in English:

SELECT DISTINCT statecode, dbo.fnGetStateLabel('account', statecode, 1033) AS statecode_displayname
FROM dbo.account

With our results returning as follows:

Now we can expose this through our reports and not worry about having to do any kind of transformation/lookup table to get around the issue 🙂
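The same pattern extends to the other functions created above. For instance, the following query – which assumes the standard statuscode and industrycode fields on the account entity, and filters out NULL industrycode values, since the functions are deliberately written to raise an error when no matching label is found – returns the State, Status Reason and Industry labels side by side:

SELECT DISTINCT statecode, dbo.fnGetStateLabel('account', statecode, 1033) AS statecode_displayname,
	statuscode, dbo.fnGetStatusLabel('account', statuscode, 1033) AS statuscode_displayname,
	industrycode, dbo.fnGetOptionSetLabel('account', 'industrycode', industrycode, 1033) AS industrycode_displayname
FROM dbo.account
WHERE industrycode IS NOT NULL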

Encapsulating complex functionality into simply and clearly expressed functions is an excellent way of keeping your code as streamlined as possible, and it also ensures that other colleagues can accomplish complex tasks, even if they do not have in-depth knowledge of Transact-SQL.

Although CRM Online/Dynamics 365 for Enterprise (D365E) does provide a plethora of different tools aimed at satisfying reporting requirements for users of the application, you are restricted in how data can be queried within the application. For example, you cannot just connect straight up to the application’s SQL database and start writing stored procedures that perform complex data transformations or joins. Traditionally, to achieve this, you would need to look at one of the several tools in the marketplace that enable you to export your data out into a format that best pleases you; or even take the plunge and get a developer to write a bespoke application that satisfies your integration requirements.

With the recent D365E release and in line with Microsoft’s longstanding approach to customer data within their applications (i.e. “It’s yours! So just do what you want with it!”), the parallel introduction of the Data Export Service last year further consolidates this approach and adds an arguably game-changing tool to the product’s arsenal. By using the service, relatively straightforward integration requirements can be satisfied in a pinch, and a lot of the headache involved in setting up a backup of your organisation’s databases/LOB reporting application can be eliminated. Perhaps the most surprising and crucial aspect of all of this is that using this tool is not going to break the bank too much either.

In this week’s blog post, I’m going to take a closer look at just what the Data Export Service is, the setup involved and the overall experience of using the service from end-to-end.

What is the Data Export Service?

The Data Export Service is a new, free*, add-on for your CRM/D365E subscription, designed to accomplish basic integration requirements. Microsoft perhaps provides the best summary of what the tool is and what it can achieve via TechNet:

The Data Export Service intelligently synchronizes the entire Dynamics 365 data initially and thereafter synchronizes on a continuous basis as changes occur (delta changes) in the Microsoft Dynamics 365 (online) system. This helps enable several analytics and reporting scenarios on top of Dynamics 365 data with Azure data and analytics services and opens up new possibilities for customers and partners to build custom solutions.

The tool is compatible with versions 8.0, 8.1 and 8.2 of the application, which correspond to the following releases of CRM Online/D365E:

  • Dynamics CRM Online 2016
  • Dynamics CRM Online 2016 Update 1
  • Dynamics 365 December Update

*You will still need to pay for all required services in Azure, but the add-on itself is free to download.

The Installation Process

Getting everything configured for the Data Export Service can prove to be the most challenging – and potentially alienating – part of the entire process. For this, you will need the following at your disposal:

  • An active Azure Subscription.
  • An Azure SQL Server configured with a single database, or an Azure VM running SQL Server. Microsoft recommends a Premium P1 database or better if you are using an Azure SQL database, but I have been able to get the service working without any issue on S0 tier databases. This is an important point to make, given that the cost difference per month can amount to hundreds of pounds.
  • An Azure Key Vault. This is what will securely store the credentials for your DB.
  • PowerShell and access to the Azure Resource Manager (AzureRM) cmdlets. PowerShell can be installed as an OS feature on Windows-based platforms, and can now be downloaded onto OS X/Linux as well. PowerShell is required to create an Azure Key Vault, although you can also use it to create your Azure SQL Server instance/Windows VM with SQL Server.

It is therefore recommended that you have at least some experience in how to use Azure – such as creating Resource Groups, deploying individual resources, how the interface works etc. – before you start setting up the Data Export Service. Failing this, you will have to kindly ask your nearest Azure whizz for assistance 🙂 Fortunately, if you know what you’re doing, you can get all of the above setup very quickly; in some cases, less than 10 minutes if you opt to script out the entire deployment via PowerShell.

For your setup with D365E, all that is required is the installation of the approved solution via the Dynamics 365 Administration Centre. Highlight the instance that you wish to deploy to and click on the pen icon next to Solutions:

Then click on the Solution with the name Data Export Service for Dynamics 365 and click the Install button. The installation process will take a couple of minutes, so keep refreshing the screen until the Status is updated to Installed. Then, within the Settings area of the application, you can access the service via the Data Export icon:

Because the Data Export Service is required to automatically sign into an external provider, you may also need to verify that your web browser pop-up settings/firewall are configured to allow the https://discovery.crmreplication.azure.net/ URL. Otherwise, you are likely to encounter a blank screen when attempting to access the Data Export Service for the first time. You will know everything is working correctly when you are greeted with a screen similar to the below:

Setting up an Export Profile

After accepting the disclaimer and clicking on the New icon, you will be greeted with a wizard-like form, enabling you to specify the following:

  • Mandatory settings required, such as the Export Profile Name and the URL to your Key Vault credentials.
  • Optional settings, such as which database schema to use, any object prefix that you would like to use, retry settings and whether you want to log when records are deleted.
  • The Entities you wish to use with the Export Service. Note that, although most system entities will be pre-enabled to use this service, you will likely need to go into Customizations and enable any additional entities you wish to utilise with the service via the Change Tracking option:

  • Any Relationships that you want to include as part of the sync: To clarify, this is basically asking if you wish to include any default many-to-many (N:N) intersect tables as part of your export profile. The list of available options for this will depend on which entities you have chosen to sync. For example, if you select the Account, Lead and Product entities, then the following intersect tables will be available for synchronisation:

Once you have configured your profile and saved it, the service will then attempt to start the import process.

The Syncing Experience, A.K.A. Why Delta Syncing is Awesome

When the service first starts to sync, one thing to point out is that it may initially return a result of Partial Success and show that it has failed for multiple entities. In most cases, this will be due to the fact that certain entities’ dependent records have not yet been synced across (for example, any Opportunity record that references the Account named Test Company ABC Ltd. will not sync until this Account record has been exported successfully). So rather than attempting to interrogate the error logs straightaway, I would suggest holding off a while. As you may also expect, the first sync will take some time to complete, depending on the number of records involved. My experience, however, suggests it is reasonably quick – for example, just under 1 million records took around 3 hours to sync. I anticipate that the fact that the service is essentially an Azure to Azure export no doubt helps in ensuring a smooth data transit.

Following on from the above, syncs will then take place as and when entity data is modified within the application. The delay involved appears to be very small indeed – often tens of minutes, if not just a few minutes. This, therefore, makes the Data Export Service an excellent candidate for a backup/primary reporting database to satisfy any requirements that cannot be achieved via FetchXML alone.

One small bug I have observed is with how the application deals with the listmember intersect entity. You may get errors thrown back that indicate records have failed to sync across successfully which, upon closer inspection, is not the case. Hopefully, this is something that will get ironed out; it appears to be due to the rather strange way that the listmember entity behaves when interacting with it via the SDK.

Conclusions or Wot I Think

For a free add-on service, I have been incredibly impressed by the Data Export Service and what it can do. For those who have previously had to fork out big bucks for services such as Scribe Online or KingswaySoft to achieve very basic replication/reporting requirements within CRM/D365E, the Data Export Service offers an inexpensive way of replacing these services. That’s not to say that the service should be your first destination if your integration requirements are complex – for example, integrating Dynamics 365 with SAP/Oracle ERP systems. In these cases, the products mentioned above will no doubt be the best services to look at to achieve your requirements in a straightforward way. I also have a few concerns that the setup involved as part of the Data Export Service could be a barrier towards its adoption. As mentioned above, experience with Azure is a mandatory requirement to even begin contemplating getting set up with the tool, and your organisation may also need to reconcile itself with utilising Azure SQL databases or SQL Server instances on Azure VMs. Hopefully, as time goes on, we may start to see the setup process simplified – for example, having the Export Profile Wizard go off and create all the required resources in Azure by simply entering your Azure login credentials.

The D365E release has brought a lot of great new functionality and features to the table that have been oft requested and add real benefit to organisations who already use, or plan to use, the application in the future. The Data Export Service is perhaps one of the great underdog features of this release, and it is one that you should definitely consider using if you want a relatively smooth-sailing data export experience.