Slight change of pace with this week’s blog post, which will be a fairly condensed and self-indulgent affair – due to personal circumstances, I have been waylaid somewhat when it comes to producing content for the blog and I have also been unable to make any further progress with my new YouTube video series. Hoping that normal service will resume shortly, meaning additional videos and more content-rich blog posts, so stay tuned.

I’ve been running the CRM Chap blog for just over 2 years now. Over this time, I have been humbled and proud to have received numerous visitors to the site, some of whom have been kind enough to provide feedback or to share some of their Dynamics CRM/365 predicaments with me. Having reached such a landmark, now seems as good a time as any to take a look back on the posts that have received the most attention and, potentially, to give those who missed them the opportunity to read them. In descending order, here is the list of the most viewed posts to date on the crmchap.co.uk website:

  1. Utilising SQL Server Stored Procedures with Power BI
  2. Installing Dynamics CRM 2016 SP1 On-Premise
  3. Power BI Deep Dive: Using the Web API to Query Dynamics CRM/365 for Enterprise
  4. Utilising Pre/Post Entity Images in a Dynamics CRM Plugin
  5. Modifying System/Custom Views FetchXML Query in Dynamics CRM
  6. Grant Send on Behalf Permissions for Shared Mailbox (Exchange Online)
  7. Getting Started with Portal Theming (ADXStudio/CRM Portals)
  8. Microsoft Dynamics 365 Data Export Service Review
  9. What’s New in the Dynamics 365 Developer Toolkit
  10. Implementing Tracing in your CRM Plug-ins

I suppose it is a testament to the blog’s stated purpose that posts covering areas not exclusive to Dynamics CRM/365 rank so highly on the list; indeed, it demonstrates just how deeply this application is intertwined with other technology areas within the Microsoft “stack”.

To all new and long-standing followers of the blog, thank you for your continued support and appreciation for the content 🙂

If you are looking at automating the execution of SQL Server Integration Services .dtsx packages, then there are a few options at your disposal. The recommended and most streamlined route is to utilise the SSISDB catalog and deploy your packages to the catalog from within Visual Studio. This gives you additional flexibility, when working with SQL Server 2016 or greater, over whether to deploy single packages or multiple packages together. An alternative approach is to deploy your packages to the file system and then configure an Agent Job on SQL Server to execute the package on a schedule, with the required runtime settings specified. This is as simple as selecting the appropriate dropdown option on the Agent Job Step configuration screen and setting the Package source value to File system:

Deploying out in this manner is useful if you are restricted from setting up the SSISDB catalog on your target SQL Server instance or if you are attempting to deploy packages to a separate Active Directory domain (I have encountered this problem previously, much to my chagrin). You also have the benefit of utilising other features available via the SQL Server Agent, such as e-mail notifications on the failure/success of a job or conditional processing logic for job step(s). The in-built scheduling tool is also pretty powerful, enabling you to fine-tune your automated package execution to just about any schedule you could possibly conjure up.
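
For reference, this file system execution step can also be scripted out in T-SQL against the msdb database, which is handy if you need to recreate the Job on another server. The sketch below is purely illustrative – the Job name and package path are hypothetical, and you would normally also attach a schedule via sp_add_schedule/sp_attach_schedule:

USE msdb;
GO

--Create the Job shell (the name is hypothetical)
EXEC dbo.sp_add_job
	@job_name = N'Run MyPackage (File System)';

--Add a step that uses the SSIS subsystem to execute a file system package;
--the /FILE and /REPORTING switches are standard dtexec options
EXEC dbo.sp_add_jobstep
	@job_name = N'Run MyPackage (File System)',
	@step_name = N'Execute MyPackage.dtsx',
	@subsystem = N'SSIS',
	@command = N'/FILE "C:\SSISPackages\MyPackage.dtsx" /REPORTING E';

--Register the Job against the local server so the SQL Server Agent picks it up
EXEC dbo.sp_add_jobserver
	@job_name = N'Run MyPackage (File System)';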

I encountered a strange issue recently with a DTSX package configured via the SQL Agent. Quite randomly, the package suddenly started failing each time it was scheduled for execution, with the following error generated in the log files:

Failed to decrypt an encrypted XML node because the password was not specified or not correct. Package load will attempt to continue without the encrypted information.

The issue was a bit of a head-scratcher, with myself and a colleague trying the following steps in an attempt to fix the issue:

  • Forcing the package to execute manually generated the same error – this one was a bit of a longshot but worth trying anyway 🙂
  • When executing the package from within Visual Studio, no error was encountered and the package executed successfully.
  • After replacing the package on the target server with the package just executed on Visual Studio (same version) and manually executing it, once again the same error was thrown.

In the end, the issue was resolved by deleting the Agent Job and creating it again from scratch. Now, if you are diagnosing the same issue and are looking to perform these same steps, it may be best to use the Script Job as option within SQL Server Management Studio to save yourself from any potential headache when re-building your Job’s profile:

Then, for good measure, perform a test execution of the Job via the Start Job at Step… option to verify everything works.
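
As an aside, the same test execution can also be performed from a query window rather than the SSMS interface, via a quick call to the relevant system stored procedure (the Job name below is purely illustrative):

EXEC msdb.dbo.sp_start_job @job_name = N'Run MyPackage (File System)';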

I am still stumped at just what exactly went wrong here, but it is good to know that an adapted version of the ancient IT advice of yore can be referred back to…

Microsoft Flow is a tool that I increasingly have to bring front and centre when considering how to straightforwardly accommodate certain business requirements. The problem I have had with it, at times, is that there are often some notable caveats when attempting to achieve something that looks relatively simple from the outset. A good example of this is the SQL Server connector which, on a headline reading, enables you to trigger workflows when rows are added or changed within a database. Having the ability to trigger an email based on a database record update, create a document on OneDrive or even post a Tweet based on a modified database record are all things that instantly have a high degree of applicability for any number of different scenarios. When you read the fine print behind this, however, there are a few things which you have to bear in mind:

Limitations

The triggers do have the following limitations:

  • It does not work for on-premises SQL Server
  • Table must have an IDENTITY column for the new row trigger
  • Table must have a ROWVERSION (a.k.a. TIMESTAMP) column for the modified row trigger

A slightly frustrating side to this is that Microsoft Flow doesn’t intuitively tell you when your table is incompatible with these requirements – contrary to what is stated in the above post. Whilst readers of this post may be correct in chanting “RTFM!”, it would still be nice to be informed of any potential incompatibilities within Flow itself. Certainly, this can help in preventing any needless head banging early on 🙂

Getting around these restrictions is fairly straightforward if you have the ability to modify the table you want Flow to interact with. For example, executing the following script against the MyTable table will get it fully prepped for the service:

ALTER TABLE dbo.MyTable
ADD [FlowID] INT IDENTITY(1,1) NOT NULL,
    [RowVersion] ROWVERSION;

That being said, there are certain situations where this may not be the best option to implement:

  • The database/tables you are interacting with form part of a proprietary application, therefore making it impractical and potentially dangerous to modify table objects.
  • The table in question could contain sensitive information. Keep in mind the fact that the Microsoft Flow service would require service account access with full SELECT privileges against your target table. This could expose a risk to your environment, should the credentials or the service itself be compromised in future.
  • If your target table already contains an inordinately large number of columns and/or rows, then the introduction of additional columns and processing via an IDENTITY/ROWVERSION seed could start to tip your application over the edge.
  • Your target database does not use an integer field and IDENTITY seed to uniquely identify rows, meaning that such a column needs to be (arguably unnecessarily) added.

An alternative approach to consider would be to configure a “gateway” table for Microsoft Flow to access – one which contains only the fields that Flow needs to process, is linked back to the source table via a foreign key relationship and which involves the use of a database trigger to automate the creation of the “gateway” record. Note that this approach only works if you have a unique row identifier in your source table in the first place; if your table is recording important, row-specific information and this is not in place, then you should probably re-evaluate your table design 😉

Let’s see how the above example would work in practice, using the following example table:

CREATE TABLE [dbo].[SourceTable]
(
	[SourceTableUID] UNIQUEIDENTIFIER PRIMARY KEY NOT NULL,
	[SourceTableCol1] VARCHAR(50) NULL,
	[SourceTableCol2] VARCHAR(150) NULL,
	[SourceTableCol3] DATETIME NULL
)

In this scenario, the table object is using the UNIQUEIDENTIFIER column type to ensure that each row can be…well…uniquely identified!

The next step would be to create our “gateway” table. Based on the table script above, this would be built out via the following script:

CREATE TABLE [dbo].[SourceTableLog]
(
	[SourceTableLogID] INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
	[SourceTableUID] UNIQUEIDENTIFIER NOT NULL,
	CONSTRAINT FK_SourceTable_SourceTableLog FOREIGN KEY ([SourceTableUID])
		REFERENCES [dbo].[SourceTable] ([SourceTableUID])
		ON DELETE CASCADE,
	[TimeStamp] ROWVERSION
)

The use of a FOREIGN KEY here will help to ensure that the “gateway” table stays tidy in the event that any related record is deleted from the source table. This is handled automatically, thanks to the ON DELETE CASCADE option.

The final step would be to implement a trigger on the dbo.SourceTable object that fires every time a record is INSERTed into the table:

CREATE TRIGGER [trInsertNewSourceTableToLog]
ON [dbo].[SourceTable]
AFTER INSERT
AS
BEGIN
	INSERT INTO [dbo].[SourceTableLog] ([SourceTableUID])
	SELECT [SourceTableUID]
	FROM inserted
END

For those unfamiliar with how triggers work, the inserted table is a special object exposed during runtime that allows you to access the values that have been…OK, let’s move on!
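
To illustrate this using the example objects above, a multi-row INSERT against dbo.SourceTable surfaces every new row via the inserted table, so the trigger mirrors each one into the “gateway” table in a single pass (the values below are purely illustrative):

--Insert two example rows into the source table
INSERT INTO [dbo].[SourceTable] ([SourceTableUID], [SourceTableCol1], [SourceTableCol2], [SourceTableCol3])
VALUES (NEWID(), 'First test row', 'Example', GETDATE()),
	(NEWID(), 'Second test row', 'Example', GETDATE());

--Both rows should now be mirrored in the "gateway" table, each with its own
--IDENTITY value and ROWVERSION stamp for Flow to pick up
SELECT [SourceTableLogID], [SourceTableUID], [TimeStamp]
FROM [dbo].[SourceTableLog];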

With all of the above in place, you can now set up a service account for Microsoft Flow to use when connecting to your database, with its permissions sufficiently curtailed. This can either be a database user associated with a server level login:

CREATE USER [mydatabase-flow] FOR LOGIN [mydatabase-flow]
	WITH DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

Or a contained database user account (this would be my recommended option):

CREATE USER [mydatabase-flow] WITH PASSWORD = 'P@ssw0rd1',
	DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

From there, the world is your oyster – you can start to implement whatever actions, conditions etc. you require for your particular scenario. There are a few additional tips I would recommend when working with SQL Server and Azure:

  • If you need to retrieve specific data from SQL, avoid querying tables directly and instead encapsulate your logic within Stored Procedures (see the sketch just after this list).
  • In line with the ethos above, ensure that you always use a dedicated service account for authentication and scope the permissions to only those that are required.
  • If working with Azure SQL, you will need to ensure that you have ticked the Allow access to Azure services option on the Firewall rules page of your server.
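
To expand on the first tip above, below is a minimal sketch of the kind of Stored Procedure that Flow could call instead of querying the tables directly. It assumes the example SourceTable/SourceTableLog objects from earlier in this post, and the procedure name and parameter are entirely illustrative; the service account would then only require EXECUTE permissions on the procedure, rather than SELECT against the underlying tables:

CREATE PROCEDURE [dbo].[usp_GetNewSourceTableRows]
	@SinceLogID INT = 0
AS
BEGIN
	SET NOCOUNT ON;

	--Return only the source rows logged after the supplied watermark value,
	--keeping the surface area exposed to Flow as small as possible
	SELECT stl.[SourceTableLogID],
		st.[SourceTableUID],
		st.[SourceTableCol1],
		st.[SourceTableCol2],
		st.[SourceTableCol3]
	FROM [dbo].[SourceTableLog] AS stl
		INNER JOIN [dbo].[SourceTable] AS st
			ON st.[SourceTableUID] = stl.[SourceTableUID]
	WHERE stl.[SourceTableLogID] > @SinceLogID
	ORDER BY stl.[SourceTableLogID];
END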

Despite some of the challenges you may face in getting your databases up to spec to work with Microsoft Flow, this does not take away from the fact that the tool is incredibly effective in its ability to integrate disparate services together, once you have overcome the initial hurdles.

Perhaps one of the most fiendish aspects of working with SQL Server Integration Services (SSIS) is the inevitable data transformation/conversion issues that get thrown up, even as part of relatively simplistic Extract, Transform & Load (ETL) packages. It doesn’t help, either, if, having come from a strictly T-SQL focused background, you are then having to familiarise yourself with the differently named data types that SSIS has in comparison to SQL Server. Ultimately, whether you are a newbie or a seasoned veteran in creating .dtsx packages, you should never be disheartened if you find yourself having to tackle data conversion issues during package development – put another way, there is always going to be a new system or data file format that comes out of nowhere to test your patience 🙂

I had a rather strange occurrence of this issue recently when working to import Globally Unique Identifier (GUID) data into SQL Server’s equivalent data type – the uniqueidentifier. GUIDs are very much the first choice these days if you are building large-scale applications requiring unique values to distinguish database records. Whereas back in the old days you could get away with an integer column using the IDENTITY seed, the potential for current datasets to contain billions or more records makes this option less practical compared with GUIDs – a data type that is almost certainly going to be unique, even if you are generating values at an insane pace, and which has the headroom to accommodate huge datasets.
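
As a quick, illustrative comparison of the two approaches (the table and column names below are invented for this example), an IDENTITY-seeded integer key can be contrasted with a GUID key that the database engine generates for you:

--Integer key: compact and fast, but drawn from a finite pool of values
CREATE TABLE [dbo].[OrderIdentityExample]
(
	[OrderID] INT IDENTITY(1,1) NOT NULL,
	CONSTRAINT PK_OrderIdentityExample PRIMARY KEY ([OrderID])
);

--GUID key: effectively guaranteed to be unique, even across servers and databases
CREATE TABLE [dbo].[OrderGuidExample]
(
	[OrderUID] UNIQUEIDENTIFIER NOT NULL
		CONSTRAINT DF_OrderGuidExample_OrderUID DEFAULT NEWSEQUENTIALID(),
	CONSTRAINT PK_OrderGuidExample PRIMARY KEY ([OrderUID])
);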

Going back to the strange occurrence I mentioned above – perhaps the best way to explain the issue (and its resolution) is to show the steps involved. To do this, access to a SQL Server database instance, interfaced with via SQL Server Management Studio (SSMS), is required. Once this has been obtained, a database needs to be created and the following example script executed against it to create the table used during this post:

CREATE TABLE [GUIDImportTest]
(
	[UID] UNIQUEIDENTIFIER NOT NULL,
	[TestCol1] NVARCHAR(MAX) NULL,
	[TestCol2] NVARCHAR(MAX) NULL
)

We then also have our test import file, saved as a .csv file:

With both of these ready, we can then get the error to generate using the SQL Server Import and Export Wizard – a handy tool that enables you to straightforwardly move relatively simple data between applications and file formats. This tool can be accessed via SSMS by right-clicking on any database and selecting Tasks -> Import Data…

Begin the wizard as indicated above and, when specifying the Data Source settings, select Flat File Source. In the Advanced tab, you should also override the default data type settings for the UID field and set it to unique identifier (DT_GUID):

The Target destination (accessed further along the wizard) should be set to SQL Server Native Client and to the server/database where the table created above resides.

On the Select Source Tables and Views screen, be sure that the correct table is selected in the Destination drop-down. By default, if your import source does not match the destination name, then the wizard will assume you want to create a brand new table:

On the Review Data Type Mapping tab, a data conversion warning will be flagged up for the two TestCol fields; these can be safely disregarded, as the import package will successfully convert these values for you without further complaint:

After clicking Next and letting the package run, we can then see the titular error of this post occur, which halts the package execution:

Initially, I thought the error was being generated because the GUID values in the .csv file were not in upper case (when selecting uniqueidentifier data via a SQL query, values are always returned in this format), but the same error is thrown when importing data in this exact format. It turns out the issue was down to something that I should have readily realised based on my experience working with Dynamics CRM/Dynamics 365 for Customer Engagement. When working with URLs and query string parameters in the application involving individual records, GUID values require special URL encoding to convert the curly brace values – { and } respectively – into a “URL friendly” format. So, for example, the following:

{06e82887-9afc-4064-abad-f6fb60b8a1f3}

Is converted into:

%7B06e82887-9afc-4064-abad-f6fb60b8a1f3%7D

What does this have to do with SSIS and the task at hand? Well, it turns out that when importing uniqueidentifier data via SSIS, the values are expected to be surrounded by curly braces – the same { and } characters shown above. Our source data, therefore, needs to resemble the following to import successfully:
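
By way of illustration (the values below are hypothetical), the .csv source file needs to look something along these lines, with each GUID wrapped in curly braces:

UID,TestCol1,TestCol2
{06E82887-9AFC-4064-ABAD-F6FB60B8A1F3},Test Value 1,Test Value 2
{2B8C6E0A-3F41-4C8A-9D35-7E12A4B9C0DE},Test Value 3,Test Value 4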

After making the appropriate changes to the source data, the package will then execute successfully, loading the data into the desired SQL table:

I guess the lesson here is to never take for granted any knowledge you may have garnered from a particular source – even when dealing with what may, at first glance, be a completely disparate challenge. In all likelihood, it just might be that this past experience presents a means of thinking differently about a problem and, ultimately, helps you overcome the challenge you are faced with.

The biggest headache when managing any database system is enforcing data quality and consistency across the entire dataset. This can range from ensuring that field values are entered correctly through to enforcing rules to prevent duplicate records from even touching the periphery of your database. If you are using an application like CRM Online/Dynamics 365 for Enterprise (D365E), then the built-in functionality within these systems can assist with this, via the use of Option Sets for field values and handy features such as Duplicate Detection Rules. If you have developed your own system, then all due thought should be directed towards ensuring that your system can adequately enforce data quality at the client side, wherever possible. Doing this early on can save you a lot of work down the line.

If the backend database system you are using is SQL Server, then, fortunately, the SQL standard (and, specifically, Transact-SQL or T-SQL, Microsoft’s implementation of the standard) has some tricks up its sleeve that are worth noting. CONSTRAINT‘s are database objects that can be set up to enforce…well, constraints on the values that are allowed within a particular field. There are a number of different CONSTRAINT‘s at our disposal, but the most commonly used ones are:

  • PRIMARY KEY CONSTRAINT: Typically used to indicate a column that will always contain a unique value in the database, to facilitate working with a specific row of data in a table. Any field type can be set up as a PRIMARY KEY, but it is generally recommended to use an Integer (INT) or Globally Unique Identifier field (UNIQUEIDENTIFIER), more generally referred to as a GUID.
  • FOREIGN KEY CONSTRAINT: FOREIGN KEY‘s are used to indicate a field within a database that is a reference to another table within the database. Values entered into this field must match against a record within the related table, and you can configure a wide range of options for how the referencing record behaves when the related “parent” record is modified or removed in the database. Those coming from a CRM/D365E background can grasp this better by realising that these are essentially lookup fields as part of a one-to-many (1:N) relationship.
  • DEFAULT CONSTRAINT: On some occasions, when a value is not entered into a column, you may need to ensure that something is put into the field. This is particularly the case when you are working with NOT NULL fields, which always require a value. A DEFAULT CONSTRAINT gets around this issue by allowing you to specify an initial value for the column, should the database operation against the column result in a value not being specified as part of an INSERT statement. All three constraint types are illustrated in the example just after this list.
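
To see all three constraint types in action together, here is a brief, illustrative example (the table and column names are invented for the purposes of this post):

CREATE TABLE [dbo].[OrderHeader]
(
	[OrderHeaderUID] UNIQUEIDENTIFIER NOT NULL,
	CONSTRAINT PK_OrderHeader PRIMARY KEY ([OrderHeaderUID]),
	--DEFAULT CONSTRAINT: stamp the current date/time if no value is supplied
	[OrderDate] DATETIME NOT NULL
		CONSTRAINT DF_OrderHeader_OrderDate DEFAULT GETDATE()
);

CREATE TABLE [dbo].[OrderLine]
(
	[OrderLineUID] UNIQUEIDENTIFIER NOT NULL,
	CONSTRAINT PK_OrderLine PRIMARY KEY ([OrderLineUID]),
	[OrderHeaderUID] UNIQUEIDENTIFIER NOT NULL,
	--FOREIGN KEY CONSTRAINT: each line must reference a valid OrderHeader record
	CONSTRAINT FK_OrderLine_OrderHeader FOREIGN KEY ([OrderHeaderUID])
		REFERENCES [dbo].[OrderHeader] ([OrderHeaderUID])
		ON DELETE CASCADE
);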

W3 Schools have a handy list that covers all possible CONSTRAINT’s within SQL, but be sure to cross-reference this with the relevant Microsoft documentation! As an implementation of the standard, T-SQL can have a few nice – but sometimes surprising – differences that you should be aware of 🙂 The upside of all of this is that, if you have the need to ensure that your database column values are protected against erroneous values, then a CHECK CONSTRAINT is your first port of call. What’s even better is that these are something that can be set up rather straightforwardly to ensure, for example, that a field is only allowed to conform to a specific set of values.

A practical illustration is the best way to demonstrate the ease – and potential pitfall – you may hit when working with CHECK CONSTRAINT‘s containing a large list of potential values. Let’s say you want to create a table with a field – TestField – that should only ever accept the values A, B or C. Your CREATE TABLE script would look something like this:

CREATE TABLE [dbo].[Test]
(
	[TestField] CHAR(1)	NULL,
	CONSTRAINT CHK_TestField CHECK ([TestField] IN ('A', 'B', 'C'))
)

This works well within the following situations:

  • You are working with a handful of values that need to be checked – ideally no more than a dozen.
  • You can guarantee that the list of values will not be subject to frequent changes.

If your situation sits at the opposite end of the parameters specified above, you may make the assumption that the best way to build a sustainable solution is via a dedicated lookup table within your database. The idea is that the list of values required for the CONSTRAINT can be managed in bulk, updated/removed via common T-SQL statements, and this also prevents you from having to maintain particularly long-winded table scripts within your database. The following script will create a lookup table that records the fields you require CONSTRAINT‘s for (assuming this is the case; this column can be removed at your discretion) and the values that need checking:

CREATE TABLE [dbo].[lkp_Test] 
(
	[lkpTestID]	INT	IDENTITY(1,1) NOT NULL,
	CONSTRAINT PK_lkp_Test_lkpTestID PRIMARY KEY CLUSTERED ([lkpTestID]),
	[FieldName] VARCHAR(100)	NOT NULL,
	[Value]		VARCHAR(100)	NOT NULL
)

Those who have good, but not extensive, experience with T-SQL may then make the assumption that we can modify our CHECK constraint to directly query the table, similar to the below:

--To recreate an existing Constraint, it has to be dropped and recreated

ALTER TABLE [dbo].[Test]
DROP CONSTRAINT CHK_TestField;

GO

ALTER TABLE [dbo].[Test]
ADD CONSTRAINT CHK_TestField CHECK ([TestField] IN (SELECT [Value] FROM [dbo].[lkp_Test] WHERE [FieldName] = 'TestField' AND [TestField] = [Value]));

GO

After executing the command, your next reaction may be confusion as an error is thrown back to you:

What this means is that there is no way we can define a query within our CONSTRAINT to essentially “lookup” the values that are allowed from our table and enforce only the list of approved values. The workaround is to utilise a user-defined function, or UDF. The function will perform the necessary query against our lookup table, but meets the requirement of returning a single (i.e. scalar) value, which we can then filter against in our CONSTRAINT. Functions are a topic that has been covered previously on the blog and, when used effectively, can help to encapsulate complicated functionality within simple objects that can be referenced via your T-SQL queries. Below is a function that can be set up to achieve the requisite conditions for our scenario:

CREATE FUNCTION dbo.fnGetTestConstraintValues(@FieldName VARCHAR(100), @Value VARCHAR(100))
RETURNS VARCHAR(5)
AS
BEGIN
	IF EXISTS (SELECT [Value] FROM [dbo].[lkp_Test] WHERE [FieldName] = @FieldName AND [Value] = @Value)
		RETURN 'TRUE'
	IF @Value IS NULL
		RETURN 'TRUE'
	RETURN 'FALSE'
END

GO

Let’s break down what this is doing in more detail:

  • When called, you must supply two parameters with values – the name of the field that needs to be checked against (@FieldName) and the value to check (@Value). We’ll see how this works in practice in a few moments.
  • The function will always return a single value – either TRUE or FALSE – thereby ensuring we have a scalar value to interact with.
  • Because our TestField column has been defined as NULLable, we have to add some additional logic to handle these occurrences. SQL Server will not enforce a CHECK constraint against NULL values when they are entered/updated in the database, but our function does not extend this far. Therefore, unless a record is inserted into our lkp_Test table for our field with a NULL value, NULL entries would cause the CONSTRAINT to be violated. By adding an IF condition to check for NULL‘s and return a value of ‘TRUE‘ in these instances, NULL values can be entered into the database successfully.

With this function now in place, our CONSTRAINT just needs to be modified to call the function, verifying that TRUE returns; if not, then the field value will not be allowed:

ALTER TABLE [dbo].[Test]
ADD CONSTRAINT CHK_TestField CHECK (dbo.fnGetTestConstraintValues('TestField', [TestField]) = 'TRUE');
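
To verify the end result, we can seed the lookup table with the permitted values and then attempt a few test INSERT statements – the final one should fail with a CHECK constraint violation:

--Populate the lookup table with the allowed values for TestField
INSERT INTO [dbo].[lkp_Test] ([FieldName], [Value])
VALUES ('TestField', 'A'),
	('TestField', 'B'),
	('TestField', 'C');

INSERT INTO [dbo].[Test] ([TestField]) VALUES ('A');	--Succeeds
INSERT INTO [dbo].[Test] ([TestField]) VALUES (NULL);	--Succeeds, thanks to the NULL handling in the function
INSERT INTO [dbo].[Test] ([TestField]) VALUES ('D');	--Fails, as 'D' is not in the lookup table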

Now we can ensure that we are able to specify a subquery within our CONSTRAINT in a supported manner – excellent! 🙂 CONSTRAINT‘s are a really useful tool at the disposal of any database administrator/developer, and something that you should always have at the front of your mind as part of any design. Doing so will ensure that you are building a solution which, as much as is realistically conceivable, keeps the data within your database as squeaky clean as possible.