When working with Virtual Machines (VMs) on Azure that have been deployed using a Linux operating system (Ubuntu, Debian, Red Hat etc.), you would be forgiven for assuming that the experience would be rocky compared with working with Windows VMs. Far from it – you can expect an equivalent level of support for features such as full disk encryption, backup and administrator credentials management directly within the Azure portal, to name but a few. Granted, there may be some difference in the availability of certain extensions when compared with a Windows VM, but the experience on balance is comparable and makes the management of Linux VMs a less onerous journey if your background is firmly rooted in Windows operating systems.

To ensure that the Azure platform can effectively carry out some of the tasks listed above, the Azure Linux Agent is deployed to all newly created VMs. The Agent is written in Python and supports a wide variety of common Linux distributions, and I would recommend never removing it from your Linux VM, no matter how tempting this may be; doing so could have potentially debilitating consequences within production environments. A good example of this would be if you ever needed to add an additional disk to your VM – a task that becomes impossible without the Agent present. As well as having the proper reverence for the Agent, it’s important to keep an eye on how the service is performing, even more so if you are using Recovery Services vaults to take snapshots of your VM and back them up to another location. Otherwise, you may start to see errors being generated whenever a backup job attempts to complete.

It’s highly possible that many administrators are seeing errors of this nature at the time of writing this post (January 2018). The recent disclosures around Intel CPU vulnerabilities have prompted many cloud vendors, including Microsoft, to roll out emergency patches across their entire data centres to address the underlying flaw. Whilst it is commendable that cloud vendors have acted promptly, the pace of the work and the dizzying array of resources affected have doubtless led to some issues that could not have been foreseen from the outset. I believe, based on some of the available evidence (more on this later), that one of the unintended consequences of this work is the rise of these backup errors involving the Azure Linux Agent.

When attempting to deal with this error myself, there were a few different steps I had to try before the backup started working correctly. I have collected all of these below and, if you find yourself in the same boat, one or all of them should resolve the issue for you.

First, ensure that you are on the latest version of the Agent

This one probably goes without saying as it’s generally the most common answer to any error 🙂 However, the steps for deploying updates out onto a Linux machine can be more complex, particularly if your primary method of accessing the machine is via an SSH terminal. The process for updating your Agent depends on your Linux version – this article provides instructions for the most common variants. In addition, it is highly recommended that the auto-update feature is enabled, as described in the article.

Verify that the Agent service starts successfully

It’s possible that the service is not running on your Linux VM. You can start it by executing the following command:

service walinuxagent start

Be sure to have your administrator credentials handy, as you will need to authenticate to successfully start the service. You can then check that the service is running by executing the ps -e command.

Clear out the Agent cache files

The Agent stores a lot of XML cache files within the /var/lib/waagent/ folder, which can sometimes cause issues with the Agent. Microsoft has specifically recommended that the following command is executed on your machine after the 4th January if you are experiencing issues:

sudo rm -f /var/lib/waagent/*.[0-9]*.xml

The command will delete all “old” XML files within the above folder; a restart of the service should not be required. The date mentioned above links back to the theory suggested earlier in this post that the Intel chipset issues and this error message are linked in some way, as the dates seem to tie in with when news first broke regarding the vulnerability.

If you are still having problems

Read through the entirety of this article, and try all of the steps suggested – including digging around in the log files, if required. If all else fails, open a support request directly with Microsoft.

Hopefully, by following the above steps, your backups are now working again without issue 🙂

Microsoft Flow is a tool that I increasingly have to bring front and centre when considering how to straightforwardly accommodate certain business requirements. The problem I have had with it, at times, is that there are often some notable caveats when attempting to achieve something that looks relatively simple from the outset. A good example of this is the SQL Server connector which, based on a headline reading, enables you to trigger workflows when rows are added or changed within a database. Being able to trigger an email from a database record update, create a document on OneDrive or even post a Tweet based on a modified database record are all things that instantly have a high degree of applicability for any number of different scenarios. When you read the fine print behind this, however, there are a few things which you have to bear in mind:

Limitations

The triggers do have the following limitations:

  • It does not work for on-premises SQL Server
  • Table must have an IDENTITY column for the new row trigger
  • Table must have a ROWVERSION (a.k.a. TIMESTAMP) column for the modified row trigger

A slightly frustrating side to this is that Microsoft Flow doesn’t intuitively tell you when your table is incompatible with the requirements – contrary to what is stated in the above post. Whilst readers of this post may be correct in chanting “RTFM!”, it would still be nice to be informed of any potential incompatibilities within Flow itself. Certainly, this can help in preventing any needless head-banging early on 🙂
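
If you want to check a table up front, a quick query against the system catalog views will tell you whether it meets the connector’s requirements. This is a minimal sketch, assuming a hypothetical target table of dbo.MyTable (swap in your own table name); if it returns no rows, neither trigger will work against that table:

-- Does dbo.MyTable have an IDENTITY column and a ROWVERSION column?
-- (ROWVERSION columns are reported under the legacy type name 'timestamp')
SELECT	c.[name] AS ColumnName,
	c.is_identity AS IsIdentity,
	t.[name] AS DataType
FROM sys.columns AS c
INNER JOIN sys.types AS t
	ON c.user_type_id = t.user_type_id
WHERE c.[object_id] = OBJECT_ID('dbo.MyTable')
	AND (c.is_identity = 1 OR t.[name] = 'timestamp')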

Getting around these restrictions is fairly straightforward if you are able to modify the table you want to interact with via Flow. For example, executing the following script against the MyTable table will get it fully prepped for the service:

ALTER TABLE dbo.MyTable
ADD	[FlowID] INT IDENTITY(1,1) NOT NULL,
	[RowVersion] ROWVERSION

That said, there may be certain situations in which this is not the best option to implement:

  • The database/tables you are interacting with form part of a proprietary application, therefore making it impractical and potentially dangerous to modify table objects.
  • The table in question could contain sensitive information. Keep in mind the fact that the Microsoft Flow service would require service account access with full SELECT privileges against your target table. This could expose a risk to your environment, should the credentials or the service itself be compromised in future.
  • If your target table already contains an inordinately large number of columns and/or rows, then the introduction of additional columns and processing via an IDENTITY/ROWVERSION seed could start to tip your application over the edge.
  • Your target database does not use an integer field and IDENTITY seed to uniquely identify rows, meaning that such a column needs to be (arguably unnecessarily) added.

An alternative approach to consider would be to configure a “gateway” table for Microsoft Flow to access – one which contains only the fields that Flow needs to work with, is linked back to the source table via a foreign key relationship and which uses a database trigger to automate the creation of the “gateway” record. Note that this approach only works if you have a unique row identifier in your source table in the first place; if your table is recording important, row-specific information and this is not in place, then you should probably re-evaluate your table design 😉

Let’s see how the above example would work in practice, using the following example table:

CREATE TABLE [dbo].[SourceTable]
(
	[SourceTableUID] UNIQUEIDENTIFIER PRIMARY KEY NOT NULL,
	[SourceTableCol1] VARCHAR(50) NULL,
	[SourceTableCol2] VARCHAR(150) NULL,
	[SourceTableCol3] DATETIME NULL
)

In this scenario, the table object is using the UNIQUEIDENTIFIER column type to ensure that each row can be…well…uniquely identified!
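
As an aside, if the application doesn’t generate the GUID value itself, you could let SQL Server do so by adding a DEFAULT constraint – a minimal, optional sketch based on the table above (the constraint name is illustrative only):

-- Optional: have SQL Server generate the GUID automatically on INSERT
ALTER TABLE [dbo].[SourceTable]
ADD CONSTRAINT [DF_SourceTable_SourceTableUID]
	DEFAULT NEWID() FOR [SourceTableUID]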

The next step would be to create our “gateway” table. Based on the table script above, this would be built out via the following script:

CREATE TABLE [dbo].[SourceTableLog]
(
	[SourceTableLogID] INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
	[SourceTableUID] UNIQUEIDENTIFIER NOT NULL,
	CONSTRAINT FK_SourceTable_SourceTableLog FOREIGN KEY ([SourceTableUID])
		REFERENCES [dbo].[SourceTable] ([SourceTableUID])
		ON DELETE CASCADE,
	[TimeStamp] ROWVERSION
)

The use of a FOREIGN KEY here will help to ensure that the “gateway” table stays tidy in the event that any related record is deleted from the source table. This is handled automatically, thanks to the ON DELETE CASCADE option.

The final step would be to implement a trigger on the dbo.SourceTable object that fires every time a record is INSERTed into the table:

CREATE TRIGGER [trInsertNewSourceTableToLog]
ON [dbo].[SourceTable]
AFTER INSERT
AS
BEGIN
	INSERT INTO [dbo].[SourceTableLog] ([SourceTableUID])
	SELECT [SourceTableUID]
	FROM inserted
END

For those unfamiliar with how triggers work, the inserted table is a special object exposed during runtime that allows you to access the values that have been…OK, let’s move on!
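
To sanity-check that everything hangs together, a quick test along the following lines (using throwaway values) should show the trigger creating a matching log row and the ON DELETE CASCADE tidying it up again:

DECLARE @UID UNIQUEIDENTIFIER = NEWID()

-- The trigger should create a corresponding row in dbo.SourceTableLog
INSERT INTO [dbo].[SourceTable] ([SourceTableUID], [SourceTableCol1])
VALUES (@UID, 'Test value')

SELECT * FROM [dbo].[SourceTableLog] WHERE [SourceTableUID] = @UID	-- Expect 1 row

-- Deleting the source row should cascade to the log table
DELETE FROM [dbo].[SourceTable] WHERE [SourceTableUID] = @UID

SELECT * FROM [dbo].[SourceTableLog] WHERE [SourceTableUID] = @UID	-- Expect 0 rows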

With all of the above in place, you can now implement a service account for Microsoft Flow to use when connecting to your database that is sufficiently curtailed in its permissions. This can either be a database user associated with a server-level login:

CREATE USER [mydatabase-flow] FOR LOGIN [mydatabase-flow]
	WITH DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

Or a contained database user account (this would be my recommended option):

CREATE USER [mydatabase-flow] WITH PASSWORD = 'P@ssw0rd1',
	DEFAULT_SCHEMA = dbo

GO

GRANT CONNECT TO [mydatabase-flow]

GO

GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow]

GO

From there, the world is your oyster – you can start to implement whatever actions, conditions etc. you require for your particular requirement(s). There are a few additional tips I would recommend when working with SQL Server and Azure:

  • If you need to retrieve specific data from SQL, avoid querying tables directly and instead encapsulate your logic into Stored Procedures (see the sketch after this list).
  • In line with the ethos above, ensure that you always use a dedicated service account for authentication and scope the permissions to only those that are required.
  • If working with Azure SQL, you will need to ensure that you have ticked the Allow access to Azure services option on the Firewall rules page of your server.
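
On the first point, here is a minimal sketch of what such a procedure might look like, reusing the hypothetical gateway/source tables from earlier (the procedure name and column list are illustrative only):

CREATE PROCEDURE [dbo].[usp_GetSourceTableLogEntry]
	@SourceTableLogID INT
AS
BEGIN
	SET NOCOUNT ON;

	SELECT	l.[SourceTableLogID],
		s.[SourceTableUID],
		s.[SourceTableCol1],
		s.[SourceTableCol2],
		s.[SourceTableCol3]
	FROM [dbo].[SourceTableLog] AS l
	INNER JOIN [dbo].[SourceTable] AS s
		ON l.[SourceTableUID] = s.[SourceTableUID]
	WHERE l.[SourceTableLogID] = @SourceTableLogID;
END

GO

-- EXECUTE permission is all the service account needs; because the procedure and
-- tables share the same owner, ownership chaining means it does not also require
-- SELECT on the underlying tables.
GRANT EXECUTE ON [dbo].[usp_GetSourceTableLogEntry] TO [mydatabase-flow]

GO

Flow can then call this via the connector’s stored procedure action, keeping the service account’s footprint against the database to a bare minimum.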

Despite some of the challenges you may face in getting your databases up to spec to work with Microsoft Flow, this does not take away from the fact that the tool is incredibly effective in its ability to integrate disparate services, once you have overcome a few initial hurdles at the starting pistol.

When it comes to technology learning, it can often feel as if you are fighting against a constant wave of change, as studying is outpaced by the introduction of new technical innovations. Fighting the tide is often the most desirable outcome to work towards, but it is understandable why individuals choose to specialise in a particular technology area. There is no doubt some comfort in becoming a subject matter expert and in not having to worry about “keeping up with the Joneses”. However, when working with an application such as Dynamics 365 for Customer Engagement (D365CE), I would argue it is almost impossible to ignore the wider context of what sits alongside the application, particularly Azure, Microsoft’s cloud services platform. Being able to understand how the application can be extended via external integrations is typically high on the list of any project requirements, and often these integrations require a light-touch Azure involvement, at a minimum. Therefore, the ability to say that you are confident in accomplishing certain key tasks within Azure instantly puts you ahead of others and in a position to support your business/clients more straightforwardly.

Here are 4 good reasons why you should start to familiarise yourself with Azure, if you haven’t done so already, or dedicate some additional time towards increasing your knowledge in an appropriate area:

Dynamics 365 for Customer Engagement is an Azure application

Well…we perhaps cannot answer this definitively and say that 100% of D365CE is hosted on Azure (I did hear a rumour that some aspects of the infrastructure were hosted on AWS). Certainly, for instances that are provisioned within the UK, there is ample evidence to suggest that they reside on Azure. What can be said with some degree of certainty is that D365CE is an Azure-leveraged application, in that it uses key aspects of the service to deliver various functionality within the application:

  • Azure Active Directory: Arguably the crux of D365CE is the security/identity aspect, all of which is powered using Microsoft’s cloud version of Active Directory.
  • Azure Key Vault: Encryption is enabled by default on all D365CE databases, and the management of encryption keys is provided via Azure Key Vault.
  • Office 365: Similar to D365CE, Office 365 is – technically – an Azure cloud service provided by Microsoft. As both Office 365 and D365CE often need to be tightly knitted together, via features such as Server-Side Synchronisation, Office 365 Groups and SharePoint document management, it can be considered a de facto part of the base application.

It’s fairly evident, therefore, that D365CE can be considered a Software as a Service (SaaS) application hosted on Azure. But why is all this important? For the simple reason that, as a D365CE professional supporting the full breadth of the application and all it entails, you are already an Azure professional by default. Not having even a cursory understanding of Azure and what it can offer will immediately put you at a detriment to others who do, and increasingly places you in a position where your D365CE expertise is severely blunted.

It proves to prospective employers that you are not just a one-trick pony

When it comes to interviews for roles focused around D365CE, I’ve been on both sides of the table. What I’ve found separates a good D365CE CV from an excellent one boils down to how effectively the candidate has been able to expand their knowledge into other areas. How much additional knowledge of other applications, programming languages etc. does the candidate bring to the business? How effectively has the candidate moved out of their comfort zone in the past in exploring new technologies, either in their current roles or outside of work? More importantly, how much initiative and passion has the candidate shown in embracing change? A candidate who is able to answer these questions positively and can demonstrate, for example, extensive knowledge of Azure will instantly move up in my estimation of their ability. On the flip side, I believe that interviews that have resulted in a job offer for me have been helped, in no small part, by the additional technical skills that I can make available to a prospective employer.

To get certain things done involving D365CE, Azure knowledge is a mandatory requirement

I’ve talked about one of these tasks before on the blog, namely, how to set up the Azure Data Export solution to automatically synchronise your application data to an Azure SQL Database. Unless you are in the fortunate position of having an Azure-savvy colleague who can assist you, the only way you are going to successfully complete this task is to know how to deploy an Azure SQL Server instance and a database within it, and how to set up an Azure Key Vault. Having at least some familiarity with how to deploy simple resources in Azure and accomplish tasks via PowerShell script execution will place you in an excellent position to achieve the requirements of this task, and others besides.

The above is just a flavour of some of the things you can do with D365CE and Azure together, and there are doubtless many more I have missed 🙂 The key point I would highlight is that you should not just naively assume that D365CE is containerised away from Azure; in fact, often the clearest and cleanest way of achieving more complex business/technical requirements will require a detailed consideration of what can be built out within Azure.

There’s really no good reason not to, thanks to the wealth of resources available online for Azure training

A sea change seems to be occurring at Microsoft with respect to online documentation/training resources. Previously, TechNet and MSDN would be your go-to resources to find out how something Microsoft-related works. Now, the Microsoft Docs website is where you can find the vast majority of technical documentation. I really rate the new experience that Microsoft Docs provides, and there now seems to be a concerted effort to ensure that these articles are clear, easy to follow and include end-to-end steps on how to complete certain tasks. This is certainly the case for Azure and, with this in mind, I defy anyone to find a reasonable enough excuse not to begin reading through these articles. They are the quickest way towards expanding your knowledge within an area of Azure that interests you the most, or towards preparing you to, for example, set up a new Azure SQL database from scratch.

For those who learn better via visual tools, Microsoft has also greatly expanded the number of online video courses available for Azure that can be accessed for free. There are also some excellent, “deep-dive” topic areas that can be used to help prepare you for Azure certification.

Conclusions or Wot I Think

I use the term “D365CE professional” a number of times throughout this post. This is perhaps an unhelpful label to ascribe to anyone working with D365CE today. A far better title is, I would argue, “Microsoft cloud professional”, as this gets to the heart of what I think anyone who considers themselves a D365CE “expert” should be. Building and supporting solutions within D365CE is by no means an isolated experience, as you might have argued a few years back. Rather, the onus is on ensuring that consultants, developers etc. are as multi-faceted as possible from a skillset perspective. I talked previously on the blog about becoming a Swiss Army knife in D365CE. Whilst this is still a noble and recommended goal, I believe casting the net wider can offer a number of benefits, not just for yourself, but for the businesses and clients you work with every day. It puts you centre-forward in being able to offer the latest opportunities to implement solutions that can increase efficiency, reduce costs and deliver positive end-user experiences. And, perhaps most importantly, it means you can confidently and accurately attest to your wide-ranging expertise in any given situation.

The world of database security and protection can be a difficult path to tread at times. I often find myself having to adopt a “tin-foil hat” approach, obsessing over the smallest potential vulnerability that a database could be compromised by. This thought process can be considered easy compared with any protective steps that need to be implemented in practice, as these can often prove to be mind-bogglingly convoluted. This is one of the reasons why I like working with Microsoft Azure and features such as Azure SQL Database Firewall Rules. They present a familiar means of securing your databases to specific IP address endpoints and are not inordinately complex in how they need to be approached; just provide a name and a Start/End IP range and hey presto! Your client/application can communicate with your database. The nicest thing about them is that the feature is enabled by default, meaning you don’t have to worry about designing and implementing a solution to restrict your database from unauthorised access at the outset.
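
For example, creating a database-level rule boils down to a single procedure call – a minimal sketch with placeholder name and IP values:

-- Allow a single (hypothetical) client IP address to reach this database
EXECUTE sp_set_database_firewall_rule @name = N'MyClientRule',
		@start_ip_address = '203.0.113.10', @end_ip_address = '203.0.113.10';

-- Review the rules currently configured at the database level
SELECT [name], [start_ip_address], [end_ip_address]
FROM sys.database_firewall_rules;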

As alluded to above, Database Firewall Rules are added via T-SQL code (unlike Server Rules, which can be specified via the Azure portal), using syntax that most SQL developers should feel comfortable with. If you traditionally prefer to design and build your databases using a Visual Studio SQL Database project, however, you may encounter a problem when looking to add a Database Firewall Rule to your project: there is no dedicated template item that can be used to add this to the database. In this eventuality, you would have to look at setting up a Post-Deployment Script or Pre-Deployment Script to handle the creation of any rules you require. Yet this can present the following problems:

  • Visual Studio will be unable to provide you with the basic syntax to create the rules.
  • Related to the above, Intellisense support will be limited, so you may struggle to identify errors in your code until it is deployed.
  • When deploying changes out to your database, the project will be unable to successfully detect (and remove) any rules that are deleted from your project.

The last one could prove to be particularly cumbersome if you are tightly managing the security of your Azure SQL database. Putting aside the obvious risk of someone forgetting to remove a rule as part of a deployment process, you would then have to manually remove the rules by connecting to your database and executing the following T-SQL statement:

EXECUTE sp_delete_database_firewall_rule 'MyDBFirewallRule'

Not the end of the world by any stretch, but if you are using Visual Studio as your deployment method for managing changes to your database, then having to do this step seems a little counter-intuitive. Fortunately, with a bit of creative thinking and utilisation of more complex T-SQL functionality, we can get around the issue by developing a script that carries out the following steps in order:

  • Retrieve a list of all current Database Firewall Rules.
  • Iterate through the list of rules and remove them all from the database.
  • Proceed to re-create the required Database Firewall Rules from scratch.

The second step involves the use of a T-SQL feature that I have traditionally steered away from using – Cursors. This is not because they are bad in any way, but because a) I have previously struggled to understand how they work and b) I have never found a good scenario in which they could be used. The best way of understanding them is to put on your C# hat for a few moments and consider the following code snippet:

string[] array = new string[] { "Test1", "Test2", "Test3" };

foreach (string s in array)
{
    Console.WriteLine(s);
}

To summarise how the above works, we take our collection of values – Test1, Test2 and Test3 – and carry out a particular action against each; in this case, printing the value out to the console. This, in a nutshell, is how Cursors work, and you have a great deal of versatility in what action you take during each iteration of the “loop”.

With a clear understanding of how Cursors work, the below script, which accomplishes the aims set out above, should hopefully be a lot clearer:

DECLARE @FirewallRule NVARCHAR(128)

DECLARE REMOVEFWRULES_CURSOR CURSOR
	LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT DISTINCT [name]
FROM sys.database_firewall_rules

OPEN REMOVEFWRULES_CURSOR
FETCH NEXT FROM REMOVEFWRULES_CURSOR INTO @FirewallRule
WHILE @@FETCH_STATUS = 0
BEGIN
	EXECUTE sp_delete_database_firewall_rule @FirewallRule
	PRINT 'Firewall rule ' + @FirewallRule + ' has been successfully deleted.'
	FETCH NEXT FROM REMOVEFWRULES_CURSOR INTO @FirewallRule
END
CLOSE REMOVEFWRULES_CURSOR
DEALLOCATE REMOVEFWRULES_CURSOR

GO

EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule1',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';

EXECUTE sp_set_database_firewall_rule @name = N'MyDBFirewallRule2',
		@start_ip_address = '1.2.3.4', @end_ip_address = '1.2.3.4';
		

To integrate this as part of your existing database project, add a new Post-Deployment Script file and modify the above to reflect your requirements. As the name indicates, the script will run after all other aspects of your solution deployment have been completed. Now, the key caveat to bear in mind with this solution is that, during deployment, there will be a brief period of time when all Database Firewall Rules are removed from the database. This could potentially cause existing database connections to drop, or new connections to fail altogether. You should take care when using the above code snippet within a production environment, and I would recommend you look at an alternative solution if your application/system cannot tolerate even a second of downtime.

Perhaps one of the most useful features at your disposal when working with Azure SQL Databases is the ability to integrate your Azure Active Directory (Azure AD) login accounts – à la Windows Authentication for on-premises SQL Server. There are numerous benefits in shifting away from SQL Server-only user accounts in favour of Azure AD:

  • Ensures consistent login identities across multiple services.
  • Can enforce password complexity and refresh rules more easily.
  • Once configured, they behave exactly the same as standard SQL Server-only logins.
  • Supports advanced usage scenarios involving Azure AD, such as multi-factor authentication and Single Sign-On (SSO) via Active Directory Federation Services (ADFS).

Setup can be completed in a pinch, although you will need to allocate a single user or group of users as the Active Directory admin for the Azure SQL Server. You should also take due care when choosing your Active Directory admin(s); one suggestion would be to use a dedicated service account with a strong password for this role, instead of granting such extensive privileges to normal user accounts.

Regardless of how you go about configuring the feature, I would recommend using it wherever you can, both for internal purposes and for anyone who wishes to access your SQL Server from an external directory. This second scenario is, you may be surprised to hear, fully supported. It assumes, first off, that you have added this account to your directory as a Guest/External User account. Then, you just follow the normal steps to get the account created on your Azure SQL Server.

There is one major “gotcha” to bear in mind when doing this. Let’s assume that you have added john.smith@domain.co.uk to the Azure AD tenant test.onmicrosoft.com. You then go to set up this account to access a SQL Server instance on the tenant. You will more than likely receive an error when using the example syntax below to create the account:

CREATE USER [john.smith@domain.co.uk] FROM EXTERNAL PROVIDER

The issue is, thankfully, simple to understand and fix. When external user accounts are added to your Azure AD tenant, despite retaining the login name from their source directory, they are stored in the new directory with a different UserPrincipalName (UPN). Consider the above example – the UPN in the source directory would be as follows:

john.smith@domain.co.uk

Whereas, as the Azure AD tenant name in this example is test.onmicrosoft.com, the UPN for the object would be:

john.smith_domain.co.uk#EXT#@test.onmicrosoft.com

I assume that this is done to prevent any UPN duplication across Microsoft’s no-doubt dizzying array of cloud Active Directory tenants and forests. In any event, knowing this, we can adjust our code above to suit – and successfully create our Database user account:

CREATE USER [john.smith_domain.co.uk#EXT#@test.onmicrosoft.com] FROM EXTERNAL PROVIDER

I guess this is one of those things where having at least a casual awareness of how other technologies within the Microsoft “stack” work can assist you greatly in troubleshooting what turn out to be simple errors in your code. Frustrating all the same, but we can claim knowledge of an obscure piece of Azure AD trivia as our end result 🙂