If you are looking for an easy-to-use and highly extensible mail relay service, SendGrid represents the most developer-friendly solution on the market today. What’s even better is that it’s available on Azure, making it the ideal choice if you are developing an existing solution on the Azure stack. What I like best about the service is the extensive documentation covering every aspect of its Web API, structured to provide a clear explanation of endpoint methods, required properties, and example outputs – exactly how all technical documentation should be laid out.

I recently had a requirement to integrate with the SendGrid API to extract email statistic information into a SQL database. My initial thought was that I would need to resort to a bespoke C# solution to achieve these requirements. However, keenly remembering my commitment this year to find opportunities to utilise the service more, I decided to investigate whether Microsoft Flow could streamline this process. Suffice it to say, I was pleasantly surprised, and what I wanted to do as part of this week’s blog post was demonstrate how I was able to take advantage of Microsoft Flow to deliver my requirements. In the process, I hope to get you thinking about how you approach integration requirements in the future, and to challenge some of the preconceptions around this.

Before we get into creating the Flow itself…

…you will need to create a table within your SQL database to store the requisite data. This script should do the trick:

CREATE TABLE [dbo].[SendGridStatistic]
(
	[SendGridStatisticUID] [uniqueidentifier] NULL DEFAULT NEWID(),
	[Date] DATE NOT NULL,
	[CategoryName] VARCHAR(50) NULL,
	[Blocks] FLOAT NOT NULL,
	[BounceDrops] FLOAT NOT NULL,
	[Bounces] FLOAT NOT NULL,
	[Clicks] FLOAT NOT NULL,
	[Deferred] FLOAT NOT NULL,
	[Delivered] FLOAT NOT NULL,
	[InvalidEmail] FLOAT NOT NULL,
	[Opens] FLOAT NOT NULL,
	[Processed] FLOAT NOT NULL,
	[SpamReportDrops] FLOAT NOT NULL,
	[SpamReports] FLOAT NOT NULL,
	[UniqueClicks] FLOAT NOT NULL,
	[UniqueOpens] FLOAT NOT NULL,
	[UnsubscribeDrops] FLOAT NOT NULL,
	[Unsubscribes] FLOAT NOT NULL
)

A few things to point out with the above:

  • The CategoryName field is only required if you wish to return statistic information grouped by category from the API. The example that follows primarily covers this scenario, but I will also demonstrate how to return consolidated statistic information if you would prefer to exclude this column.
  • Microsoft Flow will only be able to map the individual statistic count values to FLOAT fields. If you attempt to use an INT, BIGINT etc. data type, then the option to map these fields will not appear. This is a little annoying, given that FLOATs are imprecise by nature, but because the statistic counts are whole numbers, it shouldn’t cause any real problems.
  • The SendGridStatisticUID is technically optional and could be replaced by an INT/IDENTITY seed instead, or removed entirely. Remember, though, that it is always good practice to have a unique column value on each table, to aid in individual record operations.

In addition, you will also need to ensure you have generated an API key for SendGrid that has sufficient privileges to access the Statistic information for your account.

With everything ready, we can now “flow” quite nicely into building things out. The screenshot below demonstrates how the completed Flow should look from start to finish, and the sections that follow will discuss what is required for each constituent element.

Recurrence

The major boon when working with Flow is the diverse range of options you have for triggering it – either based on certain conditions within an application or simply on a recurring schedule. For this example, as we will be extracting statistic information for an entire 24-hour period, you should ensure that the Flow executes at least once daily. The precise timing of this is up to you, but for this example, I have suggested 2 AM local time each day. The configured recurrence settings should resemble the below if done correctly:

You should be aware that when your Flow is first activated, it will execute straightaway, regardless of what settings you have configured above.

HTTP

As the SendGrid Web API is an HTTP endpoint, we can utilise the built-in HTTP connector to retrieve the information we need. This is done via a GET operation, with authentication achieved via a Raw header value containing the API key generated earlier. The tricky bit comes in building the URI so that the Flow retrieves exactly the information we want – namely, all statistic information covering the previous day. There is also the (optional) requirement of ensuring that statistic information is grouped by category when retrieved. Fortunately, we can get around this problem by using a bit of Expression trickery to build a dynamic URI value each time the Flow is executed. The expression code to use will depend on whether or not you require category grouping. I have provided both examples below, so simply choose the one that meets your specific requirement:

Retrieve Consolidated Statistics
concat('https://api.sendgrid.com/v3/stats?start_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&end_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')))
Retrieve Statistics Grouped By Category
concat('https://api.sendgrid.com/v3/categories/stats?start_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&end_date=', string(getPastTime(1, 'day', 'yyyy-MM-dd')), '&categories=cat1&categories=cat2')

Note: For this example, statistic information would be returned only for the categories cat1 and cat2. These should be updated to suit your requirements, and you can add additional categories by extending the URI value like so: &categories=cat3&categories=cat4 etc.

Your completed HTTP component should resemble the below if done correctly. Note in particular the requirement to have Bearer and a space before specifying your API key:
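
If you want to sanity-check your API key and the dynamic URI outside of Flow, the rough C# sketch below performs the equivalent request. It is illustrative only – the SENDGRID_API_KEY environment variable is simply my assumed way of supplying the key – but it mirrors what the HTTP action does: yesterday’s date for both start_date and end_date, and an Authorization header of Bearer, a space, then the key.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class SendGridStatsCheck
{
    static async Task Main()
    {
        // Mirrors getPastTime(1, 'day', 'yyyy-MM-dd') in the Flow expression.
        string yesterday = DateTime.UtcNow.AddDays(-1).ToString("yyyy-MM-dd");

        // Consolidated statistics endpoint; switch to /v3/categories/stats and append
        // &categories=... values for the category-grouped variant.
        string uri = $"https://api.sendgrid.com/v3/stats?start_date={yesterday}&end_date={yesterday}";

        using (var client = new HttpClient())
        {
            // Bearer authentication, exactly as configured in the HTTP component's header.
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Bearer", Environment.GetEnvironmentVariable("SENDGRID_API_KEY"));

            HttpResponseMessage response = await client.GetAsync(uri);
            Console.WriteLine((int)response.StatusCode);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}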

Parse JSON

A successful 200 response from the Web API endpoint will return a JSON object, listing all statistic information grouped by date (and category, if used). I always struggle when it comes to working with JSON – a symptom of working too long with relational databases, I think – and JSON result sets are always a challenge for me to work with. Once again, Flow comes to the rescue by providing a Parse JSON component. This was introduced with what appears to be little fanfare last year, but it really proves its capabilities in this scenario. The only bit you will need to worry about is providing a sample schema so that the service can properly interpret your data. The Use sample payload to generate schema option is the surest way of achieving this, and you can use the example payloads provided on the SendGrid website to facilitate this:

Retrieve Consolidated Statistics: https://sendgrid.com/docs/API_Reference/Web_API_v3/Stats/global.html

Retrieve Statistics Grouped By Category: https://sendgrid.com/docs/API_Reference/Web_API_v3/Stats/categories.html
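
For context, the grouped-by-category payload is an array with one element per day in the requested range, each containing a stats array with one element per category – which is why Flow later wraps the database insert in nested Apply to each steps. A rough C# sketch of that shape is below; the property names are indicative only and based on the sample payloads linked above, so always generate your schema from the real sample rather than hand-crafting it:

using System.Collections.Generic;

// Indicative shape of the "statistics grouped by category" response - one element per day.
public class CategoryStatsDay
{
    public string Date { get; set; }              // "date"
    public List<CategoryStat> Stats { get; set; } // "stats" - one entry per category
}

public class CategoryStat
{
    public string Type { get; set; }                      // "type", e.g. "category"
    public string Name { get; set; }                      // "name" - maps to the CategoryName column
    public Dictionary<string, long> Metrics { get; set; } // "metrics" - the individual counts (blocks, bounces, clicks etc.)
}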

An example screenshot is provided below in case you get stuck with this:

Getting the records into the database

Here’s where things get confusing…at least for me when I was building out this Flow for the first time. When you attempt to add an Insert row step to the Flow and specify your input from the Parse JSON step, Microsoft Flow will automatically add two Apply to each steps to properly handle the input. I can understand why this is the case, given that we are working with a nested JSON response, but it does provide an ample opportunity to revisit an internet meme of old…

Just pretend Xzibit is Microsoft Flow…

With the above ready and primed, you can begin to populate your Insert row step. Your first step here will more than likely be to configure your database connection settings using the + Add New Connection option:

The nicest thing about this is that you can utilise the on-premises data gateway to connect to a non-cloud database if required. The usual rules apply, regardless of where your database is located – use a least-privileged account, configure any required IP whitelisting etc.

With your connection configured, all that’s left is to provide the name of your table and then perform a field mapping exercise from the JSON response. If you are utilising the SendGridStatisticUID field, then this should be left blank to ensure that the default constraint kicks in correctly on the database side:

The Proof is in the Pudding: Testing your Flow

All that’s left now is to test your Flow. As highlighted earlier in the post, your Flow will automatically execute after being enabled, meaning that you will be in a position to determine very quickly if things are working or not. Assuming everything executes OK, you can verify that your database table resembles the below example:

This example output utilises the CategoryName value, which will result in multiple data rows for each date, depending on the number of categories you are working with. This is why the SendGridStatisticUID is so handy for this scenario 🙂

Conclusions or Wot I Think

When it came to delivering the requirements set out in this post’s introduction, I cannot overemphasise how much Microsoft Flow saved my bacon. My initial scoping exercise strongly led me towards having to develop a fully bespoke solution in code, with additional work then required to deploy this out to a dedicated environment for continuous execution. This would have surely led to:

  • Increased deployment time
  • Additional cost for the provisioning of a dedicated execution environment
  • Wasted time/effort due to bug-fixing or unforeseen errors
  • Long-term problems resulting from maintaining a custom code base and ensuring that other colleagues within the business could properly interpret the code

Instead, I was able to design, build and fully test the above solution in less than 2 hours, utilising a platform that has a wide array of documentation and online support and which, for our particular scenario, did not result in any additional cost. This, I feel, best summarises the value that Microsoft Flow can bring to the table. It overturns many of the assumptions that you generally have to make when implementing complex integration requirements, allowing you to instead focus on delivering an effective solution quickly and neatly. And, for those who need a bit more punch due to highly specific business requirements, Azure Logic Apps act as the perfect meeting ground for both sides of the spectrum. The next time you find yourself staring down the abyss of a seemingly impossible integration requirement, take a look at what Microsoft Flow can offer. You just might surprise yourself.

I started working with Azure Application Insights towards the end of last year, during a major project for an enterprise organisation. At the time, our principal requirement was to ensure that we could effectively track the usage patterns of a website – the pages visited, the amount of time spent on each page etc. – all information types that, traditionally, you may turn to tools such as Google Analytics to generate. What swung the decision to go for Application Insights was the added value that the service provides from a developer standpoint, particularly if you are already heavily invested in .NET or Azure. The ability to view detailed information regarding errors on a web page, to automatically export this information out into Team Foundation Server/Team Services for further investigation, and the very extensible and customisable manner in which data can be accessed or exported were all major benefits for our particular scenario. It’s a decision I don’t think I will ever regret and, even today, the product is still finding very neat and clever ways of surprising me 🙂

One of the other features that make Application Insights a more extensible solution compared with Google Analytics is the ability to extend the information that is captured from your website page requests or visitors via custom properties. These properties are then viewable within Application Insights Analytics, as well as being exportable (as we will discover shortly). For example, if you are interested in determining the previous URL that a web user was on before visiting a page within a C# MVC website, create a new class file within your project called ReferrerTelemetryInitializer and add the following sample code:

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using System.Web;

namespace MyNamespace.MyProject.MyProjectName.MVC.Utilities
{
    public class ReferrerTelemetryInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            // Only decorate request telemetry - other telemetry types have no referrer.
            if (telemetry is RequestTelemetry)
            {
                // Null-conditional operators guard against requests with no HTTP context or no referrer.
                string referrer = HttpContext.Current?.Request?.UrlReferrer?.ToString();
                telemetry.Context.Properties["Referrer"] = referrer;
            }
        }
    }
}
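
One thing the snippet above does not show is making the Application Insights SDK aware of the initializer. Assuming a standard ASP.NET MVC project with the SDK installed, one way of doing this is to register it at application start-up (it can equally be declared within ApplicationInsights.config); a minimal sketch is below:

using Microsoft.ApplicationInsights.Extensibility;
using MyNamespace.MyProject.MyProjectName.MVC.Utilities;

namespace MyNamespace.MyProject.MyProjectName.MVC
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // Ensure every telemetry item passes through ReferrerTelemetryInitializer.Initialize().
            TelemetryConfiguration.Active.TelemetryInitializers.Add(new ReferrerTelemetryInitializer());

            // ...existing start-up code (route registration, bundles etc.) continues here.
        }
    }
}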

Be sure to add all required service references and rename your namespace value accordingly before deploying out. After this is done, the following query can then be executed within Application Insights Analytics to access the Referral information:

requests
| extend referrer = tostring(customDimensions.Referrer)
| project id, session_Id, user_Id, referrer

The extend portion of the query is required because Application Insights groups all custom property fields together into a key/value array. If you are working with other custom property fields, then you would just replace the value after the customDimensions. portion of the query with the property name declared within your code.
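
As a purely hypothetical illustration of that point, an additional initializer could capture the visitor’s user agent string – the property name assigned in code (“UserAgent” here) is exactly the key you would then reference after customDimensions. in your query:

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using System.Web;

namespace MyNamespace.MyProject.MyProjectName.MVC.Utilities
{
    public class UserAgentTelemetryInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            if (telemetry is RequestTelemetry)
            {
                // Surfaces in Analytics as customDimensions.UserAgent.
                telemetry.Context.Properties["UserAgent"] = HttpContext.Current?.Request?.UserAgent;
            }
        }
    }
}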

If you have very short-term and simplistic requirements for monitoring your website performance data, then the above solution will likely be sufficient for your purposes. But maybe you require data to be stored beyond the default 90-day retention limit or you have a requirement to incorporate the information as part of an existing database or reporting application. This is where the Continuous Export feature becomes really handy, by allowing you to continually stream all data that passes through the service to an Azure Storage Account. From there, you can look at configuring a Stream Analytics Job to parse the data and pass it through to your required location. Microsoft very handily provides two guides on how to get this data into a SQL database and also into a Stream Analytics Dashboard within Power BI.

What I like most about Stream Analytics is that it utilises a SQL-like language when interacting with data. For recovering T-SQL addicts like myself, this can help overcome a major learning barrier when first getting to grips with the service. I would still urge some caution, however, and recommend you have the online documentation for the Stream Analytics Query Language to hand, as there are a few things that may catch you out. A good example of this is that whilst the language supports data type conversions via CAST operations, exactly like T-SQL, you are restricted to a limited list of data types as part of the conversion.

SQL developers may also encounter a major barrier when working with custom property data derived from Application Insights. Given the nature of how these are stored, there is specific Stream Analytics syntax that has to be utilised to access individual property values. We’ll take a closer look now to see just how this is done, continuing our earlier example utilising the Referrer field.

First of all, make sure you have configured an Input for your Request data from within Stream Analytics. The settings should look similar to the image below if done correctly:

The full Path pattern will be the name of your Application Insights resource, followed by a GUID value taken from the name of the Storage Account container utilised for Continuous Export, and then the path pattern that determines the folder name and the variables for date/time. It should look similar to this:

myapplicationinsightsaccount_16ef6b1f59944166bc37198a41c3fcf1/Requests/{date}/{time}

With your input configured correctly, you can now navigate to the Query tab and utilise the following query to return the same information that we accessed above within Application Insights:

SELECT
    requestflat.ArrayValue.id AS RequestID,
    r.context.session.id AS SessionID,
    r.context.[user].anonId AS AnonymousID,
    GetRecordPropertyValue(GetArrayElement(r.[context].[custom].[dimensions], 5), 'Referrer') AS ReferralURL
INTO [RequestsOutput]
FROM [Requests] AS r
CROSS APPLY GetElements(r.[request]) AS requestflat

Now, it’s worth noting that the GetArrayElement function relies on a position value within the array to return the data correctly. For the example provided above, the Referrer field is always the fifth key/value pair within the array. Therefore, it may be necessary to inspect the values within the context.custom.dimensions object to determine the position of your required field. In the above query, you could add the field r.context.custom.dimensions to your SELECT clause to facilitate any further interrogation.

Application Insights, in and of itself, arguably provides feature-rich and expansive capabilities as a standalone web analytics tool. It is certainly a lot more “open” with respect to how you can access the data – a welcome design decision that is at the heart of a lot of Microsoft products these days. When you start to combine Application Insights with Stream Analytics, a whole world of additional reporting capabilities and long-term analysis is immediately unlocked for your web applications. Stream Analytics is also incredibly helpful for those who have a much more grounded background working with databases. Using the tool, you can quickly interact with and convert data into the required format using a familiar language. It would be good to see, as part of some of the examples available online, tutorials on how to straightforwardly access custom dimension properties, so as to make this task simpler to achieve. But this does not detract from how impressive I think both tools are – both as standalone products and combined together.

When working with web applications and Azure App Service, it may sometimes be necessary to delete or remove files from a website. Whether it is a deprecated feature or a bit of development “junk” that was accidentally left on your website, these files can often introduce processing overhead or even security vulnerabilities if left unattended. It is, therefore, good practice to ensure that these are removed during a deployment. Fortunately, this is where tools such as Web Deploy really come into their own. When using this from within Visual Studio via the Publish dialog box, you can specify the removal of any file that does not exist within your Project via the File Publish Options section on the Settings tab:

There is also the Exclude files from the App_Data folder setting, which has a bearing on how the above operates, but we’ll come back to that shortly…

Whilst this feature is useful if you are deploying out to a dev/test environment manually from Visual Studio, it is less so if you have implemented an automated release strategy via a tool such as Visual Studio Team Services (the cloud version of Team Foundation Server). This is where all steps of a deployment are programmatically defined and then re-used whenever a new release to an environment needs to be performed. Typically, this may involve some coding as part of a PowerShell script or similar, but what makes Team Services really effective is the ability to “drag and drop” common deployment actions that cover most release scenarios. Case in point – a specific task is provided out of the box to handle deployments to Azure App Service:

What’s even better is the fact that we have the same option at our disposal à la Visual Studio – although you would be forgiven for overlooking it given how neatly the settings are tucked away 🙂

Note that you have to specifically enable the option to Publish using Web Deploy for the Remove additional files at destination option to appear. It’s important, therefore, that you fully understand how Web Deploy works in comparison to other options, such as FTP deploy. I think you will find, though, that the list of benefits far outweighs any negatives. In fact, the only drawback of using this option is that you must be using Microsoft “approved” tools, such as Visual Studio, to facilitate it.

We saw earlier in this post the Exclude files from the App_Data folder setting. Typically, you may be using this folder as some form of local file store for configuration data or similar. It is also the location where any WebJobs configured for your website would be stored. With the Remove additional files at destination option enabled, you may assume that Web Deploy will indiscriminately delete all files residing within the App_Data directory. Luckily, the automated task instead throws an error if it detects any files within the directory that may be affected by the delete operation:

Helpful in the sense that the task is not deleting files which could be potentially important, frustrating in that the deployment will not complete successfully! As you may have already guessed, enabling the Exclude files from the App_Data folder setting in Visual Studio/on the Team Services task gets around this issue:

You can then sit back and verify as part of the Team Services Logs that any file not in your source project is deleted successfully during deployment:

Manual deployments to Production systems can be fraught with countless hidden dangers – the potential for an accidental action chief among them, but also the risk of outdated components of a system not being flagged up and removed as part of a release cycle. Automating deployments goes a long way towards taking human error out of this equation and, with the inclusion of this handy feature to remove files not within your source code during the deployment, also negates the need for any manual intervention after a deployment to rectify any potential issues. If you are still toying with introducing fully automated deployments within your environment, then I would urge you wholeheartedly to commit the effort towards achieving this outcome. Get in touch if you need any help with this, and I would be happy to lend some assistance 🙂

When working with Virtual Machines (VMs) on Azure that have been deployed using a Linux operating system (Ubuntu, Debian, Red Hat etc.), you would be forgiven for assuming that the experience would be rocky compared with working with Windows VMs. Far from it – you can expect an equivalent level of support for features such as full disk encryption, backup and administrator credentials management directly within the Azure portal, to name but a few. Granted, there may be some difference in the availability of certain extensions when compared with a Windows VM, but the experience on balance is comparable and makes the management of Linux VMs a less onerous journey if your background is firmly rooted in Windows operating systems.

To ensure that the Azure platform can effectively carry out some of the tasks listed above, the Azure Linux Agent is deployed to all newly created VMs. Written in Python and supporting a wide variety of common Linux distributions, it is something I would recommend never removing from your Linux VM, no matter how tempting this may be; doing so could have potentially debilitating consequences within Production environments. A good example of this would be if you ever needed to add an additional disk to your VM – a task that becomes impossible to achieve without the Agent present. As well as having the proper reverence for the Agent, it’s important to keep an eye on how the service is performing, even more so if you are using Recovery Services vaults to take snapshots of your VM and back it up to another location. Otherwise, you may start to see errors like the one below being generated whenever a backup job attempts to complete:

It’s highly possible that many administrators are seeing this error at the time of writing this post (January 2018). The recent disclosures around Intel CPU vulnerabilities have prompted many cloud vendors, including Microsoft, to roll out emergency patches across their entire data centre estates to address the underlying vulnerability. Whilst it is commendable that cloud vendors have acted promptly to address this flaw, the pace of the work and the dizzying array of resources affected has doubtless led to some issues that could not have been foreseen from the outset. I believe, based on some of the available evidence (more on this later), that one of the unintended consequences of this work is the rise of the above error message with the Azure Linux Agent.

When attempting to deal with this error myself, there were a few different steps I had to try before the backup started working correctly. I have collected all of these below and, if you find yourself in the same boat, one or all of them should resolve the issue for you.

First, ensure that you are on the latest version of the Agent

This one probably goes without saying as it’s generally the most common answer to any error 🙂 However, the steps for deploying updates out onto a Linux machine can be more complex, particularly if your primary method of accessing the machine is via an SSH terminal. The process for updating your Agent depends on your Linux version – this article provides instructions for the most common variants. In addition, it is highly recommended that the auto-update feature is enabled, as described in the article.

Verify that the Agent service starts successfully

It’s possible that the service is not running on your Linux VM. This can be confirmed by executing the following command:

service walinuxagent start

Be sure to have your administrator credentials handy, as you will need to authenticate to successfully start the service. You can then check that the service is running by executing the ps -e command.

Clear out the Agent cache files

The Agent stores a lot of XML cache files within the /var/lib/waagent/ folder, which can sometimes cause issues with the Agent. Microsoft has specifically recommended that the following command be executed on your machine after the 4th of January if you are experiencing issues:

sudo rm -f /var/lib/waagent/*.[0-9]*.xml

The command will delete all “old” XML files within the above folder. A restart of the service should not be required. The date mentioned above links back to the theory suggested earlier in this post that the Intel chipset issues and this error message are linked in some way, as the dates seem to tie in with when news first broke regarding the vulnerability.

If you are still having problems

Read through the entirety of this article, and try all of the steps suggested – including digging around in the log files, if required. If all else fails, open a support request directly with Microsoft.

Hopefully, by following the above steps, your backups are now working again without issue 🙂

When it comes to technology learning, it can often feel as if you are fighting against a constant wave of change, as studying is outpaced by the introduction of new technical innovations. Fighting the tide is often the most desirable outcome to work towards, but it is understandable why individuals choose to specialise in a particular technology area instead. There is no doubt some comfort in becoming a subject matter expert and in not having to worry about “keeping up with the Joneses”. However, when working with an application such as Dynamics 365 for Customer Engagement (D365CE), I would argue it is almost impossible to ignore the wider context of what sits alongside the application, particularly Azure, Microsoft’s cloud platform. Being able to understand how the application can be extended via external integrations is typically high on the list of any project requirements, and often these integrations require a light-touch Azure involvement, at a minimum. Therefore, the ability to say that you are confident in accomplishing certain key tasks within Azure instantly puts you ahead of others and in a position to support your business/clients more straightforwardly.

Here are 4 good reasons why you should start to familiarise yourself with Azure, if you haven’t done so already, or dedicate some additional time towards increasing your knowledge in an appropriate area:

Dynamics 365 for Customer Engagement is an Azure application

Well…we can perhaps not answer this definitively and say that 100% of D365CE is hosted on Azure (I did hear a rumour that some aspects of the infrastructure were hosted on AWS). Certainly, for instances that are provisioned within the UK, there is ample evidence to suggest this to be the case. What can be said with some degree of certainty is that D365CE is an Azure-leveraged application, because it uses key aspects of the service to deliver various functionality within the application:

  • Azure Active Directory: Arguably the crux of D365CE is the security/identity aspect, all of which is powered using Microsoft’s cloud version of Active Directory.
  • Azure Key Vault: Encryption is enabled by default on all D365CE databases, and the management of encryption keys is provided via Azure Key Vault.
  • Office 365: Similar to D365CE, Office 365 is – technically – an Azure cloud service provided by Microsoft. As both Office 365 and D365CE often need to be tightly knit together, via features such as Server-Side Synchronisation, Office 365 Groups and SharePoint document management, it can be considered a de facto part of the base application.

It’s fairly evident, therefore, that D365CE can be considered a Software as a Service (SaaS) application hosted on Azure. But why is all this important? For the simple reason that, as a D365CE professional supporting the full breadth of the application and all it entails, you are already an Azure professional by default. Not having even a cursory understanding of Azure and what it can offer will immediately put you at a disadvantage compared with others who do, and increasingly places you in a position where your D365CE expertise is severely blunted.

It proves to prospective employers that you are not just a one-trick pony

When it comes to interviews for roles focused on D365CE, I’ve been on both sides of the table. What I’ve found separates a good D365CE CV from an excellent one all boils down to how effectively the candidate has been able to expand their knowledge into other areas. How much additional knowledge of other applications, programming languages etc. does the candidate bring to the business? How effectively has the candidate moved out of their comfort zone in the past in exploring new technologies, either in their current roles or outside of work? More importantly, how much initiative and passion has the candidate shown in embracing change? A candidate who is able to answer these questions positively and can attribute, for example, extensive knowledge of Azure will instantly move up in my estimation of their ability. On the flip side, I believe that interviews that have resulted in a job offer for me have been helped, in no small part, by the additional technical skills that I can make available to a prospective employer.

To get certain things done involving D365CE, Azure knowledge is a mandatory requirement

I’ve talked about one of these tasks before on the blog – namely, how to set up the Azure Data Export solution to automatically synchronise your application data to an Azure SQL Database. Unless you are in the fortunate position of having an Azure-savvy colleague who can assist you with this, the only way you are going to be able to successfully complete this task is to know how to deploy an Azure SQL Server instance and a database within it, and how to set up an Azure Key Vault. Having at least some familiarity with how to deploy simple resources in Azure and accomplish tasks via PowerShell script execution will place you in an excellent position to achieve the requirements of this task, and others like it.

The above is just a flavour of some of the things you can do with D365CE and Azure together, and there are doubtless many more I have missed 🙂 The key point I would highlight is that you should not just naively assume that D365CE is containerised away from Azure; in fact, often the clearest and cleanest way of achieving more complex business/technical requirements will require a detailed consideration of what can be built out within Azure.

There’s really no good reason not to, thanks to the wealth of resources available online for Azure training

A sea change seems to be occurring at Microsoft with respect to online documentation/training resources. Previously, TechNet and MSDN would be your go-to resources to find out how something Microsoft-related works. Now, the Microsoft Docs website is where you can find the vast majority of technical documentation. I really rate the new experience that Microsoft Docs provides, and there now seems to be a concerted effort to ensure that these articles are clear, easy to follow and include end-to-end steps on how to complete certain tasks. This is certainly the case for Azure and, with this in mind, I defy anyone to find a reasonable enough excuse not to begin reading through these articles. They are the quickest way towards expanding your knowledge within an area of Azure that interests you the most, or to help prepare you to, for example, set up a new Azure SQL database from scratch.

For those who learn better via visual tools, Microsoft has also greatly expanded the number of online video courses available for Azure that can be accessed for free. There are also some excellent “deep-dive” topic areas that can be used to help prepare you for Azure certification.

Conclusions or Wot I Think

I use the term “D365CE professional” a number of times throughout this post. This is perhaps an unhelpful label to ascribe to anyone working with D365CE today. A far better title is, I would argue, “Microsoft cloud professional”, as this gets to the heart of what I think anyone who considers themselves a D365CE “expert” should be. Building and supporting solutions within D365CE is by no means an isolated experience, as you might have argued a few years back. Rather, the onus is on ensuring that consultants, developers etc. are as multi-faceted as possible from a skillset perspective. I talked previously on the blog about becoming a Swiss Army knife in D365CE. Whilst this is still a noble and recommended goal, I believe casting the net wider can offer a number of benefits, not just for yourself, but for the businesses and clients you work with every day. It puts you centre-forward in being able to offer the latest opportunities to implement solutions that can increase efficiency, reduce costs and deliver positive end-user experiences. And, perhaps most importantly, it means you can confidently and accurately attest to your wide-ranging expertise in any given situation.