Is it just me or is British Summer Time (BST), AKA Daylight Saving Time (DST), an utterly pointless endeavour these days? Admittedly, on its introduction in 1916, it fulfilled a sensible objective – to provide more daylight hours during the summer. For agricultural, construction or other trades reliant on sufficient light to carry out their work, this was a godsend. In today’s modern world, with the proliferation of electricity and lighting in almost every corner of the UK, the whole concept now appears to be a curious relic of the western world. No major Asian, African or South American country adopts the practice and, given the increasingly important role these continents now play on the global stage, it wouldn’t surprise me if the whole concept becomes consigned to the scrapheap within our lifetimes.

My fury concerning BST surfaces thanks to my experience working with IT systems and, in particular, Microsoft Azure. Typically, any service with a Windows OS backend will do a pretty good job of determining the correct locale settings to apply, including BST/DST. These settings will generally be inherited by most applications installed on the machine, including SQL Server. You can verify this at any time by running the following query, kindly demonstrated by SQL Server legend Pinal Dave, on your SQL Server instance:
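A query along the following lines (my reconstruction of the commonly-shared approach, using the xp_regread extended stored procedure – the exact original may differ slightly) reads the TimeZoneKeyName value from the Registry and, on a UK-configured server, returns GMT Standard Time:

--Read the Windows time zone name from the Registry (requires appropriate permissions)
DECLARE @TimeZone VARCHAR(50);
EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE',
	'SYSTEM\CurrentControlSet\Control\TimeZoneInformation',
	'TimeZoneKeyName', @TimeZone OUTPUT;
SELECT @TimeZone AS TimeZoneName;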

As you can see from the underlying query, it is explicitly checking a Registry Key Value on the Windows Server where SQL Server resides – which has been set up for British (UK) locale settings. The Registry Key folder will, additionally, include information to tell the machine when to factor in BST/DST.

This is all well and good if we are managing dedicated, on-premises instances of SQL Server, where we have full control over our server environments. But what happens with a single Azure SQL database within the UK South region? The above code snippet is not compatible with Azure SQL, so we have no way of finding out for ourselves. We must turn to the following blog post from Microsoft to clarify things for us:

Currently, the default time zone on Azure SQL DB is UTC. Unfortunately, it is not possible to change this via server or database configuration.

We can verify this by running a quick GETDATE() query and comparing it against the current time during BST:
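A minimal version of that check (the column aliases are my own) might look like this:

SELECT GETDATE() AS CurrentDateTime, @@VERSION AS SQLVersion

During BST, the CurrentDateTime value returned will be an hour behind actual UK local time, confirming that the instance is working in UTC.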

(@@VERSION returns the current edition/version of the SQL instance which, in this case, we can confirm is Azure SQL)

The result of all of this is that all date/time values in Azure SQL will be stored in UTC format, meaning that you will have to manage any conversions yourself between interfacing applications or remote instances. Fortunately, there is a way that you can resolve this issue without ever leaving Azure SQL.

From SQL Server 2016 onwards (and on Azure SQL), Microsoft provides a system view that returns all time zones supported by the instance. Using this view, we can determine the correct time zone name to use for BST by listing every supported time zone:

SELECT *
FROM sys.time_zone_info
ORDER BY [name]

As highlighted above, for BST, GMT Standard Time is our timezone of choice, and we can see the correct offset. An additional flag field is included to indicate whether it is currently applicable or not.
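If you want to jump straight to the relevant row, a filtered version of the same query should confirm both the offset and the DST flag for us:

SELECT [name], current_utc_offset, is_currently_dst
FROM sys.time_zone_info
WHERE [name] = 'GMT Standard Time'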

With the name value in hand, we have everything we need to start working with a piece of T-SQL syntax that I was overjoyed to recently discover – AT TIME ZONE. When applied to a date field type (datetime, datetime2 etc.), it appends the appropriate time zone offset to the end of the date value. So, with some tinkering to our earlier GETDATE() query, we get the following output:

SELECT GETDATE() AT TIME ZONE 'GMT Standard Time', @@VERSION

While this is useful, in most cases, we would want any offset to be automatically applied against our Date/Time value. With some further refinement to the query via the DATEADD function, this requirement becomes achievable, and we can also view each value separately to verify everything is working as intended:

SELECT GETDATE() AS CurrentUTCDateTime,
	   GETDATE() AT TIME ZONE 'GMT Standard Time' AS CurrentUTCDateTimeWithGMTOffset,
	   DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'GMT Standard Time'), GETDATE()) AS CurrentGMTDateTime,
	   @@VERSION AS SQLVersion

Even better, the above works regardless of whether the offset is an increase or decrease relative to UTC. We can verify this fact by adjusting the query to instead convert into Haiti Standard Time which, at the time of writing, is observing DST and therefore sits 4 hours behind UTC:
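The adjustment is simply a change of time zone name (again, the column aliases below are my own):

SELECT GETDATE() AS CurrentUTCDateTime,
	   GETDATE() AT TIME ZONE 'Haiti Standard Time' AS CurrentUTCDateTimeWithHaitiOffset,
	   DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE 'Haiti Standard Time'), GETDATE()) AS CurrentHaitiDateTime,
	   @@VERSION AS SQLVersion

Because DATEPART(tz, …) returns a negative number of minutes in this case, DATEADD subtracts the 4 hours rather than adding them.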

So, as we can see, a few lines of code mean that we can straightforwardly work with data in our desired locale. 🙂

It does seem somewhat counter-intuitive that a service such as Azure SQL, hosted within a specific location, does not work in the correct date/time format for that region. When you consider the global scale of the Azure network and the challenges this brings to the fore, the decision to standardise on a single time zone for all systems does make sense and provides a level and consistent playing field for developers. One comment I would make is that this particular quirk does not appear to be well signposted for those who are just getting started with the service, an omission that could cause severe business or technical problems in the long term if not detected early. What is ultimately most fortunate is the simple fact that no overly complex coding or client application changes are required to fix this quirk – which is how all IT matters should be: easy and straightforward to resolve.

When it comes to handling large datasets in a formal, structured and highly scalable manner, nothing beats SQL Server. Having worked with the product for almost six years, I always look forward to the opportunity to put together some SQL queries or build out a new database. I think of it as a nice little treat, a reward amid the other, sometimes tedious, tasks that I have to deal with on a weekly basis. I rank knowledge of SQL Server pretty highly if you are aiming to become a serious Dynamics CRM/365 for Customer Engagement professional, and I credit my experience with SQL as one of the things that helped to streamline my journey into Dynamics.

It may be, however, that others are not as keen on the prospect of working with SQL databases, particularly when SQL appears unable to accommodate some of the alternative data storage formats that are commonplace amongst application developers. A good example of this is JSON (JavaScript Object Notation), a format that is widely used today as a portable and “easy to read” mechanism for transferring data. For someone more used to working with SQL, getting your head around JSON can be a bit of a challenge at first and – traditionally – was not something that Microsoft factored into the design of SQL Server. A lot has changed with Microsoft – to the extent that services such as Microsoft Flow and Azure now use JSON extensively – and with SQL Server too, as a whole host of related functions were added in SQL Server 2016 to provide straightforward conversions into JSON. The FOR JSON clause is the primary gateway into this world and is a piece of functionality which I have slowly, but surely, started to get my head around. What I wanted to do in this week’s blog post was provide a “crash course” on how to use this nifty piece of functionality, hopefully with the aim of giving you everything you need to start utilising it in your environment(s).

Before we begin…

To best illustrate how the clause works in practice, it is necessary to create an appropriate set of inter-related tables within SQL Server that will be used for all of the examples that follow. Here’s one I (rather unimaginatively) made earlier:

The code snippets to create them can be found below:

CREATE TABLE dbo.[ParentTable]
(
	ParentTableID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

GO

CREATE TABLE dbo.[ChildTable1]
(
	ChildTable1ID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	ParentTableID INT FOREIGN KEY REFERENCES dbo.[ParentTable](ParentTableID) NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

CREATE TABLE dbo.[ChildTable2]
(
	ChildTable2ID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	ParentTableID INT FOREIGN KEY REFERENCES dbo.[ParentTable](ParentTableID) NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

GO

CREATE TABLE dbo.[GrandchildTable]
(
	GrandchildTableID INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
	ChildTable2ID INT FOREIGN KEY REFERENCES dbo.[ChildTable2](ChildTable2ID) NULL,
	Field1	BIT		NOT NULL,
	Field2	CHAR(10)	NULL,
	Field3	VARCHAR(MAX) NULL
)

GO

The table structures are incredibly basic, but note, in particular, the FOREIGN KEY relationships from the two child tables to the parent, and also the additional parent/child relationship between the GrandchildTable and ChildTable2. You will also need to insert some test data into the tables to follow the rest of this post properly.
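If you need a starting point for that test data, the following INSERT statements will do the job (the values are purely illustrative and assume the tables have just been created, so that the IDENTITY values start at 1), giving you a couple of parent records, each with associated child and grandchild rows:

INSERT INTO dbo.ParentTable (Field1, Field2, Field3)
VALUES (1, 'Test', 'This is a test record'),
	   (0, 'Test', 'This is a test record');

INSERT INTO dbo.ChildTable1 (ParentTableID, Field1, Field2, Field3)
VALUES (1, 1, 'Test', 'This is a test record'),
	   (2, 0, 'Test', 'This is a test record');

INSERT INTO dbo.ChildTable2 (ParentTableID, Field1, Field2, Field3)
VALUES (1, 0, 'Test', 'This is a test record'),
	   (2, 1, 'Test', 'This is a test record');

INSERT INTO dbo.GrandchildTable (ChildTable2ID, Field1, Field2, Field3)
VALUES (1, 0, 'Test', 'This is a test record'),
	   (2, 1, 'Test', 'This is a test record');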

With our environment prepped, let’s take a look at the different ways we can convert our dataset into JSON format, with minimal effort involved.

Example 1: FOR JSON AUTO

If we were to look at doing a straightforward SELECT * query on all our tables, our query and expected output might look something like this:

SELECT *
FROM dbo.ParentTable AS PT
 INNER JOIN dbo.ChildTable1 AS CT1
  ON PT.ParentTableID = CT1.ParentTableID
 INNER JOIN dbo.ChildTable2 AS CT2
  ON PT.ParentTableID = CT2.ParentTableID
 INNER JOIN dbo.GrandchildTable AS GT
  ON CT2.ChildTable2ID = GT.ChildTable2ID

Our main issue with this query is that, because of how T-SQL joins work, the ParentTable records are repeated for every child and grandchild record. For a client application, this can be somewhat cumbersome to handle. FOR JSON AUTO can be straightforwardly added to the end of the above query to convert the output accordingly:

SELECT *
FROM dbo.ParentTable AS PT
 INNER JOIN dbo.ChildTable1 AS CT1
  ON PT.ParentTableID = CT1.ParentTableID
 INNER JOIN dbo.ChildTable2 AS CT2
  ON PT.ParentTableID = CT2.ParentTableID
 INNER JOIN dbo.GrandchildTable AS GT
  ON CT2.ChildTable2ID = GT.ChildTable2ID
FOR JSON AUTO

//Example output of the first 25 lines below:

[{
    "ParentTableID": 1,
    "Field1": true,
    "Field2": "Test      ",
    "Field3": "This is a test record",
    "CT1": [{
        "ChildTable1ID": 1,
        "ParentTableID": 1,
        "Field1": true,
        "Field2": "Test      ",
        "Field3": "This is a test record",
        "CT2": [{
            "ChildTable2ID": 1,
            "ParentTableID": 1,
            "Field1": false,
            "Field2": "Test      ",
            "Field3": "This is a test record",
            "GT": [{
                "GrandchildTableID": 1,
                "ChildTable2ID": 1,
                "Field1": false,
                "Field2": "Test      ",
                "Field3": "This is a test record"
            }]
       ...

This output provides a much more sensible structure, with no record duplication and proper nesting of child records.

Example 2: FOR JSON PATH, ROOT

With some modifications to the above query, it is also possible to specify a name for each element in the output, tailored to your specific requirements. For example, let’s say we had to provide the following element names for each of the example tables:

  • dbo.ParentTable -> Parent
  • dbo.ChildTable1 -> FirstChildTable
  • dbo.ChildTable2 -> SecondChildTable

The following query would achieve these requirements, in addition to adding a master root element name of MyTestSQLJSONObject:

SELECT PT.ParentTableID AS [Parent.ParentTableID], PT.Field1 AS [Parent.Field1], PT.Field2 AS [Parent.Field2], PT.Field3 AS [Parent.Field3],
	   CT1.ChildTable1ID AS [FirstChildTable.ChildTable1ID], CT1.Field1 AS [FirstChildTable.Field1], CT1.Field2 AS [FirstChildTable.Field2], CT1.Field3 AS [FirstChildTable.Field3],
	   CT2.ChildTable2ID AS [SecondChildTable.ChildTable2ID], CT2.Field1 AS [SecondChildTable.Field1], CT2.Field2 AS [SecondChildTable.Field2], CT2.Field3 AS [SecondChildTable.Field3],
	   GT.GrandchildTableID AS [GrandchildTable.GrandchildTableID], GT.Field1 AS [GrandchildTable.Field1], GT.Field2 AS [GrandchildTable.Field2], GT.Field3 AS [GrandchildTable.Field3]
FROM dbo.ParentTable AS PT
 INNER JOIN dbo.ChildTable1 AS CT1
  ON PT.ParentTableID = CT1.ParentTableID
 INNER JOIN dbo.ChildTable2 AS CT2
  ON PT.ParentTableID = CT2.ParentTableID
 INNER JOIN dbo.GrandchildTable AS GT
  ON CT2.ChildTable2ID = GT.ChildTable2ID
FOR JSON PATH, ROOT('MyTestSQLJSONObject')

//Example of first 25 lines below:

{
    "MyTestSQLJSONObject": [{
        "Parent": {
            "ParentTableID": 1,
            "Field1": true,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        },
        "FirstChildTable": {
            "ChildTable1ID": 1,
            "Field1": true,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        },
        "SecondChildTable": {
            "ChildTable1ID": 1,
            "Field1": false,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        },
        "GrandchildTable": {
            "GrandchildTableID": 1,
            "Field1": false,
            "Field2": "Test      ",
            "Field3": "This is a test record"
        ...

Example 3: NULL Field Values

One thing worth bearing in mind when working with the FOR JSON clause is how NULL field values behave. Take the example of a dbo.ParentTable record where Field2 has been left as NULL:
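You can replicate this state with a quick UPDATE and then check the record in question (purely for illustration – any record with a NULL field will do):

UPDATE dbo.ParentTable
SET Field2 = NULL
WHERE ParentTableID = 1;

SELECT *
FROM dbo.ParentTable AS PT
WHERE PT.ParentTableID = 1;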

When attempting to query this single record using the FOR JSON AUTO clause, we get the following output:
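(The query here is simply the earlier style of SELECT, restricted to the single record:)

SELECT *
FROM dbo.ParentTable AS PT
WHERE PT.ParentTableID = 1
FOR JSON AUTO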

//Example output below. Notice that no field name/value is returned for Field2 now

[{
    "ParentTableID": 1,
    "Field1": true,
    "Field3": "This is a test record"
}]

If you have a requirement to always return a property for every field, even when it is NULL, then you can use the INCLUDE_NULL_VALUES option to get around this:

SELECT *
FROM dbo.ParentTable AS PT
WHERE PT.ParentTableID = 1
FOR JSON AUTO, INCLUDE_NULL_VALUES
//Notice now that Field2 returns as expected

[{
    "ParentTableID": 1,
    "Field1": true,
    "Field2": null,
    "Field3": "This is a test record"
}]

Even with this option specified, a literal null value may still cause issues for a consuming application. In these scenarios, on the T-SQL side, you would generally use the ISNULL function to replace NULL values with an empty string. Further, because the field in this example is a CHAR(10) data type, the replacement value gets padded out to ten characters of whitespace, which also need removing from the output string. The following query fixes both of these problems:

SELECT ParentTableID, Field1, LTRIM(ISNULL(Field2, '')) AS Field2, Field3
FROM dbo.ParentTable AS PT
WHERE PT.ParentTableID = 1
FOR JSON AUTO
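For the record used above, the output should now resemble the following, with Field2 returned as an empty string:

//Example output below

[{
    "ParentTableID": 1,
    "Field1": true,
    "Field2": "",
    "Field3": "This is a test record"
}]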

Example 4: Using sub-queries to return child records as JSON objects

In most cases involving parent and child records, returning all of the data as JSON may not be required. Instead, it may be necessary to return the fields from the parent record only, with all child records rolled up into a single JSON object field on the parent record. Using subqueries, we can accommodate this scenario as follows:

SELECT PT.ParentTableID, PT.Field1, PT.Field2, PT.Field3,
(
	SELECT *
	FROM dbo.ChildTable1 AS CT1
	WHERE CT1.ParentTableID = PT.ParentTableID
	FOR JSON AUTO
) AS ChildTable1,
(
	SELECT *
	FROM dbo.ChildTable2 AS CT2
	 INNER JOIN dbo.GrandchildTable AS GT
	  ON CT2.ChildTable2ID = GT.ChildTable2ID
	WHERE CT2.ParentTableID = PT.ParentTableID
	FOR JSON AUTO
) AS ChildTable2
FROM dbo.ParentTable AS PT
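The exact contents will depend on your test data, but each parent row should come back with its child records packed into a single JSON string per column – along these lines for the ChildTable1 column of the first record:

//Illustrative ChildTable1 column value for ParentTableID 1

[{"ChildTable1ID":1,"ParentTableID":1,"Field1":true,"Field2":"Test      ","Field3":"This is a test record"}]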

Example 5: Storing FOR JSON Query Output in Parameters

In most scenarios, you will generally provide functions or Stored Procedures for developers to interface with when connecting to the database. It is in this situation where the ability to store the output of any query – including those that use the FOR JSON clause – within a parameter becomes very useful. The following snippet will store the output of a FOR JSON query into a parameter called @JSON, which is then retrievable at any time via a SELECT query:

DECLARE @JSON NVARCHAR(MAX) = (SELECT * FROM dbo.ChildTable2 AS CT2 INNER JOIN dbo.GrandchildTable AS GT ON CT2.ChildTable2ID = GT.ChildTable2ID FOR JSON AUTO)

SELECT @JSON

Wrapping Up: A Few Things to Bear in Mind

  • The FOR JSON clause is not compatible with Common Table Expressions (CTEs).
  • When storing the output of a FOR JSON query in a parameter, you have no native capability to query the inner contents of the JSON object via a SELECT…WHERE query.
  • Because there is no way of specifying the name of the single column that returns as part of a FOR JSON query, you cannot create a view that uses this clause.

I hope that this post has given you all the information and guidance needed to start working with FOR JSON clauses in your queries. The ability to straightforwardly convert SQL tabular data into a format that most developers would readily scream for is a convenient feature to have at our disposal and is indicative of how far the SQL Server product has moved forward in recent years. Hopefully, this feature will save you some hassle in the future and will mean that you can make friends with your developer colleagues 🙂

I’ve gone on record previously saying how highly I rate the Dynamics CRM/Dynamics 365 Customer Engagement (CRM/D365CE) community. Out of all the groups I have been a part of in the past, you couldn’t ask for a more diverse, highly passionate and – most importantly of all – helpful community. There are a lot of talented individuals out there who put a metric tonne of effort into providing the necessary tools, know-how and support to make our daily journey with CRM/D365CE that much easier to traverse.

An excellent case in point comes from the CRM DevOps extraordinaire himself, Ben Walker, who reached out to me regarding my recent post on default SiteMap areas vanishing mysteriously. Now, when you are working with tools like the XrmToolbox day in, day out, the propensity for facepalm moments caused by not noticing the obvious can increase exponentially over time. With this in mind, Ben has very kindly demonstrated a much simpler way of restoring missing SiteMap areas and, in doing so, has rightly pointed out just how much hassle and time the XrmToolbox can save you when you fully understand its capabilities. So let’s revisit the scenario discussed in the previous post and go through the insanely better approach to solving this issue:

  1. Download and run XrmToolbox and select the SiteMap Editor app, logging into your CRM/D365CE instance when prompted:

After logging in, you should see a screen similar to the below:

  2. Click on the Load SiteMap button to load the SiteMap definition for the instance you are connected to. It should bear some resemblance to the below when loaded:

  3. Expand the Area (Settings) node. It should resemble the below (i.e. no Group for Process Center):

  4. Right click on the Area (Settings) node and select the Add Default SiteMap Area button. This will launch the SiteMap Component Picker window, which lists all of the SiteMap components included by default in the application. Scroll down and select the ProcessCenter option, tick the Add child components too checkbox and press OK. The SiteMap Editor will then add the entire Group node for the ProcessCenter, including all child nodes:

  5. When you are ready, click on the Update SiteMap button and wait until the changes have been uploaded and published into the application. You can then log in to CRM/D365CE to verify that the new area has appeared successfully.

I love this alternative solution for a number of reasons. There are fewer steps involved, there is no requirement to resort to messing around with the SiteMap XML files (which has its own set of potential pitfalls, if done incorrectly) and the solution very much looks and feels like a “factory reset”, without any risk of removing other custom SiteMap areas that you may have added for alternate requirements. A huge thanks to Ben for reaching out and sharing this nifty solution and for rightly demonstrating how fantastic the CRM/D365CE community is 🙂

A vital part of any DevOps automation activity is to facilitate automatic builds of code projects on regular cycles. In larger teams, this becomes particularly desirable for a multitude of reasons:

  • Provides a means of ensuring that builds do not contain any glaring code errors that prevent a successful compile from taking place.
  • Enables builds to be triggered in a central, “master” location to which all developers regularly ship their code.
  • When incorporated as part of other steps during the build stage, other automation tasks can be bolted on straightforwardly – such as the running of Unit Tests and deployment of resources to development environment(s).

The great news is that, when working with either Visual Studio Team Services or Team Foundation Server (VSTS/TFS), the process of setting up the first automated build definition of your project is straightforward. All steps can be completed directly within the GUI interface and – although some of the detailed configuration settings can be a little daunting when reviewing them for the first time – the drag and drop interface means that you can quickly build consistent definitions that are easy to understand at a glance.

One such detailed configuration setting relates to your choice of Build and Release Agent. To provide the ability to carry out automated builds (and releases), VSTS/TFS requires a dedicated agent machine to be designated, which is then used to execute all required tasks. There are two flavours of Build and Release Agents:

  • Microsoft Hosted: Fairly self-explanatory, this is the most logical choice if your requirements are fairly standard – for example, a simple build/release definition for an MVC ASP.NET application. Microsoft provides a range of different Build and Release Agents, covering different operating system vendors and versions of Visual Studio.
  • Self-Hosted: In some cases, you may require access to highly bespoke modules or third-party applications to ensure that your build/release definitions complete successfully. A good example may be a non-Microsoft PowerShell cmdlet library. Or, it could be that you have strict business requirements around the storage of deployment resources. This is where Self-Hosted agents come into play. By installing the latest version of the Agent onto a computer of your choice – Windows, macOS or Linux – you can then use this machine as part of your builds/releases within both VSTS & TFS. You can also take this further by setting up as many different Agent machines as you like and then group these into a “pool”, thereby allowing concurrent jobs and enhanced scalability.

The necessary trade-off when using Self-Hosted agents is that you must manage and maintain the machine yourself – for example, you will need to install a valid version of Visual Studio and SQL Server Data Tools if you wish to build SQL Server Database projects. What’s more, if issues start to occur, you are on your own (to a large extent) when diagnosing and resolving them. One such problem you may find is with permissions on the build agent, with variants of the following error that may crop up from time to time during your builds:

The error will most likely make an appearance if your Build and Release Agent goes offline or a build is interrupted due to an issue on the machine itself, leaving behind files that were created mid-flight within the VSTS directory. When VSTS/TFS then attempts a new build and tries to write to or recreate files that already exist, it fails, and the above error is displayed. I have observed that, even if the execution account on the Build Agent machine has sufficient privileges to overwrite files in the directory, you will still run into this issue. The best resolution I have found – in all cases to date – is to log in to the agent machine manually, navigate to the affected directory/file (in this example, C:\VSTS\_work\SourceRootMapping\5dc5253c-785c-4de1-b722-e936d359879c\13\SourceFolder.json) and delete the file/folder in question. Removing the offending items effectively gives the next build definition execution a clean slate, and it should then complete without issue.

We are regularly told these days of the numerous benefits that “going serverless” can bring to the table, including, but not limited to, reduced management overhead, reduced cost of ownership and faster adoption of newer technology. The great challenge with all of this is that, because no two businesses are typically the same, there is often a requirement to boot up a Virtual Machine and run a specified app within a full server environment, so that we can achieve the level of functionality required to suit our business scenario. Self-Hosted agents are an excellent example of this concept in practice, and one that is hard to avoid using regularly, however vexatious that may be. While the ability to use Microsoft Hosted Build and Release Agents is greatly welcome (especially given there is no cost involved), it would be nice if these could be “opened up” to allow additional Agent machine tailoring for specific situations. I’m not going to hold my breath in this regard though – if I were in Microsoft’s shoes, I would shudder at the thought of allowing complete strangers the ability to deploy and run custom libraries on my critical LOB application. It’s probably asking for more trouble than the convenience it would provide 🙂

Cybersecurity should be an ongoing concern for any organisation, regardless of its size and complexity. This is chiefly for two essential business reasons:

  1. A cybersecurity incident or breach could, depending on its severity, result in significant reputational or financial damage if not adequately safeguarded against or handled correctly.
  2. When judging whether to award a contract to a business for a critical function, the awarding organisation will typically need to assuage themselves of any risk associated with placing this activity “outside the garden fence”. Cybersecurity is one aspect of assessing this risk, usually focused towards understanding what controls, policies and procedures exist within a business to ensure that sensitive data is handled appropriately.

Traditionally, to demonstrate sufficient competence in this area, businesses have looked to the ISO 27001 standard, which acts as a benchmark indicating that proper information security management systems are in place. Many routes are currently available towards achieving this accreditation. Its adoption can involve many complicated and highly integrated business changes which, for smaller organisations, may prove to be a significant challenge to put in place – leaving aside any cost implications.

In recognition of this fact and as a general acknowledgement towards the increased risk the “internet age” brings to supplier/customer relationships (particularly in the public sector), the UK Government launched the Cyber Essentials scheme back in June 2014. Aimed at organisations of any size, it promises to provide the necessary help and reassurance that your business/organisation has put the necessary steps in place to ‘…protect…against common online threats’, affording the opportunity to advertise this fact to all and sundry.

I’ve been through the process of successfully attaining the standard within organisations over the past few years, so I wanted to share some of my thoughts relating to the scheme, alongside some tips to help you along the way if you are contemplating adopting the scheme in the near future.

To begin with, I wanted to provide a detailed overview of the scheme, with some reasons why it may be something your organisation should consider.

Cyber Essentials is structured as a tiered scheme, with two certification levels available that differ significantly in their rigour:

  • Cyber Essentials: Sometimes referred to as “Cyber Essentials Basic”, this level of the standard is designed to assess your current IT infrastructure and internal processes via a self-assessment questionnaire. The answers are then reviewed and marked against the standard.
  • Cyber Essentials +: Using the answers provided during the Basic accreditation process, a more thorough assessment is carried out on your network by an external organisation, taking the form of a mini-penetration test of your infrastructure.

You can read in further detail on the scheme’s website regarding each level. It should be noted, even if it may go without saying, that you must be Cyber Essentials Basic accredited before you can apply for the + accreditation. Both tiers of the standard also require renewal annually.

Whether your organisation needs the scheme or not depends on your industry focus and, in particular, your appetite for working within the public sector. As noted on the GOV.UK website:

From 1 October 2014, Government requires all suppliers bidding for contracts involving the handling of certain sensitive and personal information to be certified against the Cyber Essentials scheme.

The requirement has also spread from there into some areas of the private sector. For example, I have seen tenders/contracts in recent times explicitly asking for Cyber Essentials + as a minimum requirement for any supplier. In short, you should be giving some thought to the scheme if you do not have anything equivalent in place and you have a desire to complete public sector work in the near future.

What You Can Expect

The exact process will differ depending on which accreditation body you work with, but the outline process remains the same for both levels of the scheme:

  • For the Basic, you will be asked to complete and return answers to the self-assessment question list. Responses will then be scored based on a Red, Amber, Green (RAG) scoring system, with full justifications for each score provided. Depending on the number and severity of issues involved, an opportunity to implement any required changes and resubmit your answers may be given at no additional cost; otherwise, failure will mean that you will have to apply to complete the questionnaire again for an additional fee. Turnaround for completed responses has been relatively quick in my experience, with the upshot being that you could potentially get the accreditation in place within a few weeks or less. For those who may be worried about the contents of the questionnaire, the good news is that you can download a sample question list at any time to evaluate your organisation’s readiness.
  • As hinted towards already, the + scheme is a lot more involved – and costly – to implement. You will be required to allow an information security consultant access to a representative sample of your IT network (including servers and PC/Mac endpoints), for both internal and external testing. The consultant will need to be given access to your premises to carry out this work, using a vulnerability assessment tool of their choosing. There will also be a requirement to evidence any system or process that you have attested towards as part of the Basic assessment (e.g. if you are using Microsoft Intune for Mobile Device Management, you may be required to export a report listing all supervised devices and demonstrate a currently supervised device). It is almost a certainty that there will be some remedial work that needs to take place resulting from any scan, most likely amounting to the installation of any missing security updates. Previously, you were granted a “reasonable” period to complete these actions; for 2018, the scheme now requires that all corrective actions are completed within 30 days of the on-site assessment taking place. Once this is done and evidenced accordingly, a final report will be sent, noting any additional observations, alongside confirmation of successfully attaining the + accreditation.

Costs will vary, but if you are paying any more than £300 for the Basic or £1,500 + VAT for the + accreditation, then I would suggest you shop around. 🙂

Is it worth it?

As there is a cost associated with all of this, there will need to be a reasonable business justification to warrant the spend. The simple fact that you may now be required to contract with organisations who mandate this standard being in place is all the justification you may need, especially if the contract is of sufficiently high value. Or it could be that you wish to start working within the public sector. In both scenarios, the adoption of the standard seems like a no-brainer if you can anticipate any such work being worth in excess of £2,000 each year.

Beyond this, when judging the value of something, it is often best to consider the impact or positive change that it can bring to the table. Indeed, in my experience, I have been able to drive forward significant IT infrastructure investments off the back of adopting the scheme. Which is great…but not so much from a cost standpoint. You therefore need to think carefully, based on what the standard is looking for, about any additional investment required to ensure compliance with it. For example, if your organisation currently does not have Multi-Factor Authentication in place for all users, you will need to look at the licence and time costs involved in rolling this out as part of your Cyber Essentials project. As mentioned already, ignorance is not an excuse, given that all questions are freely available for review, so you should ensure that this exercise is carried out before putting any money on the table.

The steps involved as part of the + assessment are, arguably, the best aspects of the scheme, given that you are getting an invaluable external perspective and vulnerability assessment at a rather healthy price point. Based on what I have witnessed, though, it would be good if this side of things was a little more in-depth, with additional auditing of answers from the Basic assessment, as I do feel that the scheme could be open to abuse as a consequence.

A Few Additional Pointers

  • The questions on the Basic self-assessment will generally be structured so that you can make a reasonable guess as to what the “right” answer should be. It is essential that the answers you give are reflective of current circumstances, especially if you wish to go for the + accreditation. If you find yourself lacking in specific areas, then go away and implement the necessary changes before submitting a completed self-assessment.
  • Regular patching cycles are a key theme that crop up throughout Cyber Essentials, so as a minimum step, I would highly recommend that you implement the required processes to address this in advance of any + assessment. It will save you some running around as a consequence.
  • Both assessments are also testing to ensure that you have a sufficiently robust Antivirus solution in place, particularly one that is automated to push out definition updates and – ideally – client updates when required. You should speak to your AV vendor before carrying out any Cyber Essentials assessment to verify that it supports this functionality, as it does help significantly in completing both the Basic and + assessment.
  • An obligatory Microsoft plug here, but a lot of what is available on Office 365 can add significant value when looking at Cyber Essentials:
    • Multi-Factor Authentication, as already discussed, will be needed for your user accounts.
    • Exchange Advanced Threat Protection is particularly useful during the + assessment in providing validation that your organisation protects against malicious file attachments.
    • Last but not least, a Microsoft 365 subscription facilitates a range of benefits, including, but not limited to, the latest available version of the Windows operating system, BitLocker drive encryption and policy management features.

If you are currently looking for assistance adopting the scheme, then please feel free to contact me, and I would be happy to discuss how to assist you towards attaining the standard.