A vital part of any DevOps automation activity is facilitating automatic builds of code projects on a regular cycle. In larger teams, this becomes particularly desirable for several reasons:

  • Ensures that builds do not contain any glaring code errors that would prevent a successful compile from taking place.
  • Enables builds to be triggered in a central, “master” location to which all developers regularly ship code.
  • Allows other automation tasks – such as running Unit Tests or deploying resources to development environment(s) – to be bolted on straightforwardly as additional steps in the build stage.

The great news is that, when working with either Visual Studio Team Services or Team Foundation Server (VSTS/TFS), the process of setting up the first automated build definition of your project is straightforward. All steps can be completed directly within the GUI and – although some of the detailed configuration settings can be a little daunting when reviewing them for the first time – the drag-and-drop interface means that you can quickly build consistent definitions that are easy to understand at a glance.

One such detailed configuration setting relates to your choice of Build and Release Agent. To carry out automated builds (and releases), VSTS/TFS requires a designated agent machine on which to execute all required tasks. There are two flavours of Build and Release Agent:

  • Microsoft Hosted: Fairly self-explanatory, this is the most logical choice if your requirements are standard – for example, a simple build/release definition for an MVC ASP.NET application. Microsoft provides a range of different Build and Release Agents, covering different operating system vendors and versions of Visual Studio.
  • Self-Hosted: In some cases, you may require access to highly bespoke modules or third-party applications to ensure that your build/release definitions complete successfully. A good example may be a non-Microsoft PowerShell cmdlet library. Or, it could be that you have strict business requirements around the storage of deployment resources. This is where Self-Hosted agents come into play. By installing the latest version of the Agent onto a computer of your choice – Windows, macOS or Linux – you can then use this machine as part of your builds/releases within both VSTS & TFS. You can also take this further by setting up as many different Agent machines as you like and then group these into a “pool”, thereby allowing concurrent jobs and enhanced scalability.
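
As a rough sketch of what the self-hosted route involves, the snippet below registers a Windows agent by running the configuration script shipped with the agent package. The account URL, Personal Access Token (PAT), pool and agent name are all placeholder values that you would swap for your own:

# Run from an elevated PowerShell prompt in the folder where the agent package was extracted.
# The URL, Personal Access Token (PAT), pool and agent name below are placeholders.
$pat = "your-personal-access-token"
.\config.cmd --unattended `
    --url "https://youraccount.visualstudio.com" `
    --auth pat --token $pat `
    --pool "Default" --agent "BUILD-AGENT-01" `
    --runAsService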

The necessary trade-off when using Self-Hosted agents is that you must manage and maintain the machine yourself – for example, you will need to install a valid version of Visual Studio and SQL Server Data Tools if you wish to build SQL Server Database projects. What’s more, if issues start to occur, you are on your own (to a large extent) when diagnosing and resolving them. One such problem you may find is with permissions on the build agent, with variants of the following error cropping up from time to time during your builds:

The error will most likely make an appearance if your Build and Release Agent goes offline or a build is interrupted due to an issue on the machine itself, leaving behind files that were created mid-flight within the VSTS working directory. When VSTS/TFS then attempts a new build and tries to write to or recreate files that already exist, it fails, and the above error is displayed. I have observed that, even if the execution account on the Build Agent machine has sufficient privileges to overwrite files in the directory, you will still run into this issue. The best resolution I have found – in all cases to date – is to log in to the agent machine manually, navigate to the affected directory/file (in this example, C:\VSTS\_work\SourceRootMapping\5dc5253c-785c-4de1-b722-e936d359879c\13\SourceFolder.json) and delete the file/folder in question. Removing the offending items effectively gives a “clean slate” for the next Build definition execution, which should then complete without issue.
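
If you would rather script this clean-up than hunt around the file system manually, a short PowerShell snippet along the following lines will do the job on the agent machine. The path shown is the one from the example above, so adjust it to match whatever file or folder your own error message reports:

# Remove the stale item left behind by the interrupted build.
# Adjust the path to match the file/folder referenced in your own error message.
$staleItem = "C:\VSTS\_work\SourceRootMapping\5dc5253c-785c-4de1-b722-e936d359879c\13\SourceFolder.json"
if (Test-Path $staleItem) {
    Remove-Item $staleItem -Recurse -Force
}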

We are regularly told these days of the numerous benefits that “going serverless” can bring to the table, including, but not limited to, reduced management overhead, reduced cost of ownership and faster adoption of newer technology. The great challenge with all of this is that, because no two businesses are typically the same, there is often a requirement to boot up a Virtual Machine and run a specified app within a full server environment, so that we can achieve the level of functionality required for our business scenario. Self-Hosted agents are an excellent example of this concept in practice, and one that is hard to avoid using regularly, however vexatious that may be. While the ability to use Microsoft Hosted Build and Release Agents is greatly welcome (especially given there is no cost involved), it would be nice to see this “opened up” to allow additional tailoring of the Agent machines for specific situations. I’m not going to hold my breath in this regard though – if I were in Microsoft’s shoes, I would shudder at the thought of allowing complete strangers the ability to deploy and run custom libraries on my critical LOB application. It’s probably asking for more trouble than the convenience it would provide 🙂

Cybersecurity should be an ongoing concern for any organisation, regardless of its size and complexity. This is chiefly for two essential business reasons:

  1. A cybersecurity incident or breach could, depending on its severity, result in significant reputational or financial damage if not adequately safeguarded against or handled correctly.
  2. When judging whether to award a contract to a business for a critical function, the awarding organisation will typically need to satisfy itself about any risk associated with placing this activity “outside the garden fence”. Cybersecurity is one aspect of assessing this risk, usually focused on understanding what controls, policies and procedures exist within a business to ensure that sensitive data is handled appropriately.

Traditionally, to demonstrate sufficient competence in this area, the ISO 27001 standard acts as a benchmark to indicate that proper information security management systems are in place within a business. Many routes are currently available towards achieving this accreditation. Its adoption can involve many complicated and highly integrated business changes which, for smaller organisations, may prove to be a significant challenge to put in place – laying aside any cost implications.

In recognition of this fact and as a general acknowledgement of the increased risk the “internet age” brings to supplier/customer relationships (particularly in the public sector), the UK Government launched the Cyber Essentials scheme back in June 2014. Aimed at organisations of any size, it promises to provide the help and reassurance that your business/organisation has put the necessary steps in place to ‘…protect…against common online threats’, affording the opportunity to advertise this fact to all and sundry.

I’ve been through the process of successfully attaining the standard within organisations over the past few years, so I wanted to share some of my thoughts relating to the scheme, alongside some tips to help you along the way if you are contemplating adoption in the near future.

To begin with, I wanted to provide a detailed overview of the scheme, with some reasons why it may be something your organisation should consider.

Cyber Essentials is structured as a tiered scheme, with two certification levels available, which differ significantly in their level of rigorousness:

  • Cyber Essentials: Sometimes referred to as “Cyber Essentials Basic”, this level of the standard is designed to assess your current IT infrastructure and internal processes, via a self-assessment questionnaire. The answers are then reviewed and marked against the standard.
  • Cyber Essentials +: Using the answers provided during the Basic accreditation process, a more thorough assessment is carried out on your network by an external organisation, taking the form of a mini-penetration test of your infrastructure.

You can read about each level in further detail on the scheme’s website. It should be noted, even if it may go without saying, that you must be Cyber Essentials Basic accredited before you can apply for the + accreditation. Both tiers of the standard also require annual renewal.

Whether your organisation needs the scheme or not depends on your industry focus and, in particular, your appetite for working within the public sector. As noted on the GOV.UK website:

From 1 October 2014, Government requires all suppliers bidding for contracts involving the handling of certain sensitive and personal information to be certified against the Cyber Essentials scheme.

The requirement has also spread from there into some areas of the private sector. For example, I have seen tenders/contracts in recent times explicitly asking for Cyber Essentials + as a minimum requirement for any suppliers. In short, you should be giving some thought towards the scheme if you do not have anything equivalent in place and have a desire to complete public sector work in the very near future.

What You Can Expect

The exact process will differ depending on which accreditation body you work with, but the outline process remains the same for both levels of the scheme:

  • For the Basic, you will be asked to complete and return answers to the self-assessment question list. Responses will then be scored based on a Red, Amber, Green (RAG) scoring system, with full justifications for each score provided. Depending on the number and severity of issues involved, an opportunity to implement any required changes and resubmit your answers may be given at no additional cost; otherwise, failure will mean that you will have to apply to complete the questionnaire again for an additional fee. Turnaround for completed responses has been relatively quick in my experience, with the upshot being that you could potentially get the accreditation in place within a few weeks or less. For those who may be worried about the contents of the questionnaire, the good news is that you can download a sample question list at any time to evaluate your organisation’s readiness.
  • As hinted towards already, the + scheme is a lot more involved – and costly – to implement. You will be required to allow an information security consultant access to a representative sample of your IT network (including servers and PC/Mac endpoints), for both internal and external testing. The consultant will need to be given access to your premises to carry out this work, using a vulnerability assessment tool of their choosing. There will also be a requirement to evidence any system or process that you have attested to as part of the Basic assessment (e.g. if you are using Microsoft Intune for Mobile Device Management, you may be required to export a report listing all supervised devices and demonstrate a currently supervised device). It is almost a certainty that there will be some remedial work arising from the scan, most likely amounting to the installation of any missing security updates. Previously, you were granted a “reasonable” period to complete these actions; for 2018, the scheme now requires that all corrective actions are completed within 30 days of the on-site assessment taking place. Once this is done and evidenced accordingly, a final report will be sent, noting any additional observations, alongside confirmation of successfully attaining the + accreditation.

Costs will vary, but if you are paying any more than £300 for the Basic or £1,500 + VAT for the + accreditation, then I would suggest you shop around. 🙂

Is it worth it?

As there is a cost associated with all of this, there will need to be a reasonable business justification to warrant the spend. The simple fact that you may now be required to contract with organisations who mandate this standard being in place is all the justification you may need, especially if the contract is of sufficiently high value. Or it could be that you wish to start working within the public sector. In both scenarios, the adoption of the standard seems like a no-brainer option if you can anticipate any work to be worth in excess of £2,000 each year.

Beyond this, when judging the value of something, it is often best to consider the impact or positive change that it can bring to the table. Indeed, in my experience, I have been able to drive forward significant IT infrastructure investments off the back of adopting the scheme. Which is great…but not so much from a cost standpoint. You, therefore, need to think carefully, based on what the standard is looking for, about any additional investment required to ensure compliance with it. For example, if your organisation currently does not have Multi-Factor Authentication in place for all users, you will need to look at the license and time costs involved in rolling this out as part of your Cyber Essentials project. As mentioned already, ignorance is not an excuse, given that all questions are freely available for review, so you should ensure that this exercise is carried out before putting any money on the table.

The steps involved as part of the + assessment are, arguably, the best aspects of the scheme, given that you are getting an invaluable external perspective and vulnerability assessment at a rather healthy price point. Based on what I have witnessed, though, it would be good if this side of things was a little more in-depth, with additional auditing of answers from the Basic assessment, as I do feel the scheme could otherwise be open to abuse.

A Few Additional Pointers

  • The questions on the Basic self-assessment will generally be structured so that you can make a reasonable guess as to what the “right” answer should be. It is essential that the answers you give are reflective of current circumstances, especially if you wish to go for the + accreditation. If you find yourself lacking in specific areas, then go away and implement the necessary changes before submitting a completed self-assessment.
  • Regular patching cycles are a key theme that crop up throughout Cyber Essentials, so as a minimum step, I would highly recommend that you implement the required processes to address this in advance of any + assessment. It will save you some running around as a consequence.
  • Both assessments are also testing to ensure that you have a sufficiently robust Antivirus solution in place, particularly one that is automated to push out definition updates and – ideally – client updates when required. You should speak to your AV vendor before carrying out any Cyber Essentials assessment to verify that it supports this functionality, as it does help significantly in completing both the Basic and + assessment.
  • An obligatory Microsoft plug here, but a lot of what is available on Office 365 can add significant value when looking at Cyber Essentials:
    • Multi-Factor Authentication, as already discussed, will be needed for your user accounts.
    • Exchange Advanced Threat Protection is particularly useful during the + assessment in providing validation that your organisation protects against malicious file attachments.
    • Last but not least, a Microsoft 365 subscription facilitates a range of benefits, including, but not limited to, the latest available version of a Windows operating system, BitLocker drive encryption and policy management features.

If you are currently looking for assistance adopting the scheme, then please feel free to contact me, and I would be happy to discuss how I can help you attain the standard.

UPDATE 02/09/2018: It turns out that there is a far better way of fixing this problem. Please click here to find out more.

I thought I was losing my mind the other day. This feeling can be a general occurrence in the world of IT, when something completely random and unexplainable happens – all the more so when you have a vivid recollection of something behaving in a particular way. In this specific case, a colleague was asking why they could no longer access the list of Workflows set up within a version 8.2 Dynamics 365 Customer Engagement (D365CE) Online instance via the Settings area of the system. Longstanding CRM or D365CE professionals will recall that this has been a mainstay of the application since Dynamics CRM 2015, accessible via the Settings -> Processes group Sitemap area:

Suffice to say, when I logged on to the affected instance, I was thoroughly stumped, as this area had indeed vanished entirely:

I asked around the relatively small pool of colleagues who a) had access to this instance and b) had knowledge of modifying the sitemap area (more on this shortly). The short answer, I discovered, was that no one had any clue as to why this area had suddenly vanished. It was then that I came upon the following Dynamics 365 Community forum post, which seemed to confirm my initial suspicions; namely, that something must have happened behind the scenes with Microsoft or as part of an update that removed the Processes area from the SiteMap. Based on the timings of the posts, this would appear to be a relatively recent phenomenon and one that can be straightforwardly fixed…if you know how to. 😉

For those who are unfamiliar with how SiteMaps work within the application, these are effectively XML files that sit behind the scenes, defining how the navigation components in CRM/D365CE operate. They tell the application which of the various Entities, Settings, Dashboards and other custom solution elements need to be displayed to end users. The great thing is that this XML can be readily exported from the application and modified to suit a wide range of business scenarios, such as:

  • Only make a specific SiteMap area available to users who are part of the Sales Manager Security Role.
  • Override the default label for the Leads SiteMap area to read Sales Prospect instead.
  • Link to external applications, websites or custom developed Web Resources.

What this all means is that there is a way to fix the issue described earlier in the post and, even better, the steps involved are very straightforward. This is all helped by quite possibly the best application that all D365CE professionals should have within their arsenal – the XrmToolBox. With the help of a specific component that this solution provides, alongside a reliable text editor program, the potentially laborious business of fiddling around with XML files and the whole export/import process becomes streamlined, so that anybody can achieve wizard-like ability in tailoring the application’s SiteMap. With all this in mind, let’s take a look at how to fix the above issue, step by step:

  1. Download and run XrmToolBox and select the SiteMap Editor app, logging into your CRM/D365CE instance when prompted:

After logging in, you should be greeted with a screen similar to the below:

  2. Click on the Load SiteMap button to load the SiteMap definition for the instance you are connected to. Once loaded, click on the Save SiteMap button, saving the file with an appropriate name in an accessible location on your local computer.
  3. Open the file using your chosen text editor, applying any relevant formatting settings to assist you in the steps that follow. Use the Find function (CTRL + F) to find the Group with the node value of Customizations. It should look similar to the image below, with the Group System_Setting specified as the next one after it:

  4. Copy and paste the following text just after the </Group> node (i.e. Line 415):
<Group Id="ProcessCenter" IsProfile="false">
    <Titles>
        <Title LCID="1033" Title="Processes" />
    </Titles>
    <SubArea Entity="workflow" GetStartedPanePath="Workflows_Web_User_Visor.html" GetStartedPanePathAdmin="Workflows_Web_Admin_Visor.html" GetStartedPanePathAdminOutlook="Workflows_Outlook_Admin_Visor.html" GetStartedPanePathOutlook="Workflows_Outlook_User_Visor.html" Id="nav_workflow" AvailableOffline="false" PassParams="false">
        <Titles>
            <Title LCID="1033" Title="Workflows" />
        </Titles>
    </SubArea>
</Group>

It should resemble the below if done correctly:

  5. Save a copy of your updated SiteMap XML file and go back to the XrmToolBox, selecting the Open SiteMap button. This will let you import the modified XML file back into the Toolbox, ready for uploading back onto CRM/D365CE. At this stage, you can verify the SiteMap structure of the node by expanding the appropriate area within the main SiteMap window:

When you are ready, click on the Update SiteMap button and wait until the changes are uploaded/published into the application. You can then log onto CRM/D365CE to verify that the new area has appeared successfully. Remember when I said to save a copy of the SiteMap XML? At this stage, if the application throws an error, then you can follow the steps above to reimport the original SiteMap as it was before the change, allowing you to diagnose any issues with the XML safely.
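
As a small extra safety net, it is also worth checking that your edited file is still well-formed XML before importing it, since a stray or unclosed tag is the most common reason for a SiteMap update failing. A quick, illustrative way of doing this (the file path is just an example) is to let PowerShell attempt to parse the file:

# Attempt to parse the edited SiteMap; a malformed file will throw an error here
# rather than during the import/publish step. Adjust the path to suit your saved file.
try {
    [xml](Get-Content -Path "C:\Temp\SiteMap.xml" -Raw) | Out-Null
    Write-Host "SiteMap XML is well-formed."
}
catch {
    Write-Host "SiteMap XML is invalid: $($_.Exception.Message)"
}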

It is still a bit of a mystery precisely what caused the original SiteMap area for Processes to go walkies. The evidence would suggest that some change by Microsoft forced its removal and that this occurred not necessarily as part of a major version update (the instance in our scenario has not been updated to a major release for 18 months at least, and this area was definitely there at some stage last year). One of the accepted truths with any cloud CRM system is that you are at the mercy of the solution vendor, ultimately, if they decide to modify things in the background with little or no notice. The great benefit in respect of this situation is that, when you consider the vast array of customisation and development options afforded to us, CRM/D365CE can be very quickly tweaked to resolve cases like this, and you do not find yourself stuck operating a business system where your bespoke development options are severely curtailed.

SQL Server Integration Services (SSIS) package execution can always throw up a few spanners, particularly when it comes to the task of deploying packages out to a SQL Server SSISDB catalog – principally, a specialised database for the storage of .dtsx packages, execution settings and other handy profile info to assist with automation. Problems can generally start creeping in if you decide to utilise non-standard connectors for your package data sources. For example, instead of employing the more oft-utilised Flat File Connection Manager for .csv file interaction, there may be a requirement to use the Excel Connection Manager instead. While I would generally favour the former Connection Manager where possible, the need to handle .xlsx file inputs (and to output into this file format) comes up more often than you might think. It is, therefore, always necessary to consider the impact that deploying out what I would term a “non-standard Connection Manager” (i.e. a non-Flat File Connection Manager) can have for your package release. Further, you should give some serious thought towards any appropriate steps that may need to be taken within your Production environment to ensure a successful deployment.

With all of this in mind, you may encounter the following error message when deploying out a package that utilises the ADO.NET Connector for MySQL – a convenient driver released by Oracle that lets you connect straightforwardly with MySQL Server instances, à la the default SQL Server ADO.NET connector:

Error: Microsoft.SqlServer.Dts.Runtime.DtsCouldNotCreateManagedConnectionException: Could not create a managed connection manager. at Microsoft.SqlServer.Dts.Runtime.ManagedHelper.GetManagedConnection

Specifically, this error will appear when first attempting to execute your package within your Production environment. The good news is that the reason for this error – and its resolution – can be easily explained and, with minimal effort, resolved.

The reason why this error may be happening is that the appropriate ADO.NET MySQL driver is missing from your target SSISDB server. There is no mechanism for the dependent components to be transported as part of deploying a package to a catalog, meaning that we have to resort to downloading and installing the appropriate driver on the server that is executing the packages to resolve the error. Sometimes, as part of long development cycles, this critical step can be overlooked by the package developer. Or, it could be that the individual/team responsible for managing deployments is not well-briefed ahead of time on any additional requirements or dependencies needed as part of a release.
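
A quick way to see whether the target server already has the driver available, offered purely as an illustrative sketch, is to list the ADO.NET providers that .NET knows about on that machine and look for the MySQL entry:

# List the ADO.NET data providers registered on this server; a correctly installed
# MySQL Connector/NET should appear with the invariant name "MySql.Data.MySqlClient".
[System.Data.Common.DbProviderFactories]::GetFactoryClasses() |
    Select-Object Name, InvariantName |
    Format-Table -AutoSize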

For this particular example, getting things resolved is as simple as downloading and installing, onto the SSISDB Server, the latest version of the MySQL Connector/NET drivers, which can be found at the link below:

MySQL Connector/NET 8.0

If you find yourself in a similar situation involving a different Data Connector, then your best bet is to interrogate the package in question further and identify the appropriate drivers that are needed.

Now, the key thing to remember about all of this is that the driver versions on the client development machine and the SSISDB server need to be precisely the same. Otherwise, you will more than likely get another error message generated on package execution, resembling this:

Could not load file or assembly ‘MySql.Data, Version=6.10.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d’ or one of its dependencies. The located assembly’s manifest definition does not match the assembly reference.

In that case, you will need to resolve the version conflict, ideally by ensuring that both machines are running the latest version of the corresponding driver. An uninstall and server reboot could be necessary at this juncture, so be sure to tread cautiously.
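
To confirm exactly what is installed where, the snippet below reports every version of MySql.Data present on a machine; it is a rough sketch that assumes the driver sits in the standard .NET 4 Global Assembly Cache location. Running it on both the development machine and the SSISDB server makes any mismatch obvious:

# Report every version of the MySql.Data assembly present in the .NET 4 GAC on this machine.
# If nothing is returned, the driver may not be installed (or was installed elsewhere).
Get-ChildItem "$env:windir\Microsoft.NET\assembly\GAC_MSIL\MySql.Data" -Recurse -Filter "MySql.Data.dll" -ErrorAction SilentlyContinue |
    ForEach-Object { [System.Reflection.AssemblyName]::GetAssemblyName($_.FullName).Version }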

SSIS development can often feel like a protracted, but ultimately worthwhile, process. With this in mind, it is natural to expect some bumps in the road and for potentially essential steps to be overlooked, particularly in larger organisations or for more complex deployments. Putting appropriate thought towards release management notes and even dedicated testing environments for deployments can help to mitigate the problem that this post references, ensuring a smooth pathway towards a prosperous, error-free release 🙂