I don’t typically stray too far from Microsoft technology areas on this blog but, having experienced this particular issue at the coalface and being acutely aware of the popularity of the WordPress platform among bloggers, I thought I’d write a dedicated post to help spread awareness. For those who are in a hurry…

TL;DR VERSION: IF YOU ARE USING THE WP GDPR COMPLIANCE PLUGIN ON YOUR WORDPRESS WEBSITE, UPDATE IT ASAP AND CHECK YOUR WORDPRESS INSTANCE FOR ANY NEW/SUSPICIOUS USER ACCOUNTS; IF ANY EXIST, THEN YOUR WEBSITE HAS BEEN HACKED. IMMEDIATELY TAKE YOUR SITE OFFLINE AND RESTORE FROM BACKUP, REMOVING AND THEN REINSTALLING THE ABOVE PLUGIN MANUALLY.

When it comes to using WordPress as your blogging platform of choice, the journey from conception to fully working blog can be relatively smooth. The ease of this journey is due, in no small part, to the vast variety of custom extensions – Plugins – available to end users. These can help to address common requirements, such as adding Header/Footer scripts to your website, integrating your website with tools such as Google reCAPTCHA and even transforming WordPress into a fully featured e-commerce site. The degree of personal autonomy this places in your hands when building out your web presence is truly staggering, and credit is also due to the WordPress project itself for its regular performance, feature and security release cycles. All of this has meant that the product has grown in popularity and adoption over time.

Regrettably, the application’s greatest strength is also its most critical weakness. WordPress is by far the most highly targeted application on the web today by hackers and malicious users. The latest CVE database results for the Content Management System (CMS) prove this point rather definitively but do not explain one of the most common reasons why WordPress is such a major target – namely, that most WordPress deployments are not subject to regular patching cycles. Plugins are by and large more susceptible to this, and any organisation which does not implement a monthly patching cycle for their WordPress site is significantly heightening their risk of being attacked. Even with all of this in place, you are not immune, as what follows demonstrates rather clearly:

On the 6th of November, a plugin designed to assist administrators in meeting their obligations under GDPR vanished from the WordPress Plugin store due to a “security flaw”. The developers deserve full credit and recognition here; within a short space of time, they had released a new version of the plugin with the flaw addressed. Hackers, however, were quick off the mark with this particular vulnerability. On the afternoon of Thursday 8th November, I was alerted to the following actions, which had been carried out on numerous WordPress websites that I look after:

  • The Anyone can register site setting was forcibly enabled, having been disabled previously.
  • Administrator became the default role for all new user accounts, having been set to Subscriber previously.
  • Next, a new user account – with a name matching or similar to “trollherten” – was created with full administrator privileges. Depending on your WordPress site configuration, an email notification was then sent out for the new account, exposing the full details of your website URL and giving the attacker the ability to log in to your site. A quick way of checking for any such rogue accounts is sketched below.
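
If you want to verify quickly whether any rogue accounts of this kind exist, the following query can be run against your WordPress database, via phpMyAdmin or the mysql command line. This is a minimal sketch only, assuming the default wp_ table prefix; any recently registered administrator account that you do not recognise is a red flag:

-- List all administrator accounts, newest first, so that rogue additions stand out.
SELECT u.user_login, u.user_email, u.user_registered
FROM wp_users u
INNER JOIN wp_usermeta m ON m.user_id = u.ID
WHERE m.meta_key = 'wp_capabilities'
AND m.meta_value LIKE '%administrator%'
ORDER BY u.user_registered DESC;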

From this point forward, the attacker has the keys to the kingdom and can do anything they want on your WordPress website – including, but not limited to, blocking access for other users, installing malicious code/themes or accessing/downloading the entire contents of the site. The success of the attack lies in its speed, given the very brief window between the disclosure of the plugin flaw and the first exploitation attempts, and in the relative straightforwardness of automating all of the steps outlined above. For those interested in the technical details of the attack, WordFence has published a great article that goes into further depth on the subject.

So what should I do if my WordPress site is using this plugin or there is evidence of a hacking attempt?

Here is my suggested action list, in priority order, for immediate action:

  • Take your website offline, either by switching off your web server or, if you are using Azure App Service, by taking advantage of some nifty options at your disposal to restrict access to your site to trusted hosts only.
  • Restore your website from the last known good backup.
  • Update the WP GDPR Compliance plugin to the latest version.
  • As a precaution, change the credentials for all of the following on the website:
    • User Accounts
    • Web Server FTP
    • Any linked/related service to your site that stores privileged information, such as a password, authorisation key etc.
  • Review the following points and put in the appropriate controls, where necessary, to mitigate the risk of a future attack:
    • Patching Cycle for WordPress, Plugins & Themes: You should ideally be carrying out regular patching of all of these components, at least once per month. There are also plugins available that can email you when a new update is available which, in this particular scenario, would have helped to identify the faulty plugin more speedily.
    • Document your Plugins/Themes: You should have a full list of all plugins deployed on your WordPress website(s) stored somewhere, which then forms the basis of regular reviews. Any plugin that has a published vulnerability that has not been addressed by the developer should be removed from your website immediately.
    • Restrict access to the WordPress Admin Centre: .htaccess rules for Apache or web.config changes for IIS can restrict specific URLs on a site to an approved list of hosts. This way, even if an exploit like the one described in this post takes place, the attacker will be blocked when trying to log in to your WordPress backend. An example Apache rule is sketched after this list.
    • Review Backup Schedule: Typically, I’ve found that incidents like this can immediately demonstrate flaws in any backup procedure that is in place for a website – either in not being regular enough or, in the worst case, not taking place at all. You should ideally be performing daily backups of your WordPress website(s). Again, Azure makes this particularly easy to implement, but you can also take advantage of services such as VaultPress, which take all the hassle out of this for a small monthly price.
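
To illustrate the Apache route mentioned above, here is a minimal .htaccess sketch that locks the WordPress login page down to a single trusted address; 203.0.113.10 is a placeholder, so substitute your own static IP (the IIS equivalent can be achieved with the <ipSecurity> element in web.config):

# Deny access to wp-login.php for everyone except a trusted host.
<Files wp-login.php>
Order Deny,Allow
Deny from all
Allow from 203.0.113.10
</Files>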

Conclusions or Wot I Think

Attacks of the nature described in this post are an unfortunate byproduct of the internet age, and the evidence relating to this particular attack shows that individuals and small businesses are all too often the casualties in today’s virtual conflicts on the world stage. Constant vigilance is the best defence you can have, more so given the speedy exploitation of this particular flaw. There also has to be a frank admission that attacks like this are not 100% preventable; all necessary attention, therefore, should be drawn towards risk reduction, with the ultimate aim being to put in place as many steps as possible to avoid having an obvious target painted on your back. I hope that this post has been useful in making you aware of this issue (if you weren’t already) and in offering some practical tips on how to resolve it.

The life of a Dynamics CRM/Dynamics 365 for Customer Engagement (CRM/D365CE) professional is one of continual learning across the many different technology areas within the core “stack” of the Business Applications platform. Microsoft has underlined this in no uncertain terms recently via the launch of the Power Platform offering, making it clear that cross-skilling across the various services associated with the Common Data Service is no longer optional, should you wish to build out a comprehensive business solution. I would not be surprised in the slightest if the standard SSRS, Chart and Dashboard options available within CRM/D365CE become deprecated soon, with Power BI becoming the preferred option for any reporting requirements involving the application. With this in mind, knowledge of Power BI becomes a critical requirement when developing and managing these applications, even more so when you consider how it is undoubtedly a core part of Microsoft’s product lineup; epitomised most clearly by the release of the Microsoft Certified Solutions Associate (MCSA) certification in BI Reporting earlier this year.

I have been doing a lot of hands-on and strategic work with Power BI this past year; it is a product for which I have a lot of affection and one with numerous business uses. As a consequence, I am now working towards the BI Reporting MCSA, having recently passed Exam 70-779: Analyzing and Visualizing Data with Microsoft Excel. As part of this week’s post, I wanted to share some general, non-NDA-breaching advice for those who are contemplating going for the exam. I hope you find it useful 🙂

Power BI Experience is Relevant

For an exam focused purely on the Excel side of things, a lot of the areas tested have a significant amount of crossover with Power BI, such as:

  • Connecting to data sources via Power Query in Excel, an experience which is an almost carbon copy of working with Power Query within Power BI.
  • Although working with the Excel Data Model, for me at least, represented a significant learning curve when revising, it does have a lot of cross-functionality with Power BI, specifically when it comes to how DAX fits into the whole equation.
  • Power BI is even a tested component for this exam. You should, therefore, expect to know how to upload Excel workbooks/Data Models into Power BI and be thoroughly familiar with the Power BI Publisher for Excel.

Any previous knowledge of working with Power BI is going to give you a jet boost when it comes to tackling this exam, but do not take this for granted. There are some significant differences between the two products (epitomised by the fact that Excel and Power BI, in theory, address two distinctly different business requirements), and this is something that you will need to understand and potentially identify during the exam. Nevertheless, specific, detailed knowledge of the inner workings of Power BI is never going to be a disadvantage to you.

Learn *a lot* of DAX

DAX, or Data Analysis Expressions, is hugely important for this exam, and for 70-778 as well. While you will not necessarily need to memorise every single DAX expression available to pass the exam (although you are welcome to try!), you should be in a position to recognise the structure of the more common DAX functions. Your ideal DAX study areas before the exam may include, but are not limited to, the core aggregation, filter, relationship and time intelligence functions.

In particular, focus should be directed towards the syntax of these functions, to the extent that you can memorise example usage scenarios involving them.
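
To give a flavour of the kind of pattern worth committing to memory, here is a generic sketch of two measure definitions; the Sales table and its columns are illustrative placeholders rather than anything lifted from the exam:

// An explicit measure: iterate over the Sales table, multiplying quantity by price per row.
Total Sales := SUMX ( Sales, Sales[Quantity] * Sales[Unit Price] )

// Re-use the measure above within a modified filter context.
West Region Sales := CALCULATE ( [Total Sales], Sales[Region] = "West" )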

Get the exam book

As with all exams, Microsoft has released an accompanying book that serves as a handy revision guide and reference point for self-study. On balance, I feel this is one of the better exam reference books that I have come across, but beware of the occasional errata and, given the frequency of changes these days thanks to the regular Office 365 release cycle, be sure to supplement your learning with proper online cross-checking.

Set up a dedicated lab environment

This task can be accomplished alongside working through the exercises in the exam book referenced above but, as with any exam, hands-on experience with the product is the best way of getting a passing grade. Download a copy of SQL Server Developer Edition, restore one of the sample databases made available by Microsoft, get a copy of Excel 2016 and, hey presto, you now have a working lab environment and dataset that you can interact with to your heart’s content.
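
As a rough illustration of the database piece, the restore can be scripted in a couple of lines of PowerShell. This is a sketch only: it assumes the SqlServer module is installed, that you are using a default local instance, and that the sample backup (WideWorldImporters is used here as an example) has been downloaded to the path shown:

#Restore a Microsoft sample database to a local SQL Server Developer instance
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
RESTORE DATABASE WideWorldImporters
FROM DISK = N'C:\Backups\WideWorldImporters-Full.bak'
WITH RECOVERY;
"@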

Pivot yourself toward greater Excel knowledge

Almost a quarter of the exam tests candidates on the broad range of PivotTable/PivotChart visualisations made available within Excel. With this in mind, some very detailed, specific knowledge is required in each of the following areas to stand a good chance of a passing grade:

  • PivotTables: How they are structured, how to modify the displaying of Totals/Subtotals, changing their layout configuration, filtering (Slicers, Timelines etc.), auto-refresh options, aggregations/summarising data and the difference between Implicit and Explicit Measures.
  • PivotCharts: What chart types are available in previous and newer versions of Excel (and which aren’t), the ideal usage scenario for each chart type, the different variants available for each chart type, the structure of a chart (Legend, Axis etc.) and the filtering and formatting options available for each chart.

Check out the relevant edX course

As a revision tool, I found the following edX course, which is free of charge to work through, of great assistance:

Analyzing and Visualizing Data with Excel

The course syllabus maps closely to the skills measured for the exam and represents a useful visual learning tool for self-study or as a means of quickly filling any knowledge gaps.

Conclusions or Wot I Think

It is always essential, within the IT space, to keep one eye over the garden fence to see what is happening in related technology areas. This simple act of “keeping up with the Joneses” helps to avoid surprises down the road and ensures that you can keep your skills relevant for the here and now. In case you didn’t realise already, Power BI is very much one of those things that traditional reporting analysts and CRM/D365CE professionals should be contemplating working with, now or in the immediate future. As well as being a dream to work with, it affords you the best opportunity to implement a reporting solution that will both excite and innovate end users. For me, it has allowed me to develop client relationships further once the solution is in place, as users increasingly ask us to bring other data sources into the fold. Whereas this may typically have resulted in a protracted and costly development cycle, Power BI takes all the hassle out of the process and lets us instead focus on creating the most engaging range of visualisations possible for the data in question. I would strongly urge any CRM/D365CE professional to start learning about Power BI when they can and, as the next logical step, to go for the BI Reporting MCSA.

The whole concept of audio conferencing – the ability for a diverse range of participants to dial into a central location for a meeting – is hardly new to the 21st century. Its prevalence, however, has undoubtedly grown sharply in the last 15-20 years, to the point where, to use an analogy, it feels very much like a DVD when compared to full video conferencing, à la Blu-Ray. When you also consider the widening proliferation of remote workers, globalisation and the meteoric rise of cloud computing, businesses suddenly find themselves having to answer the following questions:

  • How can I enable my colleagues to straightforwardly and dynamically collaborate across vast distances?
  • How do I address the challenges of implementing a scalable solution that meets any regulatory or contractual requirements for my organisation?
  • What is the most cost-effective route to ensuring I can have a genuinely international audio-conferencing experience?
  • How do I identify a solution that users can easily adopt, without introducing complex training requirements?

These questions are just a flavour of the sorts of things that organisations should be thinking about when identifying a suitable audio conferencing solution. There are a lot of great products on the market that address these needs; GoToMeeting and join.me represent natural choices for specific scenarios. But to provide a genuinely unified experience for existing IT deployments that rely on Skype for Business/Microsoft Teams, the audio conferencing add-on for Office 365 (also referred to as Skype for Business Online Audio Conferencing) may represent the more prudent choice. It ticks all of the boxes for the questions above, ensuring that users can continue utilising the tools they know and use every day, such as Microsoft Outlook and Office 365. Admittedly, though, the solution is undoubtedly geared more towards Enterprise deployments as opposed to SMBs, and it may, therefore, become too unwieldy a solution in the hands of smaller organisations.

I was recently involved in implementing an audio conferencing solution for Skype for Business Online, to satisfy the requirement for international dialling alluded to earlier. Having attended many audio conferences that utilise the service previously, I was familiar with the following experience when dialling in and – perhaps naively – assumed this to be the default configuration:

  1. User dials in and enters the meeting ID number.
  2. An audio cue is played for the conference leader, asking them to enter their leader code (this is optional).
  3. The user is asked to record their name and then press the * key.
  4. The meeting then starts automatically.

For most internal audio conferences (and even external ones), this process works well, particularly when, for example, the person organising the meeting is doing so on behalf of someone else and is unlikely to be dialling in themselves. However, I was surprised to learn that the actual, default experience is a little different:

  1. User dials in and enters the meeting ID number.
  2. An audio cue is played for the conference leader, asking them to enter their leader code (this is optional).
  3. The user is asked to record their name and then press the * key.
  4. If the leader has not yet dialled in, all other attendees sit in the lobby. The call does not start until the leader joins.

The issue does not relate to how the meeting organiser has configured their meeting; it occurs regardless of which option is chosen in the These people don’t have to wait in the lobby drop-down box in Outlook.

After some fruitless searching online, I eventually came across the following article, which clarified things for me:

Start an Audio Conference over the phone without a PIN in Skype for Business Online

Because this is a tenant-level configuration, there is a two-stage process involved in getting it reconfigured for existing Skype for Business Online Audio Conferencing deployments:

  • Set the AllowPSTNOnlyMeetingsByDefault setting to true on the tenant via a PowerShell cmdlet.
  • Configure the AllowPSTNOnlyMeetings setting to true for every user set up for Audio Conferencing, either within the Skype for Business Administration Centre or via PowerShell.

The second step could be incredibly long-winded to achieve via the Administration Centre route, as you have to go into each user’s audio conferencing settings and toggle the appropriate control, as indicated below:

For a significantly larger deployment, this could easily result in carpal tunnel syndrome and loss of sanity 😧. Fortunately, PowerShell can take away some of the woes involved. By logging into Skype for Business Online via this route, it is possible both to enable the AllowPSTNOnlyMeetingsByDefault setting at the tenant level and to enable it for all users who currently have the setting disabled. The complete script to carry out these steps is below:

#Standard login for S4B Online

Import-Module SkypeOnlineConnector
$userCredential = Get-Credential
$sfbSession = New-CsOnlineSession -Credential $userCredential
Import-PSSession $sfbSession

#Login script for MFA enabled accounts

Import-Module SkypeOnlineConnector
$sfbSession = New-CsOnlineSession
Import-PSSession $sfbSession

#Enable dial in without leader at the tenant level - will only apply to new users moving forward

Set-CsOnlineDialInConferencingTenantSettings -AllowPSTNOnlyMeetingsByDefault $true

#Get all current PSTN users

$userIDs = Get-CsOnlineDialInConferencingUserInfo | Where Provider -eq "Microsoft" | Select DisplayName, Identity

$userIDs | ForEach-Object -Process {
        
    #Then, check whether the AllowPstnOnlyMeetings is false

    $identity = $_.Identity

    $user = Get-CsOnlineDialInConferencingUser -Identity $identity.RawIdentity
    Write-Host "$($_.DisplayName) AllowPstnOnlyMeetings value equals $($user.AllowPstnOnlyMeetings)"
    if ($user.AllowPstnOnlyMeetings -eq $false) {
        
        #If so, then enable

        Set-CsOnlineDialInConferencingUser -Identity $identity.RawIdentity -AllowPSTNOnlyMeetings $true
        Write-Host "$($_.DisplayName) AllowPstnOnlyMeetings value changed to true!"
    }

    else {
        Write-Host "No action required for user account"$_.DisplayName 
    }
}

Some notes/comments before you execute it in your environment:

  • You should comment out whichever authentication snippet is not appropriate for your situation, depending on whether you have Multi-Factor Authentication enabled for your user account.
  • Somewhat annoyingly, there is no way (that I found) to extract a unique enough identifier for use with the Set-CsOnlineDialInConferencingUser cmdlet when obtaining the details of the user via the Get-CsOnlineDialInConferencingUser cmdlet. This is why the script first retrieves the complete LDAP string using the Get-CsOnlineDialInConferencingUserInfo cmdlet. Convoluted, I know, but it ensures that the script works correctly and avoids any issues that may arise from duplicate Display Names on the Office 365 tenant.

All being well, with very little modification, the above code can be utilised to enforce the setting across the board or for a specific subset of users, if required. It does seem strange that the option is turned off by default, but there are understandable reasons why it may be desirable to curate the whole meeting experience for attendees. If you are considering rolling out Skype for Business Online Audio Conferencing on your Office 365 tenant in the near future, then this represents one of many decisions you will have to make as part of the deployment. You should, therefore, think carefully and consult with your end users to determine their preferred experience; you can then choose to enable/disable the AllowPSTNOnlyMeetingsByDefault setting accordingly.

Towards the back end of last year, I discovered the joys and simplicity of Visual Studio Team Services (VSTS)/Azure DevOps. Regardless of what type of development workload you face, the service provides a whole range of features that can speed up development, automate important build/release tasks and assist with any testing workloads you may have. Microsoft has devoted a lot of attention to ensuring that both their traditional development tools/languages and newer ones, such as Azure Data Factory V2, are fully integrated with VSTS/Azure DevOps. And, even if you do find yourself firmly outside the Microsoft ecosystem, there is a whole range of connectors to enable you to, for example, leverage Jenkins automation tasks for an Amazon Web Services (AWS) deployment. As with a lot of things to do with Microsoft today, you could not have predicted such extensive third-party support for a core Microsoft application 10 years ago. 🙂

Automated release deployments are perhaps the most useful feature that you can leverage as part of VSTS/Azure DevOps. They address several business concerns that apply to organisations of any size:

  • Removes human intervention as part of repeatable business processes, reducing the risk of errors and streamlining internal processes.
  • Allows for clearly defined, auditable approval cycles for release approvals.
  • Provides developers with the flexibility for structuring deployments based on any predefined number of release environments.

You may be questioning at this stage just how complicated implementing such a process is, versus the expected benefits it can deliver. Fortunately, when it comes to Azure App Service deployments at least, there are predefined templates provided that should be suitable for most basic deployment scenarios:

This template will implement a single task to deploy to an Azure App Service resource. All you need to do is populate the required details for your subscription, App Service name etc. and you are ready to go! Optionally, if you are working with a Standard App Service Plan or above, you can take advantage of the slot functionality to stage your deployments before impacting any live instance. This can be set up by specifying the name of the slot you wish to deploy to as part of the Azure App Service Deploy task and then adding an Azure App Service Manage task to carry out the slot swap, with no coding required at any stage:

There may also be some additional options that need configuring as part of the Azure App Service Deploy task:

  • Publish using Web Deploy: I would recommend always leaving this enabled when deploying to Azure Web Apps, given the functionality afforded to us via the Kudu Web API.
  • Remove additional files at destination: Potentially not a safe option should you regularly need to interact with your App Service folder manually; in most cases, though, leaving this enabled ensures that your target environment remains consistent with your source project.
  • Exclude files from the App_Data folder: Depending on what your web application is doing, it’s possible that required data assisting your website will exist in this folder. Enabling this setting prevents those files from being removed when the previous setting is switched on.
  • Take App Offline: Slightly misleading in that, instead of stopping your App Service, it places a temporary file (app_offline.htm) in the root directory that tells all website visitors the application is offline. This temporary page will use one of the default templates that Azure provides for App Service error messages.

This last setting, in particular, can cause some problems when it comes to working with .NET Core web applications, with deployments failing because files at the destination are locked by the running application.

Note that the problem does not occur when working with a standard .NET Framework MVC application, which suggests it is a symptom specific to .NET Core. It also occurs when utilising slot deployments, as indicated above.

To get around this problem, we must address the underlying cause, namely the fact that web application files within the target location cannot be appropriately accessed or overwritten while they are actively in use by the application in question. The cleanest way of fixing this is to take your application entirely offline by stopping the App Service instance, with the apparent trade-off being downtime for your application. This scenario is where the slot deployment functionality goes from being a useful option to a wholly mandatory one. With this enabled (and the matching credit card limit to accommodate), it is possible to implement the following deployment process:

  • Create a staging slot on Azure for your App Service, with a name of your choosing
  • Structure a deployment task that carries out the following steps, in order:
    • Stop the staging slot on your chosen App Service instance.
    • Deploy your web application to the staging slot.
    • Start the staging slot on your chosen App Service instance.
    • (Optionally) Execute any required steps/logic that is necessary for the application (e.g. compile MVC application views, execute a WebJob etc.)
    • Perform a Swap Slot, promoting the staging slot to your live, production instance.

The below screenshot, derived from VSTS/Azure DevOps, shows how this task list should be structured:
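
For reference, the same sequence can also be reproduced outside of a release pipeline using the AzureRM PowerShell module. The below is a rough sketch only: the resource group, App Service and slot names are all placeholders, and the deployment step itself is omitted, as this is handled by the Azure App Service Deploy task:

#Assumes you have already run Login-AzureRmAccount

#Stop the staging slot, so that no application files are locked during deployment
Stop-AzureRmWebAppSlot -ResourceGroupName "MyResourceGroup" -Name "mywebapp" -Slot "staging"

#...deploy the application to the staging slot here, e.g. via Web Deploy...

#Start the staging slot back up again
Start-AzureRmWebAppSlot -ResourceGroupName "MyResourceGroup" -Name "mywebapp" -Slot "staging"

#Promote the staging slot to production via a swap
Switch-AzureRmWebAppSlot -ResourceGroupName "MyResourceGroup" -Name "mywebapp" -SourceSlotName "staging" -DestinationSlotName "production"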

Even with this in place, I would still recommend that you look at taking your web application “offline” in the loosest sense of the word; that is, giving visitors clear notice while an update takes place. The Take App Offline option is by far the easiest way of achieving this; for more tailored options, you would need to look at a dedicated landing page that users are redirected to during any update/maintenance cycle, providing the necessary notification.

Setting up your first deployment pipeline(s) can throw up all sorts of issues that lead you down the rabbit hole. While this can be discouraging and lead to wasted time, the end result after any perseverance is a highly scalable solution that can avoid many sleepless nights when it comes to managing application updates. And, as this example so neatly demonstrates, solutions to these problems often do not require a detailed coding route to implement; merely some drag & drop wizardry and some fiddling about with deployment task settings. I don’t know about you, but I am pretty happy with that state of affairs. 🙂

The importance of segregated deployment environments for any IT application cannot be overstated. Even if this only comprises a single test/User Acceptance Testing (UAT) environment, there are a plethora of benefits involved, which should easily outweigh any administrative or time effort required:

  • They provide a safe “sandbox” in which any functional or developmental changes can be carried out in sequence and verified against the intended outcome.
  • They enable administrators to carry out the above within regular business hours, without risk of disruption to live data or processes.
  • For systems that experience frequent updates/upgrades, these types of environments allow for major releases to be tested in advance of a production environment upgrade.

When it comes to Dynamics 365 Customer Engagement (D365CE), Microsoft has more recently realised the importance that dedicated test environments have for their online customers. That’s why, with the changes announced during the transition away from Dynamics CRM, a free Sandbox instance was granted with every subscription. The cost involved in provisioning these can start to add up significantly over time, so this change was and remains most welcome; particularly when it means that any excuse for not carrying out the type of testing highlighted above immediately gets thrown off the table.

Anyone who is presently working in the D365CE space will be keenly aware of the impending forced upgrade to version 9 of the application, which must take place within the next few months. Although this will doubtless cause problems for some organisations, it is understandable why Microsoft is enforcing this so stringently and, if managed correctly, it allows for a more seamless update experience in the future, getting new features into the hands of customers much faster. This change can only be a good thing, as I have argued previously on the blog. As a consequence, many businesses may currently have a version 9 Sandbox instance within their Office 365 tenant, which they are using to carry out the types of tests I have referenced already; typically, this involves testing custom-developed or ISV Managed/Unmanaged Solutions. One immediate issue you may find if you are working with solutions containing the Marketing List (list) entity is that your Solution suddenly stops importing successfully, with the following error messages:

The error message on the solution import is a little vague…

…and the brain starts hurting when we look at the underlying error text!

The problem relates to changes made to the Marketing List entity in version 9 of the application, specifically the IsVisibleInMobileClient Entity property. This assertion is confirmed by comparing the Value and CanBeChanged properties of this setting in both an 8.2 and a 9.0 instance, using the Metadata Browser tool included in the SDK:

Here’s the setting in Version 8.2…

…and how it appears now in Version 9.0
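
If you would rather not fire up the Metadata Browser, the same check can be performed via the Web API. The below is a sketch only, with $orgUrl and $token as placeholders for your instance URL and a valid OAuth bearer token, the acquisition of which is left out here:

#Check the IsVisibleInMobileClient property for the Marketing List (list) entity via the Web API
$orgUrl = "https://yourorg.crm.dynamics.com"
$token = "<access token>"
$response = Invoke-RestMethod -Method Get -Headers @{ Authorization = "Bearer $token" } `
    -Uri "$orgUrl/api/data/v9.0/EntityDefinitions(LogicalName='list')?`$select=IsVisibleInMobileClient"
$response.IsVisibleInMobileClient #Returns an object exposing the Value and CanBeChanged properties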

Microsoft has published a support article which lists the steps involved in resolving this for every new solution file that you find yourself having to ship between 8.2 and 9.0 environments. Be careful when following the suggested workaround steps, as modifications to the Solution file should always be considered a last-resort means of fixing issues and can end up causing you no end of hassle if applied incorrectly. There is also no way of fixing this issue at source, thanks to the CanBeChanged property being set to false (doing so may be a possible route of resolution if you are having the same problem with another Entity that has this value set to true, such as a custom entity). Although Microsoft promises a fix for this issue, I wouldn’t necessarily expect 8.2 environments to be specially patched to resolve this particular problem. Instead, I would take the fix to mean the forced upgrade of all 8.2 environments to version 9.0/9.1 within the next few months. Indeed, this would seem to be the most business-sensible decision, rather than prioritising a specific fix for an issue that is only going to affect a small number of deployments.