Earlier this year, the Business Applications team at Microsoft published a blog post titled Modernizing the way we update Dynamics 365, a significant article that anyone involved with Dynamics 365 Customer Engagement (D365CE) should take time to read through carefully. Indeed, as a direct consequence of the announcements contained in this post, you may now be receiving emails similar to the below if you are an administrator of a D365CE instance:

Changes to well-established processes can always produce a mixture of questions, confusion and, in some cases, frustration for IT teams. Once you have fully understood the broader context of where D365CE is going, and also the general sea change that has been occurring since Satya Nadella came to the helm of Microsoft, the modifications to the Update Policy are welcome and, arguably, necessary to ensure that D365CE users and administrators can take advantage of the different features available within a D365CE subscription today. For those who are still scratching their heads at all of this, what follows is a summary of the most significant changes announced, along with some additional observations from me on why it is important to embrace all these changes wholeheartedly.

Version 9 or Bust

Longstanding D365CE online customers will be used to the regular update cycles and the ability to defer significant application updates for a period. While this can be prudent for more complex deployments, it does potentially lead to additional overhead in the long term, particularly if Microsoft were ever to force this decision upon you. The well-established advice has always been to proactively manage your updates at your own pace, ideally targeting at least one major update a year. If you haven’t been doing this, then you may now be in for a particularly nasty shock. As mentioned in the article:

Since every customer will be updated on the continuous delivery schedule, your organization needs to update to the latest version if you are running an older version of Dynamics 365…For customers who are currently running older versions of Dynamics 365, we will continue to provide you with the ability to schedule an update to the latest version and want to make sure this effort is as seamless as possible through continuous improvements in our update engine…For Dynamics 365 (online) customer engagement applications, we sent update communications in May to all customers running v8.1 and have scheduled updates. Customers running v8.2 should plan to update to the latest version by January 31, 2019.

This point is reinforced in a much more explicit manner in the email above:

ACTION NEEDED: Schedule an update for your organization by August 16, 2018. The date for the update should be on or before January 31, 2019. You can find instructions on how to schedule and approve updates here.

If you do not schedule an update in the timeframe mentioned above, Microsoft will schedule an automatic update for your organization on August 17, 2018 and communicate the dates. The automatic update would take place during your normal maintenance window.

The implications should be clear, and it certainly seems that, in this scenario, Microsoft has decided to eliminate any degree of upgrade flexibility for its customers.

No Changes to Minor/Major Updates?

Again, if you are familiar with how D365CE Online operates, there are two flavours of updates:

  • Minor updates, to address bugs, performance and stability issues, are continually pushed out “behind the scenes”. You have no control over when and how these are applied, but they will always be carried out outside your region’s regular business hours. The Office 365 Administrator Portal is your go-to place to view any past or upcoming minor updates.
  • Major updates are generally referred to as Spring Wave or Fall Update releases. There have always been two of these each year, and administrators can choose when to apply them to a D365CE instance. These updates can generally take much longer to complete but will introduce significant new features.

Microsoft’s new Update Policy seems to leave this convention intact, with a noteworthy change highlighted below in bold:

We are transforming how we do service updates for Dynamics 365 (online). We will deliver two major releases per year – April and October – offering new capabilities and functionality. These updates will be backward compatible so your apps and customizations will continue to work post update. New features with major, disruptive changes to the user experience are off by default. This means administrators will be able to first test before enabling these features for their organization.

In addition to the two major updates, we will continue to deploy regular performance and reliability improvement updates throughout the year. We are phasing deployments over several weeks following safe deployment practices and monitoring updates closely for any issues.

Some additional detail around this will be welcome to determine its effectiveness, but I can imagine some parity with the Experimental Features area in PowerApps, which – contrary to the above – will often introduce new features that are left on by default. A derived version of this feature would, I think, work well in practice and hopefully streamline the process of testing new functionality without introducing it unintentionally into Production environments.

On-Premise Implications

One question that all of this may raise is around the on-premise version of the application, in particular for those who consume online subscriptions, but use their dual-usage rights to create an on-premise instance instead. This situation becomes more pressing when you consider the following excerpt from the refreshed Update Policy:

Dynamics 365 (Online) version 8.2 will be fully supported until January 31, 2019. Customers running version 8.2 should plan to update to the latest version prior to this date.

Now, the important thing to stress is the fact that the above quotation makes explicit reference to Online as opposed to on-premise. Also, when you check Microsoft’s product lifecycle page, you can see that Mainstream support for this product ends in January 2021. On-premise administrators can, I would suggest, breathe a sigh of relief for now, but I would urge you to contact Microsoft to clarify your support arrangements. As an organisation, I think you should also start seriously asking yourself the following questions:

  • Is an online, Software as a Service (SaaS) version of the application going to be easier to maintain compared with dedicated server environment(s)?
  • Is it possible to achieve all of your required functionality and business requirements using the Online version of the application?
  • Do you want to ensure you have the latest features exposed to you and can take advantage of Online-only functionality, such as Export to Excel Online?

If the answer to all of the above questions is “Yes”, then a migration to the Online version of the application would be my recommended course of action, as it wouldn’t surprise me if Microsoft were to stop releasing new versions/service packs for the on-premise version of the product or eliminate it by providing inexpensive sandbox instance options.

Recommended Next Steps

The fundamental aim of this move is a housekeeping exercise for Microsoft. The announcement earlier this year of version 2 of the Common Data Service – which utilises the existing D365CE SQL database for all customisations – is the key driver behind a lot of the changes that are happening in the CRM/D365CE space today. The focus for the product team at Microsoft currently appears to be towards knitting together both experiences into the PowerApps interface. What this means in practice is that the traditional customisation experience is going to slowly fade away, to be replaced by Model-Driven App development instead. This refresh is excellent for several reasons – it provides a much-needed interface update, while also exposing additional functionality to us when creating business applications – but it is evident that such a massive change will require a consistent playing field for all of Microsoft’s existing version 8.2 and below D365CE customers. Getting everyone onto version 9 of the application is the obvious prerequisite for rolling out version 2 of the Common Data Service to all existing customers, while ensuring that D365CE can fit into the mould of other application release cycles across Microsoft today. Embracing the change should not be a difficult thing to do and, when you understand the broader context, there is no other option available on the table.

So what are the key takeaways from this that you should be thinking about in the weeks and months ahead? My suggested list would include the following:

  • Schedule your update to version 9 of the application manually, well in advance of August 16th 2018. DO NOT put yourself in a position where an update is forced upon you; give yourself the time needed to successfully plan and test your upgrade before January 31st 2019. I would also anticipate that upgrade slots may start to fill up fast if you wait until as late as possible 🙂
  • Start considering your future strategy in regards to the on-premise version of the application, if you are still supporting these environments. I speak with literally zero authority here, but I would not be surprised if the on-premise version of the application receives no further updates at all in future or if dual-usage rights get revoked entirely.
  • Get familiar with the Common Data Service and PowerApps, as this is increasingly going to be the go-to area for D365CE development and administration in the future. If you get the opportunity to attend one of Microsoft’s PowerApps in a Day courses, then be sure to go along without any hesitation. I would also be happy to speak to and help anyone with training in this area.
  • As with anything in life, embrace change, be proactive and identify areas of opportunity from this. A good one from my perspective is the potential to more easily introduce the staggering array of differing Business Application functionality, with the outcome being the ability to quickly deploy bespoke business applications that achieve any possible requirement and integrate with a wide variety of different services or datasets.

On February 10th 2015, Microsoft published Security Bulletin MS15-011, which detailed a recently discovered critical flaw in every major version of Windows from Server 2003 right the way through to Windows 8.1. The flaw, relating to how Group Policy handles data, potentially allows:

…remote code execution if an attacker convinces a user with a domain-configured system to connect to an attacker-controlled network. An attacker who successfully exploited this vulnerability could take complete control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

Microsoft was quick to release a corresponding security patch via the Windows Update service, and the update for each affected Operating System can be downloaded and installed via the links below.

Now, you may be asking at this point, why are you posting about this now in mid-2018? Surely the vulnerability has been addressed on all affected systems that are patched regularly? Well, as I found out very recently, this security patch is one of many released by Microsoft that requires additional administrator intervention after installation to ensure that the exploit hole is properly filled. The modifications required can be completed either via the Local Group Policy Editor for single machines not joined to a domain or via the Group Policy Management Console from a domain controller. The second option is naturally preferred if you are managing a large estate of Windows machines. Below are summarised steps that should provide the appropriate guidance on applying the fix for both environments:

  1. Navigate to the Group Policy Management Editor (or Local Group Policy Editor) and expand and open the Computer Configuration/Policies/Administrative Templates/Network/Network Provider folder path.

  2. On the right-hand pane, you should see an item called Hardened UNC Paths, marked in a state of Not configured. Click on it to open its properties.

  3. There are then a couple of steps that need to be completed on the pop-up window that appears:
    • Ensure that the Enabled box is selected.
    • In the Options tab, scroll down to the Show… button and press it. The options at this stage depend upon your specific environment. For example, let’s assume that you have a domain with a file server called MyServer, which is configured for shared access. The most appropriate option, in this case, would be a Value name of \\MyServer\* with a Value of RequireMutualAuthentication=1, RequireIntegrity=1. Another example scenario could be that multiple servers are used to share out access to a share called Company. In this case, you could use a Value name of \\*\Company with a Value of RequireMutualAuthentication=1, RequireIntegrity=1. Both of these examples are reproduced in the screenshot below for your reference, and a scripted equivalent is sketched after these steps. Press the OK button to confirm the UNC path fields and Apply to make the policy change.

 

  4. The final step will be to enforce a Group Policy refresh on the target machine and any others on the domain. This can be done by executing the gpupdate /force command from a Command Prompt and confirming that no errors are generated in the output.
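
If you would prefer to script the change rather than click through the editor, for example across standalone machines, the policy ultimately stores its settings in the registry. Below is a minimal PowerShell sketch using the same example UNC paths from step 3; the HardenedPaths registry key referenced is the location this policy writes to, but please verify the behaviour in your own environment before rolling anything out:

#The Hardened UNC Paths policy stores its settings under this registry key
$regPath = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths'

#Create the key if it does not already exist
if (-not (Test-Path $regPath)) {
    New-Item -Path $regPath -Force | Out-Null
}

#Require mutual authentication and integrity for the example UNC paths from step 3
New-ItemProperty -Path $regPath -Name '\\MyServer\*' -PropertyType String -Force `
                 -Value 'RequireMutualAuthentication=1, RequireIntegrity=1' | Out-Null
New-ItemProperty -Path $regPath -Name '\\*\Company' -PropertyType String -Force `
                 -Value 'RequireMutualAuthentication=1, RequireIntegrity=1' | Out-Null

#Enforce a Group Policy refresh afterwards, as per step 4
gpupdate /force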

And that’s it! Your Windows domain/computer should now be properly hardened against the vulnerability 🙂

This whole example represents an excellent case study on the importance of regularly reviewing security bulletins or announcements from Microsoft. The process of carrying out Windows updates can often become one of those thankless tasks that can grind the gears of even the most ardent server administrators. With this in mind, it can be expected that a degree of apathy or lack of awareness regarding the context for certain updates can creep in, leading to situations where issues like this only get flagged up during a security audit or similar. I would strongly urge anyone who is still running one or all of the above Operating Systems to check their group policy configuration as soon as possible to verify that the required changes indicated in this post have been applied.

Once upon a time, there was a new cloud service known as Windows Azure. Over time, this cloud service developed with new features, became known more generally as just Azure, embraced the unthinkable from a technology standpoint and also went through a complete platform overhaul. Longstanding Azure users will remember the “classic” portal, with its very…distinctive…user interface:

Image courtesy of Microsoft

As the range of different services offered on Azure increased and the call for more efficient management tools became almost deafening, Microsoft announced the introduction of a new portal experience and Resource Group Management for Azure resources, both of which are now the de facto means of interacting with Azure today. The old style portal indicated above was officially discontinued earlier this year. In line with these changes, Microsoft introduced new, Resource Manager compatible versions of pretty much every major service available on the “classic” portal…with some notable exceptions. The following “classic” resources can still be created and worked with today using the new Azure portal:

This provides accommodation for those who are still operating compute resources dating back to the days of yore, allowing you to create and manage resources that may be needed to ensure the continued success of your existing application deployment. In most cases, you will not want to create these “classic” resources as part of new project work, as the equivalent Resource Manager options should be more than sufficient for your needs. The only question mark around this concerns Cloud Services. There is no equivalent Resource Manager resource available currently, with the recommended option for new deployments being Azure Service Fabric instead. Based on my research online, there appears to be quite a feature gap between the two offerings, with Azure Service Fabric arguably being overkill for more simplistic requirements. There also appears to be some uncertainty over whether Cloud Services are technically considered deprecated or not. I would highly recommend reading Andreas Helland’s blog post on the subject and forming your own opinion from there.

For both experiences, Microsoft provided a full set of automation tools in PowerShell to help developers carry out common tasks on the Azure Portal. These are split out into the standard Azure cmdlets for the “classic” experience and a set of AzureRM cmdlets for the new Resource Management approach. Although the “classic” Azure resource cmdlets are still available and supported, they very much operate in isolation – that is, if you have a requirement to interchangeably create “classic” and Resource Manager resources as part of the same script file, then you are going to encounter some major difficulties and errors. One example of this is that the ability to switch to subscriptions that you have access to, but not ownership of, becomes nigh on impossible to achieve. For this reason, I would recommend utilising the AzureRM cmdlets exclusively, even when you have a requirement to create “classic” resources to maintain an existing deployment. To help accommodate this scenario, the New-AzureRmResource cmdlet really becomes your best friend. In a nutshell, it lets you create any Azure Resource of your choosing when executed. The catch around using it is that the exact syntax to utilise as part of the -ResourceType parameter can take some time to discover, particularly in the case of working with “classic” resources. What follows are some code snippets that, hopefully, provide you with a working set of cmdlets to create the “classic” resources highlighted in the screenshot above.
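
As an aside, one way to shorten that discovery process is to query the resource providers registered against your subscription. The snippet below is a rough sketch, assuming you are already logged in via the AzureRM module, that lists the resource type strings exposed by the “classic” providers in the format expected by the -ResourceType parameter:

#List every resource type exposed by the "classic" resource providers, in the
#"<Namespace>/<ResourceTypeName>" format expected by the -ResourceType parameter
Get-AzureRmResourceProvider -ListAvailable |
    Where-Object { $_.ProviderNamespace -like 'Microsoft.Classic*' } |
    ForEach-Object {
        $namespace = $_.ProviderNamespace
        $_.ResourceTypes | ForEach-Object { "$namespace/$($_.ResourceTypeName)" }
    }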

Before you begin…

To use any of the cmdlets that follow, make sure you have connected to Azure, selected your target subscription and have a Resource Group created to store your resources using the cmdlets below. You can obtain your Subscription ID by navigating to its properties within the Azure portal:

#Replace the parameter values below to suit your requirements

$subscriptionID = '36ef0d35-2775-40f7-b3a1-970a4c23eca2'
$rgName = 'MyResourceGroup'
$location = 'UK South'

Set-ExecutionPolicy Unrestricted
Login-AzureRmAccount
Set-AzureRmContext -SubscriptionId $subscriptionID
#Create Resource Group
New-AzureRMResourceGroup -Name $rgName -Location $location
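
As a side note, if you would rather not dig through the portal, the subscriptions (and their IDs) available to your account can also be listed directly from PowerShell once you have logged in. A quick sketch:

#After running Login-AzureRmAccount, list every subscription the signed-in account can
#access, along with the ID that can then be passed to Set-AzureRmContext
Get-AzureRmSubscription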

With this done, you should hopefully encounter no problems executing the cmdlets that follow.

Cloud Services (classic)

#Create an empty Cloud Service (classic) resource in MyResourceGroup in the UK South region

New-AzureRmResource -ResourceName 'MyClassicCloudService' -ResourceGroupName $rgName `
                    -ResourceType 'Microsoft.ClassicCompute/domainNames' -Location $location -Force
                    

Disks (classic)

#Create a Disk (classic) resource using a Linux operating system in MyResourceGroup in the UK South region.
#Needs a valid VHD in a compatible storage account to work correctly

New-AzureRmResource -ResourceName 'MyClassicDisk' -ResourceGroupName $rgName `
                    -ResourceType 'Microsoft.ClassicStorage/storageaccounts/disks' `
                    -Location $location `
                    -PropertyObject @{'DiskName'='MyClassicDisk' 
                    'Label'='My Classic Disk' 
                    'VhdUri'='https://mystorageaccount.blob.core.windows.net/mycontainer/myvhd.vhd'
                    'OperatingSystem' = 'Linux'
                    } -Force
                    

Network Security groups (classic)

#Create a Network Security Group (classic) resource in MyResourceGroup in the UK South region.

New-AzureRmResource -ResourceName 'MyNSG' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/networkSecurityGroups' `
                    -Location $location -Force
                    

Reserved IP Addresses (classic)

#Create a Reserved IP (classic) resource in MyResourceGroup in the UK South region.

New-AzureRmResource -ResourceName 'MyReservedIP' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/reservedIps' `
                    -Location $location -Force
                    

Storage Accounts (classic)

#Create a Storage Account (classic) resource in MyResourceGroup in the UK South region.
#Storage account with use Standard Locally Redundant Storage

New-AzureRmResource -ResourceName 'MyStorageAccount' -ResourceGroupName $rgName `
                    -ResourceType 'Microsoft.ClassicStorage/StorageAccounts' `
                    -Location $location -PropertyObject @{'AccountType' = 'Standard-LRS'} -Force
                    

Virtual Networks (classic)

#Create a Virtual Network (classic) resource in MyResourceGroup in the UK South Region

New-AzureRmResource -ResourceName 'MyVNET' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicNetwork/virtualNetworks' `
                    -Location $location `
                    -PropertyObject @{'AddressSpace' = @{'AddressPrefixes' = '10.0.0.0/16'}
                                      'Subnets' = @{'name' = 'MySubnet'
                                                    'AddressPrefix' = '10.0.0.0/24'
                                                    }
                                     }
                                     

VM Images (classic)

#Create a VM image (classic) resource in MyResourceGroup in the UK South region.
#Needs a valid VHD in a compatible storage account to work correctly

New-AzureRmResource -ResourceName 'MyVMImage' -ResourceGroupName $rgName -ResourceType 'Microsoft.ClassicStorage/storageAccounts/vmImages' `
                    -Location $location `
                    -PropertyObject @{'Label' = 'MyVMImage Label'
                    'Description' = 'MyVMImage Description'
                    'OperatingSystemDisk' = @{'OsState' = 'Specialized'
                                              'Caching' = 'ReadOnly'
                                              'OperatingSystem' = 'Windows'
                                              'VhdUri' = 'https://mystorageaccount.blob.core.windows.net/mycontainer/myvhd.vhd'}
                    }
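
To sanity check the results of any of the cmdlets above, you can enumerate everything that now sits within the Resource Group. The following is a minimal sketch that reuses the $rgName variable from earlier:

#List all resources - "classic" or otherwise - now sitting in the target Resource Group
Get-AzureRmResource |
    Where-Object { $_.ResourceGroupName -eq $rgName } |
    Select-Object ResourceName, ResourceType, Location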

Conclusions or Wot I Think

The requirement to work with the cmdlets shown in this post should only really be a concern for those who are maintaining “classic” resources as part of an ongoing deployment. It is therefore important to emphasise not to use these cmdlets to create resources for new projects. Alongside the additional complexity involved in constructing the New-AzureRmResource cmdlet, there is an abundance of new, updated AzureRM cmdlets at your disposal that enable you to more intuitively create the correct types of resources. The key benefit that these examples provide is the ability to use a single Azure PowerShell module for the management of your entire Azure estate, as opposed to having to switch back and forth between different modules. It is perhaps a testament to how flexible Azure is that cmdlets like New-AzureRmResource exist in the first place, ultimately enabling anybody to fine-tune deployment and maintenance scripts to suit any conceivable situation.

The Voice of the Customer (VoC) add-on solution for Dynamics 365 Customer Engagement (D365CE) presents a really nice way of incorporating survey capabilities within your existing Dynamics application estate, without any additional cost or significant administrative overhead. I’ve talked about the tool previously, within the context of specific application errors, and I can attest to its capabilities – both as a standalone solution and as one that can be leveraged alongside other D365CE functionality to generate additional value.

One feature that is particularly useful is the ability to include diverse Survey Response controls. These can cover the range of anticipated user inputs that most web developers would be used to – text inputs, ratings, date pickers etc. – along with more marketing-specific choices such as Net Promoter Score and even a Smilies rating control. The final one of these really does have to be seen to be wholly appreciated:

I hope you agree that this is definitely one of those features that becomes so fun that it soaks up WAY more time than necessary 🙂

One of the final options that VoC provides is the ability to upload files to a Survey Response, which are then stored within the application and made retrievable at any time by locating the appropriate Survey Response record. You can customise the guidance text presented to the user for this control, such as in the example below:

Uploaded files are then saved onto an Azure Blob Storage location (which you don’t have direct access to), with the access URL stored within D365CE. The inclusion of this feature does provide the capability to accommodate several potential business scenarios, such as:

  • Allowing a service desk to create an automated survey that allows error logs or screenshots to be uploaded for further diagnosis.
  • The gathering of useful photographic information as part of a pre-qualification process for a product installation.
  • Enabling customers to upload a photo that provides additional context relating to their experience – either positive or negative.

Putting all of this aside, however, there are a few things that you should bear in mind when first evaluating this feature for your particular requirements. What follows is my list of the major things to be aware of, along with some tips to sidestep any issues.

Privacy concerns…

To better understand why this is relevant, it helps to be aware of exactly how files can be stored on Azure. Azure Blob Storage works on the principle of “blobs” (i.e. files), which can only be created within a corresponding Storage Container. These can be configured using a couple of different options, depending on how you would like to access your data, which is elaborated upon in this really helpful article:

You can configure a container with the following permissions:

  • No public read access: The container and its blobs can be accessed only by the storage account owner. This is the default for all new containers.

  • Public read access for blobs only: Blobs within the container can be read by anonymous request, but container data is not available. Anonymous clients cannot enumerate the blobs within the container.

  • Full public read access: All container and blob data can be read by anonymous request. Clients can enumerate blobs within the container by anonymous request, but cannot enumerate containers within the storage account.

Presumably to mitigate the need for complex deployments of the VoC solution, all uploaded Survey Response files are saved in Full public read access storage containers, meaning that anyone with the URL can access these files. And, as mentioned already, administrators have no direct access to the Azure Storage Account to modify these permissions, potentially compounding this access problem. Now, before you panic too much, the VoC solution deliberately structures the uploaded file URL in the following format:

https://<VoC Region Identifier>.blob.core.windows.net/<Survey Response GUID>-files/<File GUID>-<Example File>

The degree of complexity added here goes a long way towards satisfying most privacy concerns – it would be practically impossible for a human being or computer to guess what a particular uploaded file path is, even if they did have the Survey Response record GUID – but this still does not address the fact that the URL can be freely accessed and shared by anyone with sufficient permissions over the Survey Response entity in D365CE. You should, therefore, take appropriate care when scoping your security privileges within D365CE and look towards carrying out a Privacy Impact Assessment (PIA) over the type of data you are collecting via the upload file control.

…even after you delete a Survey Response.

As mentioned above, the Blob Storage URL is tagged to the Survey Response record within D365CE. So what happens when you delete this record? The answer, courtesy of Microsoft via a support request:

Deleting Survey Response should delete the file uploaded as part of the Survey Response

Based on my testing, however, this does not look to be the case. My understanding of the VoC solution is that it needs to regularly synchronise with components located on Azure, which can lead to a delay in certain actions completing (publishing a Survey, creating a Survey Response record etc.). However, a file from a Survey Response test record that I deleted still remained accessible via its URL up to 8 hours after completing this action. This, evidently, raises a concern over what level of control you have over potentially critical and sensitive data types that may be included in uploaded files. I would urge you to carry out your own analysis as part of a PIA to sufficiently gauge what impact, if any, this may have on your data collection (and, more critically, storage) activities.

Restrictions

For the most part, file upload controls are not a heavily constrained feature, but it is worthwhile to keep the following restrictions in mind:

  • Executable file types are not permitted for upload (.exe, .ps1, .bat etc.)
  • Larger file types may not upload successfully, generating 404 server errors within the control. There is not a documented size limitation, but my testing would indicate that files as big as 60MB will not upload correctly.
  • Only one file upload control is allowed per survey.

The last of these limitations is perhaps the most significant constraint. If you do have a requirement for separate files to be uploaded, then the best option is to provide instructions on the survey, advising users to compress their files into a single .zip archive before upload.

Conclusions or Wot I Think

Despite what this post may be leaning towards, I very much believe the VoC solution and, in particular, the ability to upload Survey Response files, is in perfectly good working order. Going a step further, when viewed from a technical standpoint, I would even say that its method of execution is wholly justified. With the advent of the General Data Protection Regulation (GDPR) earlier this year, however, current attention is all around ensuring that appropriate access controls over data have been properly implemented, so that the privacy of individuals is fully upheld. Here is where the solution begins to fall over to a degree, and where evidence of the journey that VoC has made in becoming part of the Dynamics 365 “family” becomes most apparent. As can be expected, any product which is derived from an external acquisition will always present challenges when being “smushed” together with a new application system. I have been informed that there is an update coming to the VoC solution in August this year, with a range of new features that may address some of the data privacy concerns highlighted earlier. For example, the option will be provided for administrators to delete any uploaded file within a Survey Response on demand. Changes like this will surely go a long way towards providing the appropriate opportunities for VoC to be fully utilised by businesses looking to integrate an effective, GDPR-proof customer survey tool.

I was very honoured and excited to be involved with the very first D365UG/CRMUG North West Chapter Meeting earlier this week, hosted at the Grindsmith just off Deansgate in Manchester. This is the first time that a D365UG/CRMUG event has taken place in the North West, and we were absolutely stunned by the level of interest this event generated – all in all, 37 people attended, representing a broad spectrum of Microsoft partners and organisations of varying sizes.

I very much got the impression that the number of Dynamics 365 Customer Engagement (D365CE) users in the North West far exceeds any figure you might assume, and I am really looking forward to seeing how future events develop as we (hopefully!) get more people involved. Despite a few technical glitches with the AV facilities, the feedback we have received on both presentations has been overwhelmingly positive, so a huge thanks to everyone who turned up and to our presenters for the evening.

In this post, I wanted to share my thoughts on both sets of presentations, provide an answer to some of the questions that we didn’t get around to due to time constraints and, finally, provide a link to the slide deck from the evening.

Transform Group – The Patient Journey

The first talk of the evening was provided courtesy of Bill Egan at Edgewater Fullscope, who took us through Transform Group’s adoption of D365CE. Bill provided some really useful insights – from both an organisation’s and a Microsoft partner’s perspective – into the challenges that any business can face when moving across to a system like D365CE. As with any IT project, there were some major hurdles along the way, but Bill very much demonstrated how the business was able to roll with the punches, and the very optimistic 16-week planned deployment presents an arguably essential blueprint for how IT projects need to be executed; namely, targeted towards delivering as much business benefit as possible in a near-immediate timeframe.

The key takeaways for me out of all this were the importance of adapting projects quickly to changing business priorities and of recognising the continued effort required to ensure that business systems are regularly reviewed and updated to suit the requirements of not just the users, but the wider business.

Power Up Your Power Apps

The second presentation was a literal “head to head” challenge between Craig Bird from Microsoft and Chris “The Tattooed CRM Guy” Huntingford from Hitachi Solutions, to see who could build the best PowerApp. In the end, the voting was pretty much unanimous, and Craig was the proud recipient of a prize worthy of a champion. I hope Craig will be wearing his belt proudly at future events 🙂

I found the presentation particularly useful in clearing up a number of worries I had around the Common Data Service and the future of D365CE. The changes that I saw are very much geared towards providing a much-needed facelift to the current customisation and configuration experience within D365CE, with little requirement to factor in migration or extensive learning of new tools to ensure that your D365CE entities are available within the Common Data Service. Everything “just works” and syncs across flawlessly.

https://twitter.com/joejgriffin/status/1009531079492079622

In terms of who had the best app, I think Craig knocked the socks off everyone with his translator application, and I include myself in that category. That said, I was still surprised to see that PowerApps supports Power BI embedded content, courtesy of Chris – a really nice thing to take away for any aspiring PowerApp developer.

Questions & Answers

We managed to get around to most questions for the first presentation, but not for the second one. Here’s a list of all the questions that I am able to provide an answer to. I’m still in the process of collating responses to the other questions received, so please keep checking back if your burning question is not answered below:

Presentation

For those who missed the event or are wanting to view the slides without a purple tinge, they will be downloadable for the next 30 days from the following location:

https://jamesgriffin-my.sharepoint.com/:p:/g/personal/joe_griffin_gb_net/EbRAws0urypMkrGyqCzoTdMB4ggjUQI4_npQlEZAYhea4w?e=U3lvf5

Looking Ahead

The next chapter meeting is scheduled to take place on the 2nd of October (venue TBC). If you are interested in getting involved, either through giving a presentation or in helping to organise the event, then please let us know by dropping us a message:

  • Email: crmuguknw@gmail.com
  • Twitter: @CRMUG_UK_NW