Only a few years ago, when events such as a reality TV star becoming President of the USA were the stuff of fantasy fiction, Microsoft had a somewhat niche Customer Relationship Management system called Dynamics CRM. Indeed, the very name of this blog still attests to this fact. The “CRM” acronym provides a succinct mechanism for describing the purpose of the system without necessarily confusing people straight out of the gate. For the majority of its lifespan, “CRM” very accurately summarised what the core system was about – namely, a sales and case management system to help manage customer relations.

Along the way, Microsoft acquired a number of organisations and products that offered functionality that, when bolted onto Dynamics CRM, greatly enhanced the application as a whole. I have discussed a number of these previously on the blog, so I don’t propose to retread old ground. However, some notable exceptions from this original list include:

Suffice to say, by the start of 2016, the range of functionality available within Dynamics CRM was growing each month and – perhaps more crucially – there was no clear mechanism in place from a billing perspective for each new solution offered. Instead, if you had a Dynamics CRM Professional license, guess what? All of the above and more was available to you at no additional cost.

Taking this and other factors into account, the announcement in mid-2016 of the transition towards Microsoft Dynamics 365 can be seen as a welcome recognition of the new state of play, ushering Dynamics CRM in from the cold to stand proud amongst the other major Microsoft product offerings. Here’s a link to the original announcement:

Insights from the Engineering Leaders behind Microsoft Dynamics 365 and Microsoft AppSource

The thinking behind the move was completely understandable. Dynamics CRM could no longer be accurately termed as such, as the core application was almost unrecognisable from its 2011 version. Since then, there has been a plethora of additional announcements and changes to how Dynamics 365, in the context of CRM, is referred to within the wider offering. The road has been…rocky, to say the least. Whilst this can be reasonably expected with such a seismic shift, it nevertheless presents some challenges when talking about the application to end users and customers. To emphasise this fact, let’s take a look at some of the “bumps” in the road and my thoughts on why this is still an ongoing concern.

Dynamics 365 for Enterprise and Business

The above announcement did not go into much detail about how the specific Dynamics 365 offerings would be tailored. One of the advantages of the other offerings within the Office 365 range of products is the separation of business and enterprise plans. These allow for a reduced total cost of ownership for organisations under a particular size, with an Enterprise version of the same plan typically available with (almost) complete feature parity but no seat limits. With this in mind, it makes sense that the initial detail in late 2016 confirmed the introduction of business and enterprise Dynamics 365 plans. As part of this, the CRM aspect of the offering would have sat firmly within the Enterprise plan, with – you guessed it – Enterprise pricing to match. The following article from Encore at the time provides a nice breakdown of each offering and the envisioned target audience for each. Thus, we saw the introduction of Dynamics 365 for Enterprise as a replacement term for Dynamics CRM.

Perhaps understandably, when many businesses – typically used to paying £40-£50 per seat for their equivalent Dynamics CRM licenses – discovered that they would have to move to Enterprise plans and pricing significantly in excess of what they were paying, heads were turned. Microsoft Partners also raised a number of concerns with the strategy, which is why it was announced that the Business edition and Enterprise edition labels were being dropped. Microsoft stated that they would:

…focus on enabling any organization to choose from different price points for each line of business application, based on the level of capabilities and capacity they need to meet their specific needs. For example, in Spring 2018, Dynamics 365 for Sales will offer additional price point(s) with different level(s) of functionality.

The expressed desire to enable organisations to “choose” what they want goes back to what I mentioned at the start of this post – providing a billing/pricing mechanism that would support the modular nature of the late Dynamics CRM product. Its introduction as a concept at this stage comes a little late in the whole conversation regarding Dynamics 365, and it represents an important turning point in defining the vision for the product. Whether feedback from partners/customers or an internal realisation brought this about, I’m not sure, but its arrival represents a maturing in thinking concerning the wider Dynamics 365 offering.

Dynamics 365 for Customer Engagement

Following the retirement of the Business/Enterprise monikers and, in a clear attempt to simplify and highlight the CRM aspect of Dynamics 365, the term Customer Engagement started to pop up across various online support and informational posts. I cannot seem to locate a specific announcement concerning the introduction of this wording, but its genesis appears to be early or mid-2017. The problem is that I don’t think there is 100% certainty yet on how the exact phrasing of this terminology should be used.

The best way to emphasise the inconsistency is to look at some examples. The first is derived from the name of several courses currently available on the Dynamics Learning Portal, published this year:

Now, take a look at the title of the following Microsoft Docs article on impending changes to the application:

Notice it yet? There seems to be some confusion about whether for should be used when referring to the Customer Engagement product. Whilst the majority of articles I have found online seem to suggest that Dynamics 365 Customer Engagement is the correct option, again, I cannot point to a definitive source that states without question the correct phraseology that should be used. Regardless, we can see here the birth of the Customer Engagement naming convention, which I would argue is a positive step forward.

The Present Day: Customer Engagement and its 1:N Relationships

Rather handily, Customer Engagement takes us straight through to the present. Today, when you go to the pricing website for Dynamics 365, the following handy chart is presented that helps to simplify all of the various options available across the Dynamics 365 range:

This also indirectly confirms a few things for us:

  • Microsoft Dynamics 365 Customer Engagement and not Microsoft Dynamics 365 for Customer Engagement looks to be the approved terminology when referring to what was previously Dynamics CRM.
  • Microsoft Dynamics 365 is the overarching name for all modules that form a part of the overall offering. It has, by the looks of things, replaced the original Dynamics 365 for Enterprise designation.
  • The business offering – now known as Dynamics 365 Business Central – is in effect a completely separate offering from Microsoft Dynamics 365.

When rolled together, all of this goes a long way towards providing the guidance needed to correctly refer to the whole, or constituent parts, of the entire Dynamics 365 offering.

With that being said, can it be reliably said that the naming “crisis” has ended?

My key concern through all of this is a confusing and conflicting message being communicated to customers interested in adopting the system, with the potential end result of driving them away towards competitor systems. This appears to have been the case for the 1-2 years since the original Dynamics 365 announcement, and a large part of it can perhaps be explained by the insane acquisition drive in 2015/16. Now, with everything appearing to slot together nicely and the pricing platform in place to support each Dynamics 365 “module” and the overall offering, I would hope that any further change in this area is minimal. As highlighted above though, there is still some confusion about the correct replacement terminology for Dynamics CRM – is it Dynamics 365 Customer Engagement or Dynamics 365 for Customer Engagement? Answers on a postcard, please!

Another factor to consider amongst all of this is that naming will constantly be an issue should Microsoft go through another cycle of acquisitions focused towards enhancing the Dynamics 365 offering. Microsoft’s recent acquisition plays appear to be more focused towards providing cost optimization services for Azure and other cloud-based tools, so it can be argued that a period of calm can be expected when it comes to incorporating acquired ISV solutions into the Dynamics 365 product range.

I’d be interested to hear from others on this subject, so please leave a comment below if you have your own thoughts on the interesting journey from Dynamics CRM to Dynamics 365 Customer Engagement.

After going through a few separate development cycles involving Dynamics 365 Customer Engagement (D365CE), you begin to get a good grasp of the type of tasks that need to be followed through each time. Most of these are what you may expect – such as importing an unmanaged/managed solution into a production environment – but others can differ depending on the type of deployment. What ultimately emerges as part of this is the understanding that there are certain configuration settings and records that are not included as part of a Solution file and which must be migrated across to different environments in an alternate manner.

The application has many record types that fit under this category, such as Product or Product Price List. When it comes to migrating these record types into a Production environment, those who are only familiar with working inside the application may choose to utilise the Advanced Find facility in the following manner:

  • Generate a query to return all of the records that require migration, ensuring all required fields are returned.
  • Export out the records into an Excel Spreadsheet
  • Import the above spreadsheet into your target environment via the Data Import wizard.

And there would be nothing wrong with doing things this way, particularly if your skillset sits more on the functional, as opposed to technical, side of things. Where you may come unstuck with this approach is if you have a requirement to migrate Subject record types across environments. Whilst a sensible (albeit time-consuming) approach to this requirement could be to simply create them from scratch in your target environment, you may fall foul of this method if you are utilising Workflows or Business Rules that reference Subject values. When this occurs, the application looks for the underlying Globally Unique Identifier (GUID) of the Subject record, as opposed to the Display Name. If a record with this exact GUID value does not exist within your target environment, then your processes will error and fail to activate. Taking this into account, should you then choose to follow the sequence of tasks above involving Advanced Find, your immediate stumbling block will become apparent, as highlighted below:

As you can see, there is no option to select the Subject entity for querying, compounding any attempts to get these records exported out of the application. Fortunately, there is a way to overcome this via the Configuration Migration tool. This has traditionally been bundled together as part of the application’s Software Development Kit (SDK). The latest version of the SDK for version 8.2 of the application can be downloaded from Microsoft directly, but newer versions – to your delight or chagrin – are only available via NuGet. For those who are unfamiliar with using this, you can download version 9.0.2.3 of the Configuration Migration tool alone using the link below:

Microsoft.CrmSdk.XrmTooling.ConfigurationMigration.Wpf.9.0.2.3

With everything downloaded and ready to go, the steps involved in migrating Subject records between different D365CE environments are as follows:

  1. The first step before any export can take place is to define a Schema – basically, a description of the record types and fields you wish to export. Once defined, schemas can be re-used for future export/import jobs, so it is definitely worth spending some time defining all of the record types that will require migration between environments. Select Create schema on the CRM Configuration Migration screen and press Continue.

  2. Log in to D365CE using the credentials and details for your specific environment.

  3. After logging in and reading your environment metadata, you then have the option of selecting the Solution and Entities to export. A useful aspect to all of this is that you have the ability to define which entity fields you want to utilise with the schema and you can accommodate multiple Entities within the profile. For this example, we only want to export out the Subject entity, so select the Default Solution, the entity in question and hit the Add Entity > button. Your window should resemble the below if done correctly:

  4. With the schema fully defined, you can now save the configuration onto your local PC. After successfully exporting the profile, you will be asked whether you wish to export the data from the instance you are connected to. Hit Yes to proceed.

  5. At this point, all you need to do is define the Save to data file location, which is where a .zip file containing all exported record data will be saved. Once decided, press the Export Data button. This can take some time depending on the number of records being processed. The window should update to resemble the below once the export has successfully completed. Select the Exit button when you are finished to return to the home screen.

  6. You have two options at this stage – you can either exit the application entirely or, if you have your target import environment ready, select the Import data and Continue buttons, signing in as required.

  7. All that remains is to select the .zip file created in step 5), press the Import Data button, sit back and confirm that all record data imports successfully.

It’s worth noting that this import process works similarly to how the in-application Import Wizard operates with regard to record conflicts; namely, if a record with the same GUID value exists in the target instance, then the above import will overwrite the record data accordingly. This can be helpful, as it means that changes to records such as the Subject entity can be completed safely within a development context and promoted accordingly to new environments.
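
If you ever want to sanity-check that the Subject GUIDs referenced by your Processes actually exist in a target environment, the SDK makes this straightforward. The following is a minimal sketch only, assuming you already have an IOrganizationService connection to hand (for example, via CrmServiceClient); the class and method names are purely illustrative:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

namespace Sample.SubjectGuidCheck
{
    public static class SubjectChecker
    {
        //Lists every Subject Title alongside its underlying GUID - run this against both the
        //source and target environments and compare the output before activating any Processes.
        public static void ListSubjects(IOrganizationService service)
        {
            QueryExpression query = new QueryExpression("subject")
            {
                ColumnSet = new ColumnSet("title")
            };

            EntityCollection subjects = service.RetrieveMultiple(query);

            foreach (Entity subject in subjects.Entities)
            {
                //The Id value is what any Workflow or Business Rule reference ultimately resolves to
                Console.WriteLine("{0}: {1}", subject.GetAttributeValue<string>("title"), subject.Id);
            }
        }
    }
}

Running the same method against the source and target instances and comparing the output quickly confirms whether the Configuration Migration import has done its job.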

The Configuration Migration tool is incredibly handy to have available but is perhaps not one that is shouted about from the rooftops that often. Its usefulness extends not just to the Subject entity, but also to the other entity types discussed at the start of this post. Granted, if you do not find yourself working much with Processes that reference these so-called “configuration” records, then introducing the above step as part of any release management process could prove to be an unnecessary administrative burden. Regardless, there is at least some merit in factoring the above tool into an initial release of a D365CE solution, to ensure that all development-side configuration is quickly and easily moved across to your production environment.

When it comes to maintaining any kind of Infrastructure as a Service (IaaS) resource on a cloud provider, the steps involved are often more complex when compared with equivalent Platform as a Service (PaaS) offerings. This is compensated for by the level of control IaaS resources typically grant over the operating system environment and the applications that reside therein. This can be useful if, for example, your application needs to maintain a specific version of a framework/programming language and you do not want your chosen cloud provider to patch this behind the scenes, without your knowledge. One of the major trade-offs as part of all this, however, is that putting together a comprehensive disaster recovery plan is no longer such a cakewalk, requiring instead significant effort to design, implement and test at regular intervals.

Microsoft Azure, like other cloud providers, offers Virtual Machines as its “freest” IaaS offering. This facilitates a whole breadth of customisation options for the underlying operating system, including the type (Windows or Linux), the default software deployed and the underlying network configuration. The same problems – with respect to disaster recovery – still exist and may even be compounded if your Virtual Machine is host to an application that is publicly available across the internet. Whilst you are able to make a copy of your VM somewhat quickly, there is no easy way to migrate across the public IP address of a Virtual Machine without considerable tinkering in the portal. This can lead to delays in initiating any failover or restore action, as well as the risk of introducing human error into the equation.

Fortunately, with a bit of PowerShell scripting, it is possible to fully automate this process. Say, for example, you need to restore a Virtual Machine using Managed Disks to a specific snapshot version, ensuring that the network configuration is mirrored and copied across to the new resource. The outline steps would look like this when getting things scripted out in PowerShell:

  1. Login to your Azure account and subscription where the VM resides.
  2. Create a new Managed Disk from a Recovery Services Vault snapshot.
  3. Obtain the deployment properties of the existing Virtual Machine and utilise this for the baseline configuration of the new Virtual Machine.
  4. Associate the newly created Managed Disk with the configuration created in step 3.
  5. Create a placeholder Public IP Address and swap this out with the existing Virtual Machine’s current address.
  6. Define a new Network Interface for the configuration created in step 3 and associate the existing Public IP Address to this.
  7. Create a new Network Security Group for the Network Interface added in step 6, copying all rules from the existing Virtual Machine Network Security Group
  8. Create the new Virtual Machine from the complete configuration properties.

With all these steps completed, a consistent configuration is defined to create a Virtual Machine that is almost indistinguishable from the existing one and which, more than likely, has taken less than 30 minutes to create. 🙂 Let’s jump in and take a look at an outline script that will accomplish all of this.

Before we begin…

One of the pre-requisites for executing this script is that you have backed up your Virtual Machine using a Recovery Services Vault and performed a recovery of a previous image snapshot to a Storage Account location. The below script also assumes the following regarding your target environment:

  • Your Virtual Machine must be using Managed Disks and have only one Operating System disk attached to it.
  • The affected Virtual Machine must be switched off and in a Stopped (deallocated) state on the platform.
  • The newly created Virtual Machine will reside in the same Virtual Network as the existing one.
  • The existing Network Security Group for the Virtual Machine utilises the default security rules added upon creation.

That’s enough talking now! Here’s the script:

#Specify variables below:

#subID: The subscription GUID, obtainable from the portal
#rg: The Resource Group name of where the Virtual Machine is located
#vmName: The name of the Virtual Machine
#vhdURI: The URL for the restored Virtual Hard Disk (VHD)
#diskName: The name for the newly created managed disk
#location: Location for the newly created managed disk
#storageAccountName: Name of the storage account where the restored VHD is stored
#storageAccountID: The Resource ID for the storage account where the VHD is stored
#containerName: Name of the container where the restored VHD is stored
#blobName: Name of the restored VHD config file, in JSON format
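#newVMName: The name for the new Virtual Machine that will be created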
#oldNICName: Name of the existing VM Network Interface Card
#newNICName: Name of the Network Interface Card to be created for the copied VM
#newPIPName: Name of the new Public IP Address that will be swapped with the existing one
#oldPIPName: Name of the existing Public IP Address that will be swapped out with the new one.
#vnetName: Name of the Virtual Network used with the current Virtual Machine
#vnetSubnet: Name of the subnet on the Virtual Network used with the current Virtual Machine.
#oldNSG: Name of the existing Network Security Group for the Virtual Machine
#newNSG: Name for the newly created Network Security Group for the new Virtual Machine
#destinationPath: Path for the VM config file to be downloaded to
#ipConfig: Name of the IP config used for the Virtual Network

$subID = '8fb17d52-b6f7-43e4-a62d-60723ec6381d'
$rg = 'myresourcegroup'
$vmName = 'myexistingvm'
$vhdURI = 'https://mystorageaccount.blob.core.windows.net/vhde00f9ddadb864fbbabef2fd683fb350d/bbc9ed4353c5465782a16cae5d512b37.vhd'
$diskName = 'mymanagedisk'
$location = 'uksouth'
$storageAccountName = 'mystorageaccount'
$storageAccountID = '/subscriptions/5dcf4664-4955-408d-9215-6325b9e28c7c/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount'
$containerName = 'vhde00f9ddadb864fbbabef2fd683fb350d'
$blobName = 'config9064da15-b889-4236-bb8a-38670d22c066.json'
$newVMName = 'mynewvm'
$oldNICName = 'myexistingnic'
$newNICName = 'mynewnic'
$newPIPName = 'mynewpip'
$oldPIPName = 'myexistingpip'
$vnetName = 'myexistingvnet'
$vnetSubnet = 'myexistingsubnet'
$oldNSG = 'myexistingnsg'
$newNSG = 'mynewnsg'
$destinationPath = 'C:\vmconfig.json'
$ipConfig = 'myipconfig'

#Login into Azure and select the correct subscription

Login-AzureRmAccount
Select-AzureRmSubscription -Subscription $subID

#Get the properties of the existing VM that requires restoring, for reference.

$vm = Get-AzureRmVM -Name $vmName -ResourceGroupName $rg

#Create managed disk from the storage account backup.

$diskConfig = New-AzureRmDiskConfig -AccountType 'StandardLRS' -Location $location -CreateOption Import -StorageAccountId $storageAccountID -SourceUri $vhdURI
$osDisk = New-AzureRmDisk -Disk $diskConfig -ResourceGroupName $rg -DiskName $diskName

#Download VM configuration file and define new VM configuration from this file

Set-AzureRmCurrentStorageAccount -Name $storageAccountName -ResourceGroupName $rg
Get-AzureStorageBlobContent -Container $containerName -Blob $blobName -Destination $destinationPath
$obj = ((Get-Content -Path $destinationPath -Raw -Encoding Unicode)).TrimEnd([char]0x00) | ConvertFrom-Json

$newVM = New-AzureRmVMConfig -VMSize $obj.'properties.hardwareProfile'.vmSize -VMName $newVMName

#Add newly created managed disk to new VM config

Set-AzureRmVMOSDisk -VM $newVM -ManagedDiskId $osDisk.Id -CreateOption "Attach" -Windows

#Create new Public IP and swap this out with existing IP Address

$pip = New-AzureRmPublicIpAddress -Name $newPIPName -ResourceGroupName $rg -Location $location -AllocationMethod Static
$vnet = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rg
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $vnetSubnet -VirtualNetwork $vnet
$oldNIC = Get-AzureRmNetworkInterface -Name $oldNICName -ResourceGroupName $rg
$oldNIC | Set-AzureRmNetworkInterfaceIpConfig -Name $ipConfig -PublicIpAddress $pip -Primary -Subnet $subnet
$oldNIC | Set-AzureRmNetworkInterface

#Define new VM network configuration settings, using existing public IP address, and add to configuration

$existPIP = Get-AzureRmPublicIpAddress -Name $oldPIPName -ResourceGroupName $rg
$nic = New-AzureRmNetworkInterface -Name $newNICName -ResourceGroupName $rg -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $existPIP.Id
$newVM = Add-AzureRmVMNetworkInterface -VM $newVM -Id $nic.Id

#Obtain existing Network Security Group and create a copy of it for the new Network Interface Card

$nsg = Get-AzureRmNetworkSecurityGroup -Name $oldNSG -ResourceGroupName $rg
$newNSG = New-AzureRmNetworkSecurityGroup -Name $newNSG -ResourceGroupName $rg  -Location $location
foreach($rule in $nsg.SecurityRules)
{
    $newNSG | Add-AzureRmNetworkSecurityRuleConfig -Name $rule.Name -Access $rule.Access -Protocol $rule.Protocol -Direction $rule.Direction -Priority $rule.Priority -SourceAddressPrefix $rule.SourceAddressPrefix -SourcePortRange $rule.SourcePortRange -DestinationAddressPrefix $rule.DestinationAddressPrefix -DestinationPortRange $rule.DestinationPortRange | Set-AzureRmNetworkSecurityGroup
}

$nic.NetworkSecurityGroup = $newNSG
$nic | Set-AzureRmNetworkInterface

#Create the VM. This may take some time to complete.

New-AzureRmVM -ResourceGroupName $rg -Location $location -VM $newVM

Conclusions or Wot I Think

Automation should be a key driver behind running an effective business and, in particular, any IT function that exists within it. When architected prudently, repetitive and time-wasting tasks can be removed and the ever-pervasive risk of human error can be eliminated from business processes (unless, of course, the person defining the automation has made a mistake 🙂 ). The management of IaaS resources fits neatly into this category and, as I hope the example in this post has demonstrated, can take a particularly onerous task and reduce the complexity involved in carrying it out. This can help to save time and effort should the worst ever happen to your application. When compared with other cloud vendors, this is what ultimately makes Azure a good fit for organisations who are used to working with tools such as PowerShell; scenarios like this become almost a cakewalk to set up and require minimal additional study to get up and running.

Typically, when working with Dynamics 365 for Customer Service entities, you expect a certain type of behaviour. A good example of this in practice is entity record activation and the differences between Active and Inactive record types. In simple terms, you are generally restricted in the actions that can be performed against an Inactive record, the most common restriction being the modification of field values. You can, however, perform actions such as deleting and reassigning records to other users in the application. The latter of these can be particularly useful if, for example, an individual leaves a business and you need to ensure that another employee has access to their old, inactive records.
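
As a quick aside, reassignment of this kind does not have to be carried out manually in the interface; it can also be scripted via the SDK. The below is a hedged sketch only – the service connection and the two GUID values are assumed to be supplied from elsewhere, and the class/method names are purely illustrative:

using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

namespace Sample.ReassignInactiveCase
{
    public static class CaseReassigner
    {
        //Reassigns a single (potentially Inactive) Case record to another user.
        //The caller provides the IOrganizationService connection, the Case GUID and
        //the GUID of the new owning user.
        public static void Reassign(IOrganizationService service, Guid caseId, Guid newOwnerId)
        {
            AssignRequest assign = new AssignRequest
            {
                Target = new EntityReference("incident", caseId),
                Assignee = new EntityReference("systemuser", newOwnerId)
            };

            service.Execute(assign);
        }
    }
}

Bear in mind that a reassignment performed this way will still bump the Modified On value on each record – which leads nicely onto the main subject of this post.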

When it comes to reporting on time intervals for when a record was last changed, I often – rightly or wrongly – see the Modified On field used for this purpose. This, essentially, stores the date and time of when the record was…well…last modified in the system! Since only more recent changes to the application have facilitated an alternative approach to reporting a record’s age and its current stage within a process, it is perhaps understandable why this field is often chosen when attempting to report, for example, the date on which a record was moved to Inactive status. Where you may encounter issues with this is if you are working in a situation similar to the one highlighted above – namely, an individual leaving a business – and you need to reassign all of the inactive records owned by them. Doing either of these steps will immediately update the Modified On value to the current date and time, skewing any dependent reporting.

Fortunately, there is a way of getting around this, if you don’t have any qualms about opening up Visual Studio and putting together a plug-in in C#. Via this route, you can prevent the Modified On value of a record from being updated by capturing the original value and forcing the platform to commit this value to the database during the Pre-Operation stage of the transaction, as opposed to the date and time of when the record is reassigned. In fact, using this method, you can set the Modified On value to be whatever you want. Here’s the code that illustrates how to achieve both scenarios:

using System;

using Microsoft.Xrm.Sdk;

namespace Sample.OverrideCaseModifiedDate
{
    public class PreOpCaseAssignOverrideModifiedDate : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            //Obtain the execution context from the service provider.

            IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

            //Extract the tracing service for use in debugging sandboxed plug-ins

            ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

            tracingService.Trace("Tracing implemented successfully!");

            if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
            {
                Entity incident = (Entity)context.InputParameters["Target"];
                Entity preIncident = context.PreEntityImages["preincident"];

                //At this stage, you can either get the previous Modified On date value via the Pre Entity Image and set the value accordingly...

                incident["modifiedon"] = preIncident.GetAttributeValue<DateTime>("modifiedon");

                //Or alternatively, set it to whatever you want - in this example, we get the Pre Entity Image createdon value, add on an hour and then set the value

                DateTime createdOn = preIncident.GetAttributeValue<DateTime>("createdon");
                TimeSpan time = new TimeSpan(1, 0, 0);
                DateTime newModifiedOn = createdOn.Add(time);
                incident["modifiedon"] = newModifiedOn;

            }
        }
    }
}

When deploying your plug-in code out to the application (something that I hope regular readers of the blog will be familiar with), make sure that the settings you configure for your Step and the all-important Entity image resemble the images below:

Now, the more eagle-eyed readers may notice that the step is configured on the Update as opposed to the Assign message, which is what you (and, indeed, I when I first started out with this) may expect. Unfortunately, because the Input Parameters of the Assign message only return two Entity Reference objects – for the incident (Case) and systemuser (User) entities, respectively – as opposed to an Entity object for the incident entity, we have no means of interfering with the underlying database transaction to override the required field values. The Update message does not suffer from this issue and, by scoping the plug-in’s execution to the ownerid field only, we can ensure that it will only ever trigger when a record is reassigned.
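
To illustrate the difference, here is a rough sketch – separate from the plug-in above and purely for demonstration – of how the Target parameter surfaces on each message:

using Microsoft.Xrm.Sdk;

namespace Sample.OverrideCaseModifiedDate
{
    public static class MessageShapeExample
    {
        //Shows why the plug-in is registered on Update rather than Assign: the Assign message
        //supplies its Target as an EntityReference only, whereas Update supplies a full Entity
        //whose attribute values (such as modifiedon) can be changed during Pre-Operation.
        public static void InspectTarget(IPluginExecutionContext context)
        {
            if (context.MessageName == "Assign")
            {
                //Both parameters are EntityReference objects - simple pointers to the Case and
                //the new owner, with no attribute collection available to modify
                EntityReference target = (EntityReference)context.InputParameters["Target"];
                EntityReference assignee = (EntityReference)context.InputParameters["Assignee"];
            }
            else if (context.MessageName == "Update")
            {
                //Target is a full Entity, so field values can be set before the platform
                //commits the transaction to the database
                Entity incident = (Entity)context.InputParameters["Target"];
            }
        }
    }
}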

With the above plug-in configured, you have the flexibility of re-assigning your Case records without updating the Modified On field value in the process, or of expanding this further to suit whatever business requirement is needed. In theory, the approach used in this example could also be applied to other, what we may term, “system defined” fields, such as the Created On field. Hopefully, this post may prove of some assistance if you find yourself having to tinker around with inactive Case records in the future.

The ability to develop a custom barcode scanning application, compatible with a specific mobile, tablet or other device type, would traditionally be the realm of a skilled developer. The key benefit this brings is the ability to develop a highly tailored business application, with almost limitless potential. The major downside is, of course, the requirement for having this specific expertise within a business, along with sufficient time to plan, develop and test the application.

When it comes to using some of the newly introduced weapons in the Microsoft Online Services arsenal (such as Microsoft Flow, a topic which has been a recent focus on the blog), these traditional assumptions around application development can be discarded, and empowering tools are provided to developers and non-developers alike, enabling them to create highly customised business applications without writing a single line of code. A perfect example of what can be achieved as a consequence has already been hinted at. With PowerApps, you have the ability to create a highly functional barcode scanning application that can perform a variety of functions – register event attendees’ details into a SQL database, look up a Product stored in Dynamics 365 Customer Engagement and much, much more. The seemingly complex becomes bafflingly simple.

I’ve been lucky to (finally!) have a chance to play around with PowerApps and the barcode scanning control recently. What I wanted to do as part of this week’s blog post was provide a few tips on using this control, with the aim of easing anyone else’s journey when using it for the first time.

Make sure your barcode type is supported – and factor in multiple barcode types, if required

The one thing you have to get your head around fairly early on is that the barcode control can only support one barcode type at any given time. At the time of writing this post, the following barcode types are supported:

  • Codabar
  • Code 128
  • Code 39
  • EAN
  • Interleaved 2 of 5
  • UPC

Note in particular the lack of current support for QR codes, which are essentially a barcode on steroids. There are, however, plans to introduce this as an option in the very near future, so we won’t have to wait too long for this.

Be sure to RTFM

It can be easy to forget this rather forceful piece of advice, but familiarising yourself with the barcode control, its properties and some of its acknowledged limitations can go a long way towards streamlining the deployment of a workable barcode scanner app. I’ve collated a list of the most useful articles below, which I highly encourage you to read through in their entirety:

Whilst PowerApps is generally quite a good tool to jump into without prerequisite knowledge, working with specific controls – such as this one – warrants a much more careful study to ensure you fully understand what is and isn’t supported. What’s also quite helpful about the articles (jumping back to barcode types for a second) is that an example is provided on how to create a drop-down control to change the type of barcode that can be scanned at runtime. Typically, for an internal app where a defined barcode type is utilised consistently, the requirement for this may not be present; but it is nice to know the option is there.

For the best success rate when scanning, ensure the barcode control occupies as much space as possible

In one of the example posts above, a barcode control is situated atop a Gallery control with a search box. Whilst this may look appealing from a usability aspect, your main issue will be in ensuring that devices using the app can properly scan any potential barcode. For example, when testing iOS devices in this scenario, I found it practically impossible to ensure a consistent scan rate. I’ve not tested with other mobile devices to confirm the same issue occurs, but given that the issue on iOS appears to be down to a “memory issue”, I can imagine the same problem occurring for lower-spec Android devices as well. Microsoft’s recommendation for optimal barcode scanning controls provides a workaround solution that can be adapted:

To delay running out of memory on devices that are running iOS, set the Height property of the Barcode control to 700 (or lower) and the Scanrate property to 30.

With this in mind, you can create a dedicated screen on your PowerApp for barcode scanning that occupies sufficient space, whilst also providing an easy to operate experience for your end users. An optimal layout example is provided below:

Then, with a bit of expression/formula trickery, functionality for this screen can be nicely rounded out as follows:

  • Set the Default property value of the Text Input control to the name of the scanner control. This will then place the value of any scanned barcode within the input control, allowing the user to modify if necessary.
  • For the Scan button, define the Back() expression for the OnSelect action. Then, set the Default property value of whatever control needs to receive the scanned barcode value to the Text Input control on the screen above (e.g. if working with a gallery/search box, then you would set this on the search box).
  • Finally, the Cancel button needs to do the same as the Scan button, in addition to safely discarding whatever has been scanned. This can be achieved by specifying the Clear command for the control that is being passed the scanned value when the Scan button is selected. Rather handily, we can specify multiple expression/formulas for a single event via the use of a semi-colon delimiter. Therefore, to go back to the previous screen and clear the value of the Text Input control called MySearchBox, use the following expression:
    • Back();MySearchBox.Clear

By having a barcode control that mimics the size indicated above, scan rates for iOS devices are improved by a significant margin and the control becomes a dream to use 🙂

And the rest…

To round things off, there are a couple of other concepts surrounding barcode controls that I thought may be useful to point out:

  • The Camera property value is not documented at all for this control, but my understanding is that it provides a mechanism for specifying the device camera number/ID to use for the control. So, for example, if you have a Windows 10 tablet device and want to ensure that the front camera is used to scan barcodes for an entry registration system, you can use this option to force this. Utilising this setting would, I imagine, limit your app across multiple device types, so I would advise caution in its use. Let me know in the comments below if you have any joy using this property at all.
  • By default, the Barcode detection setting is enabled. This displays a yellow box when a possible scan area is detected and also a red line to indicate when a barcode value is read successfully. This setting is designed to assist when scanning, but I have noticed it can be erratic from time to time. It can be disabled safely without reducing the scanning functionality too much; you just won’t know straight away when you have scanned something successfully.
  • As the control is reliant on individual device camera privacy settings, you will need to ensure that any corporate device policies do not block the usage of the camera whilst using PowerApps and, in addition, you will need to give explicit camera access permission when launching the app for the first time. I did encounter a strange issue on iOS where the camera did not load after granting this permission. A device reboot seemed to resolve the issue.

Conclusions or Wot I Think

I am a strong believer in providing fast and effective resolution of any potential business requirement or concern. Typically, those working within an IT department will not be directly responsible for generating revenue or value for the business beyond IT support. This is what makes an attack-focused, proactive and value-generating outlook an essential requirement, in my view, for anyone working in an IT-focused role. In tandem with this approach, tools that aid you in adhering to these tenets should be adopted readily and, after prudent assessment (i.e. balanced and not time costly), without fear. PowerApps, and the use cases hinted towards within this post, scores very highly on my list of essential tools in fostering this approach. It really ensures that typical “Power” users within an organisation can develop quick and easy solutions, with a lack of previous experience not necessarily being a barrier. In my own case, having a firm background using Microsoft stack solutions in the past meant that my initial journey with PowerApps was that much easier. Ultimately, I think, PowerApps aims to save time and reduce the business concerns that arise from bloated software deployments, putting a much more business-focused onus on individuals to develop valuable solutions within a particular department or organisation.