It’s sometimes useful to determine the name of the user account that executes a Workflow within Dynamics CRM/Dynamics 365 for Customer Engagement (CRM/D365CE). What can make this a somewhat fiendish task to accomplish is the default behaviour within the application, which exposes very little contextual information each time a Workflow is triggered. Take, for example, the following simplistic Workflow which creates an associated Note record whenever a new Lead record is created:

The Note record is set to be populated with the default values available to us regarding the Workflow execution session – Activity Count, Activity Count including Process and Execution Time:

We can verify that this Workflow works – and view the exact values of these details – by creating a new Lead record and refreshing the record page:

The Execution Time field is somewhat useful, but the Activity Count and Activity Count including Process values relate to Workflow execution sessions and are only arguably useful for diagnostic review – not something that end users of the application will generally be interested in 🙂

Going back to the opening sentence of this post, if we wanted to develop this example further to include the name of the user who executed the Workflow in the Note, we would have to look at deploying a Custom Workflow Assembly to extract the information. The IWorkflowContext Interface is a veritable treasure trove of information that can be exposed to developers to retrieve not just the name of the user who triggers a Workflow, but also the time when the corresponding system job was created, the Business Unit it is being executed within and whether the Workflow was triggered by a parent process. There are three steps involved in deploying custom code into the application for use in this manner:

  1. Develop a CodeActivity C# class file that performs the desired functionality.
  2. Deploy the compiled .dll file into the application via the Plugin Registration Tool.
  3. Modify the existing Workflow to include a step that accesses the custom Workflow Activity.

All of these steps require ready access to Visual Studio, a C# class plugin project (either a new one or an existing one) and the CRM SDK that corresponds to your version of the application.

Developing the Class File

To begin with, make sure your project includes References to the following Frameworks:

  • System.Activities
  • Microsoft.Xrm.Sdk
  • Microsoft.Xrm.Sdk.Workflow

Add a new Class (.cs) file to your project and copy & paste the below code, overwriting any existing code in the window. Be sure to update the namespace value to reflect your project name:

using System.Activities;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Workflow;

namespace D365.Demo.Plugins
{
    public class GetWorkflowInitiatingUser : CodeActivity
    {
        protected override void Execute(CodeActivityContext executionContext)
        {
            IWorkflowContext workflowContext = executionContext.GetExtension<IWorkflowContext>();
            CurrentUser.Set(executionContext, new EntityReference("systemuser", workflowContext.InitiatingUserId));
        }

        [Output("Current User")]
        [ReferenceTarget("systemuser")]
        public OutArgument<EntityReference> CurrentUser { get; set; }
    }
}
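As an aside, the same pattern can be extended to surface the other IWorkflowContext values mentioned earlier. The class below is a minimal, illustrative sketch only – the output argument names are my own, and it assumes the standard BusinessUnitId, OperationCreatedOn and ParentContext properties exposed by the context:

using System;
using System.Activities;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Workflow;

namespace D365.Demo.Plugins
{
    //Illustrative sketch: exposes a few more execution context values as outputs.
    public class GetWorkflowContextDetails : CodeActivity
    {
        protected override void Execute(CodeActivityContext executionContext)
        {
            IWorkflowContext workflowContext = executionContext.GetExtension<IWorkflowContext>();

            //The Business Unit the Workflow is executing within
            BusinessUnit.Set(executionContext, new EntityReference("businessunit", workflowContext.BusinessUnitId));

            //When the corresponding System Job was created
            JobCreatedOn.Set(executionContext, workflowContext.OperationCreatedOn);

            //True if this Workflow was started by a parent context (e.g. another Workflow)
            TriggeredByParent.Set(executionContext, workflowContext.ParentContext != null);
        }

        [Output("Business Unit")]
        [ReferenceTarget("businessunit")]
        public OutArgument<EntityReference> BusinessUnit { get; set; }

        [Output("Job Created On")]
        public OutArgument<DateTime> JobCreatedOn { get; set; }

        [Output("Triggered By Parent")]
        public OutArgument<bool> TriggeredByParent { get; set; }
    }
}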

Right-click your project and select Build. Verify that no errors are generated and, if so, then that’s the first step done and dusted 🙂

Deploy to CRM/D365CE

Open up the Plugin Registration Tool and connect to your desired instance. If you are deploying an existing, updated plugin class, then right-click it on the list of Registered Plugins & Custom Workflow Activities and click Update; otherwise, select Register -> Register New Assembly. The same window opens in any event. Load the newly built assembly from your project (can be located in the \bin\Debug\ folder by default) and ensure the Workflow Activity entry is ticked before selecting Register Selected Plugins:

After registration, the Workflow Activity becomes available for use within the application; so time to return to the Workflow we created earlier!

Adding the Custom Workflow Activity to a Process

By deactivating the Workflow Default Process Values Example Workflow and selecting Add Step, we can verify that the Custom Workflow Assembly is available for use:

Select the above, making sure first of all that the option Insert Before Step is toggled (to ensure it appears before the already configured Create Note for Lead step). It should look similar to the below if done correctly:

Now, when we go and edit the Create Note for Lead step, we will see a new option under Local Values which, when selected, brings up a whole range of different fields that correspond to fields from the User Entity. Modify the text within the Note to retrieve the Full Name value and save it onto the Note record, as indicated below:

After saving and reactivating the Workflow, we can verify it is working by again creating a new Lead record and refreshing to review the Note text:

All working as expected!

The example shown in this post is, admittedly, of limited use in a practical business scenario, but the underlying technique could prove useful in a number of circumstances:

  • If your Workflow contains branching logic, you can test whether it has been executed by a specific user and then perform bespoke logic based on this value.
  • Records can be assigned to other users/teams, based on who has triggered the Workflow (sketched below).
  • User activity could be recorded in a separate entity for benchmarking/monitoring purposes.
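To illustrate the second scenario, the sketch below shows how the same Custom Workflow Activity pattern could be extended to re-assign the record the Workflow is running against. Treat it as a rough outline under stated assumptions – the input arguments are my own, and the use of AssignRequest (which requires a reference to Microsoft.Crm.Sdk.Proxy) is simply one way of performing the assignment:

using System.Activities;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Workflow;

namespace D365.Demo.Plugins
{
    //Illustrative sketch: re-assigns the triggering record to a nominated user
    //whenever a specific user started the Workflow.
    public class AssignBasedOnInitiatingUser : CodeActivity
    {
        protected override void Execute(CodeActivityContext executionContext)
        {
            IWorkflowContext workflowContext = executionContext.GetExtension<IWorkflowContext>();
            IOrganizationServiceFactory factory = executionContext.GetExtension<IOrganizationServiceFactory>();
            IOrganizationService service = factory.CreateOrganizationService(workflowContext.UserId);

            EntityReference triggeringUser = TriggeringUser.Get(executionContext);
            EntityReference assignTo = AssignTo.Get(executionContext);

            //Only re-assign when the Workflow was started by the nominated user
            if (triggeringUser != null && workflowContext.InitiatingUserId == triggeringUser.Id)
            {
                AssignRequest assign = new AssignRequest
                {
                    Assignee = assignTo,
                    Target = new EntityReference(workflowContext.PrimaryEntityName, workflowContext.PrimaryEntityId)
                };

                service.Execute(assign);
            }
        }

        [Input("Triggering User")]
        [ReferenceTarget("systemuser")]
        public InArgument<EntityReference> TriggeringUser { get; set; }

        [Input("Assign To")]
        [ReferenceTarget("systemuser")]
        public InArgument<EntityReference> AssignTo { get; set; }
    }
}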

It’s also useful to know that the same kind of functionality can be deployed when working with plugins in the application. We will take a look at how this works as part of next week’s blog post.

Dynamics CRM/Dynamics 365 for Customer Engagement (CRM/D365CE) is an incredibly flexible application for the most part. Regardless of how your business operates, you can generally tailor the system to suit your requirements and extend it to your heart’s content; often to the point where it is completely unrecognisable from the base application. Notwithstanding this argument, you will come across aspects of the application that are (literally) hard-coded to behave a certain way and cannot be straightforwardly overridden via the application interface. The most recognisable example of this is the Lead Qualification process. You are heavily restricted in how this piece of functionality acts by default but, thankfully, there are ways in which it can be modified if you are comfortable working with C#, JScript and Ribbon development.

Before we can start to look at options for tailoring the Lead Qualification process, it is important to understand what occurs during the default action within the application. In developer-speak, this is generally referred to as the QualifyLead message and most typically executes when you click the button below on the Lead form:

When called by default, the following occurs:

  • The Status/Status Reason of the Lead is changed to Qualified, making the record inactive and read-only.
  • A new Opportunity, Contact and Account record is created and populated with some of the details entered on the Lead record. For example, the Contact record will have the First Name/Last Name values supplied on the preceding Lead record.
  • You are automatically redirected to the newly created Opportunity record.
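For reference, the same QualifyLead message can also be invoked programmatically via the SDK – useful if you ever need to control the record creation behaviour from a custom application. The snippet below is a minimal sketch; the status value and the wrapper method are illustrative only:

using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

public static class LeadQualificationExample
{
    //Qualifies a Lead via the SDK, choosing exactly which records get created.
    public static void QualifyLead(IOrganizationService service, Guid leadId)
    {
        QualifyLeadRequest qualify = new QualifyLeadRequest
        {
            LeadId = new EntityReference("lead", leadId),
            Status = new OptionSetValue(3), //3 = Qualified
            CreateAccount = true,
            CreateContact = true,
            CreateOpportunity = true
        };

        QualifyLeadResponse response = (QualifyLeadResponse)service.Execute(qualify);

        //The newly created Account/Contact/Opportunity references are returned here
        foreach (EntityReference created in response.CreatedEntities)
        {
            Console.WriteLine("{0}: {1}", created.LogicalName, created.Id);
        }
    }
}

The CreateAccount/CreateContact/CreateOpportunity flags here are the same input parameters we will be manipulating from within the plugin later on.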

This default behaviour is all well and good if you are able to map your existing business processes to the application, but most organisations will typically differ from the application’s B2B-orientated focus. For example, if you are working within a B2C business process, creating an Account record may not make sense, given that this is typically used to represent a company/organisation. Or, conversely, you may want to jump straight from a Lead to a Quote record. Both of these scenarios currently require bespoke development within CRM/D365CE. This can be broadly categorised into two distinct pieces of work:

  1. Modify the QualifyLead message during its execution to force the desired record creation behaviour.
  2. Implement client-side logic to ensure that the user is redirected to the appropriate record after qualification.

The remaining sections of this post will demonstrate how you can go about achieving the above requirements in two different ways.

Our first step is to “intercept” the QualifyLead message at runtime and inject our own custom business logic instead

I have seen a few ways that this can be done. One way, demonstrated here by the always helpful Jason Lattimer, involves creating a custom JScript function and a button on the form to execute your desired logic. As part of this code, you can then specify your record creation preferences. A nice and effective solution, but one that, in its current guise, will soon become obsolete as a result of the SOAP endpoint deprecation. An alternative is to instead deploy a simplistic C# plugin class that ensures your custom logic is obeyed across the application, and not just when you are working from within the Lead form (e.g. you could have a custom application that qualifies leads using the SDK). Here’s how the code would look in practice:

public void Execute(IServiceProvider serviceProvider)
{
    //Obtain the execution context from the service provider.
    IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

    if (context.MessageName != "QualifyLead")
        return;

    //Get a reference to the Organization service.
    IOrganizationServiceFactory factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
    IOrganizationService service = factory.CreateOrganizationService(context.UserId);

    //Extract the tracing service for use in debugging sandboxed plug-ins.
    ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

    tracingService.Trace("Input parameters before:");
    foreach (var item in context.InputParameters)
    {
        tracingService.Trace("{0}: {1}", item.Key, item.Value);
    }

    //Modify the below input parameters to suit your requirements.
    //In this example, only a Contact record will be created.
    context.InputParameters["CreateContact"] = true;
    context.InputParameters["CreateAccount"] = false;
    context.InputParameters["CreateOpportunity"] = false;

    tracingService.Trace("Input parameters after:");
    foreach (var item in context.InputParameters)
    {
        tracingService.Trace("{0}: {1}", item.Key, item.Value);
    }
}

To work correctly, you will need to ensure this is deployed out on the Pre-Operation stage, as by the time the message reaches the Post-Operation stage, you will be too late to modify the QualifyLead message.

The next challenge is to handle the redirect to your record of choice after Lead qualification

Jason’s code above handles this effectively, with a redirect after the QualifyLead request has completed successfully to the newly created Account (which can be tweaked to redirect to the Contact instead). The downside of the plugin approach is that there is no supported way of replicating this redirect. So, if you choose to disable the creation of an Opportunity record and then press the Qualify Lead button…nothing will happen. The record will qualify successfully (which you can confirm by refreshing the form), but you will then have to manually navigate to the record(s) that have been created.

The only way around this with the plugin approach is to look at implementing a similar solution to the above – a Web API request to retrieve your newly created Contact/Account record and then perform the necessary redirect to your chosen entity form:

function redirectOnQualify() {

    setTimeout(function(){
        
        var leadID = Xrm.Page.data.entity.getId();

        leadID = leadID.replace("{", "");
        leadID = leadID.replace("}", "");

        var req = new XMLHttpRequest();
        req.open("GET", Xrm.Page.context.getClientUrl() + "/api/data/v8.0/leads(" + leadID + ")?$select=_parentaccountid_value,_parentcontactid_value", true);
        req.setRequestHeader("OData-MaxVersion", "4.0");
        req.setRequestHeader("OData-Version", "4.0");
        req.setRequestHeader("Accept", "application/json");
        req.setRequestHeader("Content-Type", "application/json; charset=utf-8");
        req.setRequestHeader("Prefer", "odata.include-annotations=\"OData.Community.Display.V1.FormattedValue\"");
        req.onreadystatechange = function () {
            if (this.readyState === 4) {
                req.onreadystatechange = null;
                if (this.status === 200) {
                    var result = JSON.parse(this.response);
                    
                    //Uncomment based on which record you wish to redirect to.
                    //Currently, this will redirect to the newly created Account record
                    var accountID = result["_parentaccountid_value"];
                    Xrm.Utility.openEntityForm('account', accountID);

                    //var contactID = result["_parentcontactid_value"];
                    //Xrm.Utility.openEntityForm('contact', contactID);

                }
                else {
                    alert(this.statusText);
                }
            }
        };
        req.send();
        
    }, 6000);     
}

The code is set to execute the Web API call 6 seconds after the function triggers, to ensure the QualifyLead request has adequate time to finish and make the fields we need available for access.

To deploy out, we use the eternally useful Ribbon Workbench to access the existing Qualify Lead button and add on a custom command that will fire alongside the default one:

As this post has hopefully demonstrated, overcoming challenges within CRM/D365CE can often result in different – but no less valid – approaches to achieve your desired outcome. Let me know in the comments below if you have found any other ways of modifying the default Lead Qualification process within the application.

The ability to implement trace logging within CRM plug-ins has been around since CRM 2011, so it’s something that CRM developers should be well aware of. Writing to the trace log is useful when a plug-in has failed or hit an exception, as the ErrorDetails.txt file (available to download from the error message box window) will contain a list of everything that has been written to the log up to that point. One issue with this is that, if a user encounters an error and does not choose to download this file, the information is lost – not so much of a problem if the exception can be reproduced, but this may not always be the case.

For those who are now working on CRM Online 2015 Update 1 or CRM 2016, a new feature has been added which further expands this capability – the Plug-in Trace Log. Plug-in exceptions can now be configured to write to a new system entity, containing full details of the exception, which can be accessed at any time in order to support retroactive debugging. The introduction of this feature means now is the best time to start using trace logging within your plugins, if you are not already. This week’s blog post will take a look at the feature in more detail, assessing its pros & cons and providing an example of how it works in practice.
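As an aside, because trace entries are stored as ordinary records, they can also be retrieved programmatically rather than through the interface. The snippet below is a rough sketch only – it assumes the entity’s logical name is plugintracelog and that the typename, messageblock and createdon columns are present on your version:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class PluginTraceLogExample
{
    //Retrieves the most recent Plug-in Trace Log entries for a given plugin type.
    public static void DumpRecentTraces(IOrganizationService service, string pluginTypeName)
    {
        QueryExpression query = new QueryExpression("plugintracelog")
        {
            ColumnSet = new ColumnSet("typename", "messageblock", "exceptiondetails", "createdon"),
            TopCount = 10
        };
        query.Criteria.AddCondition("typename", ConditionOperator.Equal, pluginTypeName);
        query.AddOrder("createdon", OrderType.Descending);

        foreach (Entity trace in service.RetrieveMultiple(query).Entities)
        {
            Console.WriteLine("{0}: {1}", trace.GetAttributeValue<DateTime>("createdon"),
                trace.GetAttributeValue<string>("messageblock"));
        }
    }
}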

So, before we begin, why would you want to implement tracing in the first place?

Using tracing as part of CRM Online deployments makes sense, given that your options from a debugging point of view are restricted compared to an On-Premise deployment. It is also, potentially, a lot more straightforward than using the Plugin Registration Tool to debug your plugins after the event, particularly if you do not have ready access to the SDK or to Visual Studio. Tracing is also useful in providing a degree of debugging from within CRM itself, by posting either your own custom error messages or feeding actual error messages through to the tracing service.

Just remember the following…

Writing to the tracing service does add an extra step that your plug-in has to overcome. Not so much of an issue if your plugin is relatively small but, the longer it gets and the more frequently you write to the service, the greater the potential performance impact. Use your best judgement when writing to the service: not every time you do something within the plugin, but where there is a potential for an error to occur. Writing to the trace log can also have an impact on your CRM storage capacity, something we will take a look at later on in this post.

Now that we have got that out of the way, let’s begin by setting up an example plugin!

How you start writing to the Tracing service depends on how you are implementing your plugin. If you have used the Visual Studio template, then simply use this line of code within the “TODO: Implement your custom Plug-In business logic.” section:

ITracingService tracingService = localContext.TracingService;

Otherwise, you will need to ensure that your plugin is calling the IServiceProvider, and then use a slightly longer code snippet to implement the service. An example of the code needed to set this up is as follows:

using System;

//Be sure to add references to Microsoft.Xrm.Sdk in order to use this namespace!

using Microsoft.Xrm.Sdk;

namespace MyPluginProject
{
    public class MyPlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            //Extract the tracing service for use in debugging sandboxed plug-ins.
            
            ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
        }
    }
}

Once you’ve implemented the ITracingService within your code, you can then write to the Trace Log at any time using the following snippet:

tracingService.Trace("Insert your text here...");

Activating Tracing

Even though we have configured our plugin for tracing, this does not automatically mean that our plugin will start writing to the log. First, we must configure the Plug-in and custom workflow activity tracing setting within the System Settings page:

TracingSettings

You have three options that you can set:

  • Off – Nothing will be written to the trace log, even if the plugin encounters an exception.
  • Exception – When the plugin hits an exception, then a trace will be written to the log.
  • All – Whenever the plugin is executed and the trace log is called, then a trace log record will be created. This is equivalent to Verbose logging.

As mentioned earlier, individual records will be written to CRM whenever the tracing service is called. It is therefore recommended only to turn on ‘All’ for temporary periods; leaving it on for ‘Exception’ may be useful when attempting to initially diagnose plugin errors. Review the amount of storage available to you on your CRM Online/On-Premise deployment in order to determine the best course of action.
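If you prefer, the setting can also be toggled programmatically, since it is stored against the Organization entity. The following is a minimal sketch, assuming the attribute is named plugintracelogsetting with option values 0 (Off), 1 (Exception) and 2 (All):

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class TracingSettingExample
{
    //Switches the "Plug-in and custom workflow activity tracing" setting for the organisation.
    //Assumed option values: 0 = Off, 1 = Exception, 2 = All.
    public static void SetTraceLevel(IOrganizationService service, int level)
    {
        //There is only ever one Organization record, so just grab its id.
        EntityCollection orgs = service.RetrieveMultiple(
            new QueryExpression("organization") { ColumnSet = new ColumnSet("organizationid") });

        Entity update = new Entity("organization") { Id = orgs.Entities[0].Id };
        update["plugintracelogsetting"] = new OptionSetValue(level);

        service.Update(update);
    }
}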

Tracing in Practice

Now that we’ve configured tracing in our CRM instance and we know how to use the Tracing service, let’s take a look at an example plugin. The below plugin will be set to fire on the Post-Operation event of the Update message on the Account entity. It will create a new Contact record, associate this Contact record to the Account and then populate the Description field on the Contact with some information from the Account record:

protected void ExecutePostAccountUpdate(LocalPluginContext localContext)
{
    if (localContext == null)
    {
        throw new ArgumentNullException("localContext");
    }

    //Extract the tracing service for use in debugging sandboxed plug-ins.
    ITracingService tracingService = localContext.TracingService;
    tracingService.Trace("Implemented tracing service successfully!");

    //Obtain the execution context from the service provider.
    IPluginExecutionContext context = localContext.PluginExecutionContext;

    //Get a reference to the Organization service.
    IOrganizationService service = localContext.OrganizationService;

    if (context.InputParameters.Contains("Target"))
    {
        //Confirm that Target is actually an Entity.
        if (context.InputParameters["Target"] is Entity)
        {
            Guid contactID;
            string phone;

            Entity account = (Entity)context.InputParameters["Target"];
            tracingService.Trace("Successfully obtained Account record " + account.Id.ToString());

            try
            {
                tracingService.Trace("Attempting to obtain Phone value...");
                phone = account["telephone1"].ToString();
            }
            catch (Exception error)
            {
                tracingService.Trace("Failed to obtain Phone field. Error Details: " + error.ToString());
                throw new InvalidPluginExecutionException("A problem has occurred. Please press OK to continue using the application.");
            }

            if (phone != "")
            {
                //Build our contact record to create.
                Entity contact = new Entity("contact");

                contact["firstname"] = "Ned";
                contact["lastname"] = "Flanders";
                contact["parentcustomerid"] = new EntityReference("account", account.Id);
                contact["description"] = "Ned's work number is " + phone + ".";

                contactID = service.Create(contact);

                tracingService.Trace("Successfully created Contact record " + contactID.ToString());
                tracingService.Trace("Done!");
            }
            else
            {
                tracingService.Trace("Phone number was empty, Contact record was not created.");
            }
        }
    }
}

After registering our plugin, and with tracing configured for “All” in our CRM instance, we can now see that our custom messages are being written to the Trace Log – both when we update the A. Datum Corporation (sample) record’s Phone field to a new value and when we clear the field value:

Plugin_1

Plugin_2

Plugin_3

Most importantly, we can also see that our test Contact record is being created successfully when we populate the Phone field with data 🙂

 

Plugin_4

Now, to see what happens when an error is invoked, I have modified the code above so that it is expecting a field that doesn’t exist on the Account entity:

Plugin_5

Now, when we attempt to update our Account record, we receive a customised Business Process Error message and window:

Plugin_6

And we can also see that the precise error message has been written to the trace log, at the point we specified:

Plugin_7

Last, but not least, for On-Premise deployments…

One thing to point out is that, if you are using On-Premise CRM 2016 (both 8.0 and 8.1), then for some reason the trace log will not work if you do not run your plugin within sandbox isolation mode. I’m not the only one experiencing this, according to this post on the Dynamics CRM Community forum. Switching my test plugin to sandbox isolation resolved this issue. A bit of a strange one and, as Srikanth mentions on the post, it is not clear if this is a bug or not.

Conclusions or Wot I Think

Trace logging is one of those things where time and place matter greatly. Implementing it within your code obsessively does not return much benefit, and could actually be detrimental to your CRM deployment. Used prudently and effectively, though, it can prove incredibly useful. The scenario where I can see it returning the most benefit is where your plugin makes a call to an external system: if an error is encountered during this process, you can use the Trace Log to capture and store the external application’s error message within CRM for further investigation. Trace logging can also prove useful in scenarios where an issue cannot be readily replicated within the system, by outputting error messages and the steps leading up to them within the Trace Log.
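To illustrate that first scenario, the outline below shows the general pattern. The endpoint URL, the use of WebClient and the error handling are purely illustrative assumptions, not a recommended integration design:

using System;
using System.Net;
using Microsoft.Xrm.Sdk;

public class ExternalCallPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        try
        {
            tracingService.Trace("Calling external endpoint...");

            //Purely illustrative endpoint - replace with the system you integrate with
            using (WebClient client = new WebClient())
            {
                string result = client.DownloadString("https://example.com/api/ping");
                tracingService.Trace("External call succeeded: {0}", result);
            }
        }
        catch (WebException ex)
        {
            //The external error ends up in the Plug-in Trace Log for later investigation
            tracingService.Trace("External call failed: {0}", ex.ToString());
            throw new InvalidPluginExecutionException("The external system could not be reached. Please try again later.", ex);
        }
    }
}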

In summary, when used in conjunction with other debugging options available to you via the SDK, Trace Logging can be a powerful weapon to add to your arsenal when debugging your code issues in CRM.

As part of a recent project I have been involved in, one of the requirements was to facilitate a bulk data import process via a single import spreadsheet, which would then create several different CRM Entity records at once. I was already aware of the Bulk Data Import feature within CRM, and the ability to create pre-defined data maps via the GUI interface; what I wasn’t aware of was that there is also a means of creating the very same data maps via the SDK or an .xml import. This was a pleasant surprise to say the least and, from the looks of it online, there are very few resources available for what is, arguably, a very powerful out of the box feature within Dynamics CRM. Expect a future blog post from me that dives a bit deeper into this subject, as I think the capabilities of this tool could fit a variety of differing requirements; if only the instructions within the SDK were a bit clearer when it comes to working with the raw data map .xml!
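For anyone wanting to experiment ahead of that post, the sketch below shows roughly what the SDK route looks like. It assumes the ImportMappingsImportMap and ExportMappingsImportMap messages from the SDK; treat it as an outline rather than production code:

using System;
using System.IO;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

public static class DataMapExample
{
    //Uploads a data map definition from an .xml file and returns the new Import Map's id.
    public static Guid ImportDataMap(IOrganizationService service, string xmlFilePath)
    {
        ImportMappingsImportMapRequest request = new ImportMappingsImportMapRequest
        {
            MappingsXml = File.ReadAllText(xmlFilePath),
            ReplaceIds = true
        };

        return ((ImportMappingsImportMapResponse)service.Execute(request)).ImportMapId;
    }

    //Exports an existing Import Map back out as XML - handy for building templates.
    public static string ExportDataMap(IOrganizationService service, Guid importMapId)
    {
        ExportMappingsImportMapRequest request = new ExportMappingsImportMapRequest
        {
            ImportMapId = importMapId,
            ExportIds = true
        };

        return ((ExportMappingsImportMapResponse)service.Execute(request)).MappingsXml;
    }
}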

Whilst attempting to perform a proof of concept test, I uploaded a data map .xml into CRM, which contained references to four different entities. When I then attempted to run the data import using a test file, I encountered a strange issue which meant I could not proceed with the import. This occurred when CRM attempted to map the record types specified as part of the data map:

DataMap_Error

Can you spot the two issues?

The first, more obvious, problem is the yellow triangle on the 3rd entity option and the fact that the Record Type is empty. Whereas the 3 other options clearly display the Display Names of the entities (e.g. Account, Lead), the Display Name of the problematic entity is not visible at all.

The more eagle-eyed readers may also have spotted the second, more serious issue: the Next button is greyed out, meaning we have no way of moving to the next step of the data import. With no means of figuring out exactly what may be causing the problem based on the above screen, you can potentially expect a long-haul investigation to commence.

As it turns out, the issue was simple and frustrating in equal measure: because the third entity on the list above had the exact same Display Name value as another entity within the system, CRM got confused and was unable to display and map to the correct entity. So, for example, let’s say you create a custom entity to record information relating to houses, called “Property”, with a Name of “new_property”. Without perhaps realising, CRM already has a system entity called “Property”, with a Name of “dynamicproperty”. It doesn’t matter that the Names of the entities are completely different; so long as the Display Names match, you will encounter the same issue highlighted above. So, after renaming the entity concerned with a unique Display Name (don’t forget to publish!), I was able to proceed successfully through the data import wizard.

So why is this frustrating? If, like me, you have worked with CRM for some time, your first assumption would be that the system always relies on the Name of objects when it comes to determining unique entities, attributes etc. The above appears to be the exception to the rule, and is frustrating because it throws out of the window any assumed experience when it comes to working under the hood of CRM. Perhaps it was (fairly) assumed that, because logical names must always be unique for entities, the data map feature could just use the Display Name as opposed to the Name when attempting to map record types. I would expect that CRM is performing some kind of query behind the scenes at the above stage of the Data Import wizard, and that the Display Name field value is included as a parameter for all entities included in the data map. In summary, I would suggest that this is a minor bug, but something that would be difficult to encounter as part of regular CRM use.

Hopefully the above post will help prevent anyone else from spending 6+ hours figuring out just what in the hell is going on 🙂 As a final side note, what this problem and solution demonstrates is an excellent best practice lesson: be sure to give your custom entities distinct Display Names.
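If you want to check an existing system for Display Name clashes before they bite, something along the lines of the following rough sketch (using the standard RetrieveAllEntitiesRequest metadata message) should do the trick:

using System;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

public static class DisplayNameCheckExample
{
    //Lists any entities that share a Display Name with another entity.
    public static void FindDuplicateDisplayNames(IOrganizationService service)
    {
        RetrieveAllEntitiesRequest request = new RetrieveAllEntitiesRequest
        {
            EntityFilters = EntityFilters.Entity,
            RetrieveAsIfPublished = true
        };

        RetrieveAllEntitiesResponse response = (RetrieveAllEntitiesResponse)service.Execute(request);

        var duplicates = response.EntityMetadata
            .Where(e => e.DisplayName != null && e.DisplayName.UserLocalizedLabel != null)
            .GroupBy(e => e.DisplayName.UserLocalizedLabel.Label)
            .Where(g => g.Count() > 1);

        foreach (var group in duplicates)
        {
            Console.WriteLine("Display Name '{0}' is used by: {1}",
                group.Key, string.Join(", ", group.Select(e => e.LogicalName)));
        }
    }
}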

The introduction of Excel Online within Dynamics CRM was one of those big, exciting moments within the application’s history. For Excel heads globally, it provides a familiar interface in which CRM data can be consumed and modified, without the need to take data out of CRM entirely in the process. It is also a feature that can easily be used by CRM Administrators in order to perform quick changes to CRM data. It’s also one of the benefits of using CRM Online over On-Premise, and probably something I should have included as part of my previous analysis on the subject.

As great as the feature is, like anything with CRM, it is subject to occasional issues in practice; particularly if you use bespoke security roles as opposed to the “out of the box” ones provided by Microsoft. We encountered an issue recently where one of our colleagues had a problem importing modified data from Excel Online back into CRM. Our colleague had no problem opening the data in Excel Online, with the problem only surfacing when they clicked the Save Changes to CRM button. The rather lovely looking error message looked something like this:

Unhandled Exception: System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=8.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]: Principal user (Id=0c6ff908-a6c9-e511-8144-c4346bac5e0c, type=8) is missing prvReadImportFile privilege (Id=fe46d775-ca5c-4a09-af93-99a133455306)Detail:

<OrganizationServiceFault xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">
  <ErrorCode>-2147220960</ErrorCode>
  <ErrorDetails xmlns:d2p1="http://schemas.datacontract.org/2004/07/System.Collections.Generic" />
  <Message>Principal user (Id=0c6ff908-a6c9-e511-8144-c4346bac5e0c, type=8) is missing prvReadImportFile privilege (Id=fe46d775-ca5c-4a09-af93-99a133455306)</Message>
  <Timestamp></Timestamp>
  <InnerFault i:nil="true" />
  <TraceText i:nil="true" />
</OrganizationServiceFault>

When approaching any type of error message for the first time, it can be quite daunting figuring out what it is saying. Fortunately, in this case, there is only one line we really need to be concerned about, which is the <Message>…</Message>. To translate to plain English, this line:

<Message>Principal user (Id=0c6ff908-a6c9-e511-8144-c4346bac5e0c, type=8) is missing prvReadImportFile privilege (Id=fe46d775-ca5c-4a09-af93-99a133455306)</Message>

Means:

I cannot complete this action for Joe Bloggs, because they are missing the Import Source File Read privilege!

(Note: To help translate the above, I made use of the Security role UI to privilege mapping table on MSDN – a handy link to have in your browser favourites)

Giving the user just this privilege did not resolve the issue, producing a completely different error message in the process. Rather than spend an inordinate amount of time replicating the action over and over again, we did a quick Google search to see if we could find a list of the minimum permissions required to complete the action. We were directed towards this forum post, with an answer from CRM MVP Jason Lattimer on what permissions were required in order to resolve the error message:

Required permissions:

* Data Import (all)
* Data Map (all)
* Import Source File (all)
* Web Wizard (all)
* Web Wizard Access Privilege (all)
* Wizard Page (all)

Problem solved, you’d think? Well, unfortunately, in this case not. At first we thought that things were working fine, as no error message cropped up. When we then monitored the data import job in the background, however, it was stuck at Parsing. At this point, we were really beginning to struggle to think of how to resolve the problem, so we rather desperately took a look at other, successfully completed, System Jobs to see if there was anything obvious we could observe. We noticed that successfully completed Data Import jobs had 3 System Job Tasks associated with them, whereas our stuck job had only 1. At this stage, we asked: could we be missing privileges on the System Job entity? And, lo and behold, when we took a look at the user’s security role, there were no privileges configured for System Jobs. After a bit more trial and error, adding permissions one by one onto this role, we saw that the Data Import job ran successfully!

So, just to confirm for those who may encounter the same problem in future, the full list of permissions required to get Excel Online Data Import working successfully comprises the ones highlighted above plus the following additional privileges:

Customization Tab – System Job entity:

  • Create: Business Unit Level
  • Read: Business Unit Level
  • Write: Business Unit Level
  • Append: Business Unit Level
  • Append To: Business Unit Level
  • Assign: Business Unit Level

e.g.

SystemJobMinimumPrivileges
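For completeness, the same privileges could also be granted to a role programmatically rather than through the security role UI. The sketch below is an assumption-heavy outline – in particular, the prv*AsyncOperation privilege names are assumptions on my part and should be verified against the privilege entity in your own organisation before use:

using System;
using System.Linq;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class SystemJobPrivilegesExample
{
    //Grants the System Job (asyncoperation) privileges at Business Unit depth to a security role.
    //The privilege names below are assumptions - confirm them against the privilege entity first.
    public static void AddSystemJobPrivileges(IOrganizationService service, Guid roleId)
    {
        string[] privilegeNames =
        {
            "prvCreateAsyncOperation", "prvReadAsyncOperation", "prvWriteAsyncOperation",
            "prvAppendAsyncOperation", "prvAppendToAsyncOperation", "prvAssignAsyncOperation"
        };

        QueryExpression query = new QueryExpression("privilege") { ColumnSet = new ColumnSet("name") };
        query.Criteria.AddCondition("name", ConditionOperator.In, privilegeNames);

        RolePrivilege[] rolePrivileges = service.RetrieveMultiple(query).Entities
            .Select(p => new RolePrivilege { PrivilegeId = p.Id, Depth = PrivilegeDepth.Local })
            .ToArray();

        service.Execute(new AddPrivilegesRoleRequest
        {
            RoleId = roleId,
            Privileges = rolePrivileges
        });
    }
}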

What this problem (and the solution) I think demonstrates is the best way in which to approach day-to-day problems that may crop up within CRM:

  • It is reasonable to assume from the outset that the problem is due to a lack of security permissions; try not to over-complicate matters early on by assuming it could be something completely different. For example, if the task or action that you are trying to perform in CRM can be performed using an account/security role with greater permissions, then this will tell you straight away what the problem is.
  • Having good “under the hood” knowledge of CRM is always helpful in a scenario like this, but this may not always be possible for those of you who work with CRM sparingly. To help in this scenario, we can refer to some of the fantastic resources made available online by the CRM community. Ben Hosking has a great blog post on the importance of thinking in entities when it comes to working within CRM, something which I think applies well to this particular example. By understanding that the System Job is a system entity, we can logically assume that it has its own set of required permissions.
  • In diagnosing the issue in this case, we were able to refer to some of the previous System Job records in the system. As part of this, we observed that a System Job that completed successfully had 3 job records related to it. This immediately told us that there was something wrong with the user’s access to the entity in question (going back to the above, the System Job is, after all, a system Entity). Sometimes, understanding the difference between an action when it works and when it doesn’t can give you the information you need to decide what to investigate next.
  • And, last but not least, access to and the ability to use a search engine is always very helpful 🙂