
Exam MB-400 Revision Notes: Building, Deploying & Debugging Plug-ins using C#

Welcome to the eleventh post in my series focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. For those following the series, apologies again for the mini-hiatus. In fact, it’s been so long that Microsoft has now announced that MB-400 will be going the way of the Dodo at the end of the year. Despite this, I will be continuing and, all being well, finishing off this blog series for the following reasons:

  • People may still choose to sit the current exam while they still can. For this reason, the series may yet have some value once completed entirely.
  • Looking at the Skills Measured document for the new replacement PL-400: Microsoft Power Platform Developer exam, there is a lot of crossover in content. Therefore, the existing posts and accompanying videos may still have some merit in the new state of play. I’ll be able to make an appropriate determination of this after Microsoft releases PL-400 and I’ve been able to sit it. But, hopefully, some of this content will remain useful for at least another year or so.
  • I set out to run this series to completion, no matter what it takes 🙂

So with this in mind, I hope you continue to stick around for future posts and that they provide you with a useful tool as part of any revision or learning you are doing.

Last time around in the series, we took a deep-dive look into command buttons, as well as demonstrating how valuable the Ribbon Workbench tool is in helping us to fine-tune aspects of a model-driven Power App’s interface. This topic rounded off our discussion of the Extend the user experience area of the exam, meaning that we now move into the Extend the platform section. This area has equal weighting to Extend the user experience (15-20%), and the first topic concerns how we Create a plug-in. Specifically, candidates must demonstrate knowledge of the following:

Create a plug-in

  • debug and troubleshoot a plug-in
  • develop a plug-in
  • use the global Discovery Service endpoint
  • optimize plug-ins for performance
  • register custom assemblies by using the Plug-in Registration Tool
  • create custom actions

Plug-ins have been a mainstay within Dynamics 365, and its predecessor Dynamics CRM, for well over a decade now. Longstanding developers are, therefore, probably going to be well-versed in how to build them, but it’s always useful to get a refresher of old topics. With that in mind, let’s dive in!

As with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam. I would also recommend that you have a good, general knowledge of how to work with the C# programming language before approaching plug-in development for the first time. Microsoft has published a whole range of introductory material to help you learn the language quickly.

What is a Plug-in?

In the series so far, we’ve already touched upon the following functional tools that developers may leverage when dealing with complex business requirements:

  • Business Rules: These provide an excellent mechanism for handling simple logic, targeting both model-driven app forms and also platform-level operations.
  • Power Automate Flows: Using this, you can typically go the extra mile when compared with classic workflows, thereby allowing you to integrate multiple systems when processing asynchronous actions.
  • JavaScript: For situations where Business Rules can’t meet your particular requirement, and you need specific logic to trigger as a user is working on a model-driven form, this is the best tool in your arsenal.

All of these are great, but there may be situations where they are unsuitable, due to the sheer complexity of the business logic you are trying to implement. Also, some of the above tools do not support synchronous actions (i.e. ones that happen straight away, as the user creates, updates, or deletes records in the application). Finally, it may be that you need to work with specific elements of the SDK that are not exposed straightforwardly via alternative routes. In these situations, a custom-authored plug-in is the only solution you can turn to.

So what are they then? Plug-ins allow developers to write either C# or VB.NET code that is then registered within the application as a .NET Framework class library (DLL) and executed based on specific conditions you specify within its configuration. For example: when a user creates a Contact, retrieve the details of the parent Account record and update the Contact so that these details match. Developers will use Visual Studio when authoring a plug-in, typically using a class library project template. The Dynamics 365 SDK provides several assemblies that expose the variety of different operations that a plug-in can support. Some of the other things that plug-ins support include:

  • Both synchronous and asynchronous execution.
  • Custom un-secure/secure configuration properties that can modify the behaviour of a plug-in at runtime. For example, by providing credentials to a specific environment to access.
  • Pre and Post Entity Images, snapshots of how a record and its related attributes looked before and after a platform level operation occurs.
  • For specific operations, such as Update, plug-ins can also support filtering attributes - basically, a list of defined fields that, when modified, will cause the plug-in to trigger.
  • Being able to specify the execution order for one or multiple plug-in steps. This capability can be useful when you need to ensure a set of steps execute in your desired order.
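To make this more concrete, here is a minimal sketch of what the Contact/Account example above might look like in code. It assumes a Step registered on the Create message of the Contact entity in the PreOperation stage; the namespace and class name are hypothetical, while the entity and field logical names come from the default schema:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

namespace MyCompany.Plugins // hypothetical namespace
{
    public class ContactAddressPlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider
                .GetService(typeof(IPluginExecutionContext));

            // "Target" holds the record that the Create message is operating on.
            if (!context.InputParameters.Contains("Target") ||
                !(context.InputParameters["Target"] is Entity contact) ||
                contact.LogicalName != "contact")
                return;

            // Nothing to do if no parent Account has been supplied.
            if (!contact.Contains("parentcustomerid"))
                return;

            // Obtain an organization service that runs as the triggering user.
            var factory = (IOrganizationServiceFactory)serviceProvider
                .GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = factory.CreateOrganizationService(context.UserId);

            // Retrieve the parent Account and copy its main phone onto the Contact.
            // Because this runs PreOperation, modifying Target here requires no
            // additional Update call.
            var accountRef = (EntityReference)contact["parentcustomerid"];
            Entity account = service.Retrieve("account", accountRef.Id,
                new ColumnSet("telephone1"));

            if (account.Contains("telephone1"))
                contact["telephone1"] = account["telephone1"];
        }
    }
}
```

This is a sketch rather than a production implementation; you would normally add tracing and error handling, as covered later in the post.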

These days, thanks to tools such as Business Rules and Power Automate, plug-ins have become less important, and you would typically want to avoid reaching for them straight out of the gate. However, they still have a place and, when used appropriately, may be the only mechanism you can resort to when implementing complicated business processes.

Understanding Messages, the Execution Pipeline & Steps

Before we start diving into building a plug-in for the first time, it is useful to provide an overview of the three core concepts that every plug-in developer needs to know:

  • Messages: These define a specific, platform-level operation that the SDK exposes. Some of the more commonly used Messages within the application include Create, Update, or Delete. Some system entities may have their own set of unique Messages; CalculatePrice is an excellent example of this. From a plug-in perspective, developers essentially “piggyback” onto these operations as they are performed and inject their custom code. The following Microsoft Docs article provides a detailed list of all supported messages and their corresponding entities and is well worth a read as part of preparing for the exam.
  • Execution Pipeline: As a user triggers a particular message, the application processes it using a defined set of stages, known as the execution pipeline. These are discussed in detail on this Microsoft Docs article, but the key ones we need to be aware of are:
    • PreValidation: At this stage, the database transaction has not started. Also, the application has not yet performed any appropriate security checks; potentially meaning that the operation may fail if the user does not have the correct security privileges.
    • PreOperation: Here, the database transaction has already started. The platform knows at this stage that no security constraints are preventing the operation from completing, but the transaction may still fail for other reasons.
    • PostOperation: Although the database transaction has not yet completed at this stage, the core operation of the message (executed within the MainOperation stage) will have finished.
  • A failure at any of these stages will cause the database transaction to roll back entirely, returning the record(s) to their original state. From a plug-in perspective, the execution pipeline is exposed to developers as the stages where custom code can execute, which provides a high degree of flexibility and capability. As a general rule of thumb, developers would use each of these stages under the following circumstances:
    • PreValidation: Use this stage to perform checks to cancel the operation.
    • PreOperation: This stage is useful when you need to modify any values based on some kind of business logic. Doing this at this stage also avoids triggering another platform Message (such as a second Update).
    • PostOperation: Use this stage for when you need to carry out additional logic not related to the current record or potentially provide additional information back to the caller after the platform completes the core operation.
  • Developers will typically need to give some thought to the execution pipeline stage that is most appropriate, based on the requirements being worked with.
  • Steps: Simply writing a plug-in class and deploying the library is not sufficient to trigger your custom logic. Developers must also provide a set of “instructions”, commonly known as Steps, that tell the application when and where to execute your custom code. It is here that both the Message and Execution Pipeline stage come into play, and you always specify this information when creating a Step. Other information you can specify includes the execution order, whether the plug-in executes synchronously or asynchronously, and the display name of the Step when viewed within the application.

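The Message and pipeline stage chosen on a Step surface inside your code through the IPluginExecutionContext. As a rough sketch (the stage numbers are the SDK’s documented values), a plug-in can verify it was invoked the way its author intended:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Sketch: defensive checks inside Execute that the registered Step matches
// what the code was written for. Stage values per the SDK:
// 10 = PreValidation, 20 = PreOperation, 40 = PostOperation.
public class GuardedPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // MessageName mirrors the Message chosen when registering the Step.
        if (!string.Equals(context.MessageName, "Update", StringComparison.OrdinalIgnoreCase))
            return;

        // Stage mirrors the execution pipeline stage chosen for the Step.
        if (context.Stage != 20)
            return;

        // ...custom logic for the Update / PreOperation combination goes here...
    }
}
```

Guards like these cost nothing at runtime and stop a mis-registered Step from silently corrupting data.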
We will see shortly how these topics come into play, as part of using the Plug-in Registration Tool.

Building Your First Plug-In: Pre-Requisites

Before you start thinking about building your very first plug-in, you need to make sure that you have a few things installed onto your development machine:

  • Visual Studio: I would generally recommend using either Visual Studio 2017 or 2019 when developing for Dynamics 365 online. If you don’t have a Visual Studio/MSDN subscription, then the Community Edition can be used instead.
  • A Dynamics 365 / Common Data Service Tenant: Because where else are you going to deploy out and test your plug-in? 🙂
  • Knowledge of C#  / VB.NET: Attempting to write a plug-in for the first time without at least a basic grasp of one of these languages will impede your progress.
  • Plug-in Registration Tool: To deploy your plug-in out, you will need access to this tool as well. We will cover this off later in the post.

Demo: Creating a Basic Plug-in using Visual Studio 2019 & C#

The best way to learn how to create a plug-in is to see someone build one from scratch. In the YouTube video below, I talk through how to build a straightforward plug-in using Visual Studio 2019 and C#:

For those who would prefer to read a set of instructions, then this Microsoft Docs article provides a separate tutorial you can follow instead.

Using the Plug-in Registration Tool

Once you’ve written your first plug-in, you then need to consider how to deploy this out. In most cases, you will use the Plug-in Registration Tool to accomplish this. Available on NuGet, this lightweight application supports the following features:

  • Varied access options for when working with multiple Dynamics 365 environments, both online and on-premise.
  • The ability to register one or multiple plug-in assemblies.
  • The registering of plug-in steps and images, including the various settings, discussed earlier.
  • Via the tool, you can install the Plug-in Profiler, an essential tool when it comes to remote debugging your plug-ins; more on this capability later.

The Plug-in Registration Tool is also necessary when deploying other extensibility components, including Service Endpoints, Web Hooks, and Custom Data Providers. We will touch upon some of these later in the series. For this topic area and the exam, you must have a good general awareness of how to deploy and update existing plug-in assemblies.

Demo: Deploying a Basic Plug-in using the Plug-in Registration Tool

In this next video, I’ll show you how to take the plug-in developed as part of the previous video and deploy it out using the Plug-in Registration Tool:

There is also a corresponding Microsoft Docs tutorial that covers off these steps too.

Debugging Options for Plug-ins

Plug-ins deployed to Dynamics 365 online must always run within sandbox (partial trust) execution mode. This imposes several limitations (some of which I’ll highlight in detail later on), the main one being that it greatly hinders the ability to debug deployed plug-ins easily. To get around this, Microsoft provides two mechanisms that developers can leverage:

  • Plug-in Trace Logging: Using this, developers can write out custom log messages into the application, at any point in their code. This can be useful in identifying the precise location where a plug-in is not working as expected, as you can output specific values to the log for further inspection. You can also utilise them to provide more accurate error messages for your code that you would not necessarily wish to show users as part of a dialog box. Getting to grips with trace logging is easy - it’s just a few lines of code that you need to add to your project - and you can find out more about how to get started with it on the Microsoft Docs site.
  • Plug-in Registration Tool & Profiling: While trace logging is undoubtedly useful, there will be situations where you need something more. For example, you may want to breakpoint code, inspect values/properties as they are processed, and determine when your plug-in hits specific conditions or error messages. For these situations, the Profiler comes into play, allowing developers to “replay” their code execution using Visual Studio and the Plug-in Registration Tool. The Plug-in Profiler, mentioned earlier, is a mandatory requirement for this and must be deployed to your instance first. From there, you can generate a profile file that allows you to replay execution within Visual Studio.
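Wiring up trace logging really is only a few lines. A minimal sketch, combining the tracing service with a friendlier error surfaced to the user (the message text is illustrative):

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class TracedPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // The tracing service writes to the Plug-in Trace Log when trace
        // logging is enabled in the environment's system settings.
        var tracing = (ITracingService)serviceProvider
            .GetService(typeof(ITracingService));
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        tracing.Trace("Execute started: message={0}, entity={1}, depth={2}",
            context.MessageName, context.PrimaryEntityName, context.Depth);

        try
        {
            // ...business logic goes here...
        }
        catch (Exception ex)
        {
            // Keep the full detail in the trace log...
            tracing.Trace("Plug-in failed: {0}", ex.ToString());

            // ...while showing the user something they can act on.
            throw new InvalidPluginExecutionException(
                "Something went wrong while processing this record. Please contact your administrator.", ex);
        }
    }
}
```

Note that trace output is only persisted when the operation fails or when the Plug-in Trace Log is switched on in the environment settings.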

Developers will typically use both of these debugging methods in tandem when trying to figure out issues with their code. The first option provides a friendly, unobtrusive mechanism of inspecting how a plug-in has got on, with the Profiler acting as the nuclear option when the problem cannot be discerned easily via trace logging alone. Understanding the benefits/disadvantages of both and how to use them will be essential as part of your exam preparation. With this in mind, check out the two demo videos below that show you how to work with these features in-depth:

Demo: Debugging a Basic Plug-in using Trace Logging

Demo: Debugging a Basic Plug-in Using the Plug-in Registration Tool

General Performance / Optimization Tips

Anyone can build and deploy a plug-in, but it can take some time before you can do this well. Here are a few tips that you should always follow to ensure your plug-ins perform well when deployed out:

  • Always keep in mind some of the limitations as part of sandbox execution for your plug-ins, including:
    • The 2 minute limit on execution time.
    • Restrictions on the use of specific third-party DLLs, such as Newtonsoft.Json.
    • Various restrictions around accessing operating system level information, such as directories, system state etc.
  • For situations where sandbox limitation will cause issues in executing your business logic, you will need to consider moving away from a plug-in and adopting another solution.
  • Filtering attributes provide a great way of ensuring your code only executes when you need it to. You should always use these wherever possible.
  • Make sure to disable any plug-in profiling and remove the Profiler solution once you are finished. Active profiles can considerably slow down performance, and the Profiler solution can also introduce unintended dependencies on core entities, causing difficulties when moving changes as part of a solution file.
  • The Solution Checker provides an excellent mechanism for quality checking your code and can give some constructive recommendations on where your plug-in can be improved. You should always run this at least once before moving your plug-in out into other environments.
  • Read through the following Microsoft Docs article and take care to follow the suggestions it outlines.
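Two of the cheapest optimizations above can be expressed in a few lines inside Execute. The field name below is illustrative; the Depth check guards against recursive triggering when your plug-in’s own data operations fire further Steps:

```csharp
// Sketch: inexpensive guards that keep a plug-in fast, placed at the
// top of Execute. Assumes 'serviceProvider' as passed into Execute.
var context = (IPluginExecutionContext)serviceProvider
    .GetService(typeof(IPluginExecutionContext));

// Depth increments each time one plug-in's operations trigger another.
// Bailing out early prevents recursive loops from eating the 2 minute budget.
if (context.Depth > 1)
    return;

if (!context.InputParameters.Contains("Target"))
    return;

var target = (Entity)context.InputParameters["Target"];

// Even with filtering attributes registered on the Step, confirm the field
// was actually supplied before doing expensive work such as extra retrieves.
if (!target.Contains("telephone1"))
    return;
```

Filtering attributes on the Step remain the first line of defence; these checks simply make the code robust if the registration ever drifts.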

Don’t Forget…Custom Actions & Global Discovery Service Endpoint

As Microsoft has included them on the exam specification, it’s worth talking about these two topics briefly. However, only a general awareness of them should be sufficient, and I wouldn’t devote too much of your revision time towards them.

We’ve already looked at Messages in-depth and seen how they can act as a “gateway” for developers to bolt on specific logic when an application-level event occurs. Custom Actions allow you to take this a step further, by creating custom Messages that are exposed via the SDK/Web API. For example, you could combine the Create/Update Messages of the Lead/Opportunity entities into a new message called SalesProgress. Plug-ins or operations targeting the Web API could then work with or trigger actions based on this. Custom Actions have been available within the application for many years now, and many developers argue that they are one of the most underrated features in Dynamics 365. They are also fully supported for use within Power Automate flows. In short, they can be incredibly useful if you need to group multiple default messages into a single action that can then be called instead of each one.
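Calling a Custom Action from code looks just like executing any other Message. A sketch using the late-bound OrganizationRequest class, where the action name, its Target, and its input/output parameters are all hypothetical, and an IOrganizationService named service plus a leadId Guid are assumed to be in scope:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Invoke a hypothetical custom action "new_SalesProgress" bound to the
// lead entity, passing one input parameter and reading one output.
var request = new OrganizationRequest("new_SalesProgress")
{
    ["Target"] = new EntityReference("lead", leadId),
    ["Stage"] = "Qualified"
};

OrganizationResponse response = service.Execute(request);

// Output parameters come back in the Results collection.
var result = (string)response.Results["Result"];
```

Because the action is just another Message, you can also register plug-in Steps against it, exactly as you would for Create or Update.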

Finally, it is worth touching upon how developers use the Global Discovery Service endpoint from a plug-in standpoint, but we must first cover the concepts of early-binding and late-binding. In the video demos above, I wrote all of the code using the late-binding mechanism. What this means is that, instead of declaring an object for the specific entity I wanted to work with (such as Contact), I used the generic Entity class and told the code which entity it is supposed to represent. This is fine and gives me some degree of flexibility, but it ultimately means that I won’t detect any issues with my code (such as an incorrect field name) until runtime. Also, I have no easy way to see how my entity looks using IntelliSense; instead, I must continuously refer back to the application to view these properties. To get around this, we can use a mechanism known as early-binding, where all of the appropriate entity classes are generated within the project and referenced accordingly. The CrmSvcUtil.exe application provides a streamlined means of creating these early-bound classes and, as you might have guessed, the Global Discovery Service endpoint helps you locate the correct environment to connect to when generating them. There have been many heated debates online regarding which mechanism is better. While early-binding does afford some great benefits, it adds overhead to your development cycle, as you must re-run the CrmSvcUtil application each time your entities change within Dynamics 365. All I would recommend is to try both options, identify the one that works best for you and, most importantly, adopt your preferred approach consistently.
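The difference between the two binding styles is easiest to see side by side. A sketch, assuming an IOrganizationService named service is in scope and that a Contact class has been generated with CrmSvcUtil.exe:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Late-bound: the generic Entity class; logical names are plain strings,
// so a typo such as "lastnme" only surfaces at runtime.
var lateBound = new Entity("contact");
lateBound["lastname"] = "Smith";
Guid lateId = service.Create(lateBound);

// Early-bound: the Contact class generated by CrmSvcUtil.exe gives
// compile-time checking and IntelliSense over the same attributes.
var earlyBound = new Contact { LastName = "Smith" };
Guid earlyId = service.Create(earlyBound);
```

Both calls produce identical requests to the platform; the trade-off is purely a development-time one.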

Plug-ins can help when modelling complicated business logic that is impossible to achieve via other functional tools within the Power Platform. I hope this post has provided a good insight into their capabilities and helps you with your revision. In the next post, I’ll show you how you can extend Power Automate and Power Apps via custom API connectors, thereby allowing you to connect to a myriad of different systems.
