Exam PL-400 Revision Notes: Working with the Microsoft Dataverse Web API and Processing Workloads

Welcome to the seventeenth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. Last week, we took a deep dive into custom connectors, a powerful feature within Power Automate and canvas Power Apps that allows us to expose our bespoke APIs for consumption within these services. Today, we finish off our discussion of the Extend the platform area of the exam by discussing how we can Use platform APIs and Process workloads. For these subjects, Microsoft expects candidates to demonstrate knowledge of the following:

Use platform APIs

  • interact with data and processes by using the Dataverse Web API or the Organization Service
  • implement API limit retry policies
  • optimize for performance, concurrency, transactions, and batching
  • query the Discovery service to discover the URL and other information for an organization
  • perform entity metadata operations with the Web API
  • perform authentication by using OAuth

Process workloads

  • process long-running operations by using Azure Functions
  • configure scheduled and event-driven function triggers in Azure Functions
  • authenticate to the Power Platform by using managed identities

When we refer to the Web API, we mean the one offered by Microsoft Dataverse, which allows developers to execute various operations targeting table rows, organisation settings or customisations. The Web API is a powerful tool in a developer’s arsenal when performing integrations, and having a good understanding of it is beneficial not just for exam PL-400 but also more generally. With this in mind, let’s look at what it is and the types of things we can do with it.

As with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.

Web API Overview

There will always be situations where you need to perform complex integration activities involving Microsoft Dataverse. Wherever possible, you should consider how to use tools such as Power Automate flows to meet these needs, as they provide a much simpler, streamlined mechanism for accommodating what may, at first look, appear to be a problematic integration between two systems. Despite their capabilities, Power Automate flows do have their limits. For example, if you’re developing a bespoke .NET application that needs to update data in the system, having to pass this through a flow can make your solution unnecessarily convoluted. This type of implementation also raises several security concerns, as you’d need to expose an HTTP endpoint with a simplified security layer. There may also be situations where you need to read various metadata properties from your Microsoft Dataverse tables so that you can replicate them into an external system that’s synchronising data. For these situations, and for any others where a Power Automate flow just won’t cut it, the Web API represents the most effective route for interacting with the application.

The main benefit of the Web API is that it utilises open standards - namely, the Open Data Protocol (OData) version 4.0 standard - which a wide variety of programming languages and tools support. Developers can use it to perform RESTful API requests, using HTTP operations they will already commonly understand. What’s more, working with the Web API does not require a complicated set of integrated development environments (IDEs); indeed, we can run many Web API requests using just a modern internet browser. The Web API supports pretty much every type of CRUD (Create, Read, Update or Delete) operation and, where necessary, requests conform to the established security model that Microsoft Dataverse provides us. For example, if the user calling the API does not have delete permissions on the Contact table, then this operation will be blocked if attempted. In short, the Web API is a powerful tool at your disposal when performing deep integrations with external systems or in meeting requirements that you cannot achieve via other features available within the Power Platform.

Handling Authentication

As with any Software as a Service (SaaS) solution, developers will need to provide a valid set of authentication credentials if they wish to work with the Web API. Much like how we access the application via the user interface, Microsoft controls Web API access via Azure Active Directory (AAD) and, more accurately, through the OAuth 2.0 protocol. As such, developers have flexible means to authenticate into the Web API, which can also satisfy any security concerns. All requests going into the Web API require a valid access token generated from AAD. To create this, developers must do a few things within their AAD tenant, including setting up an App Registration. From there, additional configuration may be required, depending on your scenario.

Microsoft provides several Azure Active Directory Authentication Library (ADAL) client libraries, covering various platforms, which provide a streamlined mechanism for generating the appropriate access tokens for authenticating. Use these to your advantage.
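As a rough illustration of what token acquisition can look like in code, the C# sketch below uses the Microsoft Authentication Library (MSAL), the successor to ADAL, to acquire a token interactively for a Dataverse environment. This is a minimal sketch only (runnable as top-level statements in a modern C# console app); the client ID, redirect URI and environment URL are placeholders you would swap for your own App Registration and environment details:

using Microsoft.Identity.Client;

// Placeholder values - replace with your own App Registration and environment details.
var clientId = "<App Registration Client ID>";
var environmentUrl = "https://<Org Name>.crm.dynamics.com";

var app = PublicClientApplicationBuilder
    .Create(clientId)
    .WithAuthority("https://login.microsoftonline.com/common")
    .WithRedirectUri("http://localhost")
    .Build();

// Acquire a token interactively, scoped to the target Dataverse environment.
var result = await app.AcquireTokenInteractive(new[] { $"{environmentUrl}/.default" })
    .ExecuteAsync();

// The AccessToken value is then passed as a Bearer token on every Web API request.
var accessToken = result.AccessToken;

Other OAuth grant types (such as client credentials for unattended, service-to-service scenarios) follow the same pattern but use a confidential client and an application user instead.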

Demo: Authenticating into the Microsoft Dataverse Web API

To better understand the steps involved when authenticating into the Web API using the implicit grant flow, check out the video below, where I take you through each stage:

Using the Discovery URL

For most Microsoft Dataverse deployments, an organisation will typically have multiple environments set up on the same tenant, each serving a specific purpose - perhaps one environment for testing, another for production and a backup of production for upgrade testing. Developers, therefore, sometimes need to inspect and determine the correct environment details that they want to connect to. It may also be desirable for your bespoke application to automatically present the list of all available environments to a user for selection, based solely on their credentials, rather than requiring a valid URL that the user may not know.

To address both of these needs, Microsoft provides developers with two specific Discovery URLs that we can use to interrogate details about all instances that the authenticated user can access. The first of these is the global discovery URL. This is the one Microsoft recommends developers use from March 2020 onwards, mainly because, if you have a multi-region deployment of Microsoft Dataverse, details of those instances will also be returned. The URL is as follows:

https://globaldisco.crm.dynamics.com/api/discovery/v2.0/Instances

By using a valid access token and accessing this endpoint via a GET request, you can list full details for all instances the user account has access to. And, because it is an OData endpoint, we can execute specific queries to return just the information we need.
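As a hedged example of the above, and reusing the accessToken value acquired in the earlier authentication sketch, the following C# snippet queries the global discovery endpoint and uses $select to bring back just each instance's friendly name and URL (FriendlyName and Url being two of the properties the service exposes):

using System;
using System.Net.Http;
using System.Net.Http.Headers;

using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);
    client.DefaultRequestHeaders.Accept.Add(
        new MediaTypeWithQualityHeaderValue("application/json"));

    // List every instance the signed-in user can access, returning just the name and URL.
    var response = await client.GetAsync(
        "https://globaldisco.crm.dynamics.com/api/discovery/v2.0/Instances?$select=FriendlyName,Url");
    response.EnsureSuccessStatusCode();

    Console.WriteLine(await response.Content.ReadAsStringAsync());
}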

The second URL available is known as the regional discovery URL. As its name indicates, these are scoped to a specific Microsoft Dataverse geographic location and, as such, will only return details of organisations within that particular region. Other than that, the endpoint is similar to the global one, but it returns a reduced subset of data. Microsoft announced in March 2020 that this URL was deprecated, and it has since been removed entirely.

Demo: Working with the Microsoft Dataverse Global Discovery Endpoint

In this next video, I’ll show you how to work with the Global Discovery URL to return information relating to your Microsoft Dataverse instances:

Working with the Web API

After discovering the URL for your organisation, you can then determine the precise Web API endpoint that will accept your requests. The format of this URL will generally resemble the following:

https://<Org Name>.api.<Region>.dynamics.com/api/data/v9.1/

Microsoft provides some further details that may assist you in constructing this for your specific scenario.

Much like the discovery URLs, the Web API is a fully compliant OData v4 endpoint, providing us with a great degree of flexibility in terms of the types of requests it will accept. For example, the following URL, issued as a GET request, would return the top 25 Account rows in the system, ordered by the Created On attribute:

https://<Org Name>.api.<Region>.dynamics.com/api/data/v9.1/accounts?$top=25&$orderby=createdon
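Sticking with the HttpClient approach from the discovery example, a minimal sketch of issuing that query from C# might look like the following - note the standard OData headers, which we will come back to under the performance tips later on:

using (var client = new HttpClient())
{
    // Substitute your own organisation's Web API base URL.
    client.BaseAddress = new Uri("https://<Org Name>.api.<Region>.dynamics.com/api/data/v9.1/");
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);
    client.DefaultRequestHeaders.Accept.Add(
        new MediaTypeWithQualityHeaderValue("application/json"));
    client.DefaultRequestHeaders.Add("OData-Version", "4.0");
    client.DefaultRequestHeaders.Add("OData-MaxVersion", "4.0");

    // Return the top 25 accounts, ordered by the Created On column.
    var response = await client.GetAsync("accounts?$top=25&$orderby=createdon");
    response.EnsureSuccessStatusCode();

    var json = await response.Content.ReadAsStringAsync();
    Console.WriteLine(json);
}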

Having a good awareness of all possible types of operations supported by the Web API will be essential if you plan to take the PL-400 exam. So let’s explore some of the supported operation types:

  • Creating Rows: We can add new rows for any table type that the user calling the API has permission to create (a simple sketch of this follows this list). We can also create related table rows as part of the same operation, enforce any relevant duplicate detection rules and return full details of any new table row to the caller.
  • Retrieving Rows: As we have seen already, we can return multiple different table rows, but also specific ones as well. We can even enable some valuable options, such as returning formatted values for choices/lookups, extended properties from related rows (such as a specific attribute) or return data based on an alternate key value.
  • Update or Delete Rows: These do pretty much as you’d expect them to, but you have some convenient options available that allow you to update or remove the value of a single attribute. Also, you can perform Upsert operations as well, i.e. insert the row if it does not exist; otherwise, update it.
  • Perform Association/Disassociation Actions: Rather than performing an Update operation, you can instead call specific methods on the API to create or remove table relationships.
  • Merge Rows: This action allows us to merge Account, Contact, Case, and Lead rows via the Web API, using the same mechanism available within the user interface. Other table types are not supported.
  • Call Functions: Microsoft Dataverse exposes a range of different bound and unbound functions, such as the CalculateRollupField function, most of which can be called directly from the Web API.
  • Call Actions: We touched upon Actions briefly before in the series when looking at plug-ins. Similarly to Functions, we can call bound and unbound actions through the Web API.
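As promised under Creating Rows above, here is a simple, hedged sketch of a create operation, continuing with the same configured HttpClient (System.Text and System.Linq usings assumed). The column values are arbitrary; by default, the platform responds with the new row's URI in the OData-EntityId header, and you can add a Prefer: return=representation header if you want the full row back instead:

// Create a new account row; the JSON property names are the table's column logical names.
var newAccount = new StringContent(
    "{ \"name\": \"A. Datum Corporation\", \"telephone1\": \"01234 567890\" }",
    Encoding.UTF8,
    "application/json");

var createResponse = await client.PostAsync("accounts", newAccount);
createResponse.EnsureSuccessStatusCode();

// The URI of the newly created row comes back in the OData-EntityId response header.
var newRowUri = createResponse.Headers.GetValues("OData-EntityId").First();
Console.WriteLine(newRowUri);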

These provide a summary of the most common types of operations you might need to perform. It is also possible to carry out user impersonation, detect duplicate data, retrieve data from predefined table views and - as we will see shortly - perform batch operations too. Developers can also use the Web API to detect various metadata properties for tables, columns and relationships, which we’ll touch upon in a second. All this detail might seem like a lot to take in for the exam itself but, considering that this portion of the exam works out to be approximately 5% of what you are ultimately assessed on, don’t worry yourself too much studying the minute detail of each operation type.

Demo: Performing Common Operations with the Microsoft Dataverse API

To get a good flavour of how to work with the Web API to perform core operations and also retrieve table metadata, check out the video below:

Using the Web API for Batch Operations

For those coming from a SQL database background, you will undoubtedly be familiar with the concept of atomicity when executing a set of SQL statements. For example, let’s assume we have 3 Transact-SQL (T-SQL) statements - 2 INSERT statements and 1 UPDATE. All three of these statements must complete successfully when run; otherwise, we could leave our database in an undesirable state. Now, we could add some compensating logic that deletes the 2 INSERTed records if our UPDATE statement fails, but this is like using a sledgehammer to crack a walnut. Instead, we can wrap all of the queries into a single transaction. If one statement fails, then all potential changes are rolled back, and we return to the same state the database was in before our SQL statements even touched it. This is, in effect, an atomic transaction: a set of operations that must all succeed in unison or fail together.

Because Microsoft Dataverse uses SQL Server as its underlying database, transactions apply in many situations. We saw this already when looking at the event pipeline and, in particular, how records return to their previous state if a plug-in hits an error for whatever reason. To help us control how the platform performs multiple actions within a transaction, the Web API exposes the capability to run batch operations, built around what is often referred to as a changeset request. These are purely atomic in their nature and execution, allowing developers to execute up to 1000 related requests as part of the same transaction. And, if anything goes wrong as part of this, the database reverts to its previous state. A batch request must conform to a specific format - a multipart text payload - and be sent as a POST to the following URL:

https://<Org Name>.api.<Region>.dynamics.com/api/data/v9.1/$batch

Pretty much all of the operations discussed previously are supported as part of a batch request, and they also support some useful features such as parameters. For situations where you need to process a complex set of changes from an external system, they are a useful feature to consider using.
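To make that format a little more concrete, below is a simplified sketch of what the body of a batch request containing a single changeset could look like - two creates, with the second row linking back to the first via its Content-ID reference ($1). The boundary strings and column values are arbitrary, and the whole payload would be POSTed to the $batch URL above with a Content-Type header of multipart/mixed;boundary=batch_AAA123:

--batch_AAA123
Content-Type: multipart/mixed;boundary=changeset_BBB456

--changeset_BBB456
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 1

POST https://<Org Name>.api.<Region>.dynamics.com/api/data/v9.1/accounts HTTP/1.1
Content-Type: application/json

{ "name": "A. Datum Corporation" }
--changeset_BBB456
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 2

POST https://<Org Name>.api.<Region>.dynamics.com/api/data/v9.1/contacts HTTP/1.1
Content-Type: application/json

{ "lastname": "Cannon", "parentcustomerid_account@odata.bind": "$1" }
--changeset_BBB456--

--batch_AAA123--

If either create fails, neither row is committed - the whole changeset rolls back as a single transaction.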

Demo: Performing Batch Operations using the Web API

Given the complexity around constructing batch operations, the last video below goes through this subject in detail, explaining how to build a simple changeset request and execute this against the Web API:

General Performance Tips

When working with any API, it is always essential to model your requests to perform optimally. The Microsoft Dataverse Web API is no different in this regard, so keep in mind the following as you start working with it:

  • Take advantage of the various OData query parameters to limit the number of rows returned when performing retrievals. Where possible, use a $filter parameter, and restrict the volume of data returned via the $top option or by requesting smaller pages (the Prefer: odata.maxpagesize header) for large result sets. You should also use the $select query parameter to bring back only the columns you need.
  • When attempting to expand related table rows via a $expand request, keep in mind the performance impact this has and the hard limit of 10 imposed by the platform.
  • API requests are subject to the throttling and request limits the platform imposes more generally. Be sure to read and understand the relevant Docs articles, which explain both the request limits/allocations and the service protection limits that Microsoft enforces. Request allocations will typically be dictated by the number of licenses you have purchased on the Microsoft 365 portal (a simple retry sketch follows this list).
  • There are several different Header values that you should always include as part of your requests, which I’ve outlined below. Doing this helps to prevent any ambiguity, should new versions of the endpoint be released in future:
    • Accept: application/json
    • OData-Version: 4.0
    • OData-MaxVersion: 4.0
  • As we’ve seen previously in the blog, when running client-side JavaScript from model-driven app forms, a specific method is exposed to allow us to execute Web API requests. You should always use this method in that context, and never hand-craft the types of requests described in this article from client-side JavaScript/TypeScript form functions.
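On the throttling point above, service protection limits surface as HTTP 429 (Too Many Requests) responses that include a Retry-After header indicating how long to back off for. A naive, hedged sketch of a retry policy around the earlier HttpClient calls could look like this - the request is passed in as a factory so it can be re-issued on each attempt:

// Retries a Web API call when the platform signals throttling (HTTP 429), honouring Retry-After.
static async Task<HttpResponseMessage> SendWithRetryAsync(
    Func<Task<HttpResponseMessage>> sendRequest, int maxRetries = 3)
{
    for (var attempt = 0; ; attempt++)
    {
        var response = await sendRequest();

        if ((int)response.StatusCode != 429 || attempt >= maxRetries)
        {
            return response;
        }

        // Wait for as long as the platform asks, falling back to a short default delay.
        var delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(5);
        await Task.Delay(delay);
    }
}

// Usage: var response = await SendWithRetryAsync(() => client.GetAsync("accounts?$top=25"));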

Using Azure Functions with Microsoft Dataverse

As we saw when we looked at plug-ins earlier in the series, they give us a helpful capability for executing code targeting the Dataverse platform and, potentially, performing batch operations. However, as noted in this post, the main limitation with plug-ins has always been the fact that they execute in sandbox mode, with a two-minute maximum execution time. This restriction makes plug-ins impractical for long-running operations that need to execute against the platform. As such, we must consider using other tools instead. In these situations, we can leverage some of the capability available to us in Microsoft Azure - specifically, Azure Functions - to help us out.

Azure Functions are designed to help us run small pieces of code (“Functions”) within a serverless environment on Azure. They have numerous benefits, including consumption-based pricing, automatic scaling and a wide range of trigger types, such as schedules, HTTP requests or events raised by other Azure services.

For long-running batch operations, they are an ideal candidate to consider. Keep in mind that, to work with the various Microsoft Dataverse SDK libraries from C#, we need to run our Function using the .NET Framework stack and the version 1 runtime. Otherwise, you may find it challenging to write code that you would be familiar with from a plug-in standpoint 😉
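By way of an indicative sketch only, the example below shows the rough shape of a C# timer-triggered Function on the version 1 (.NET Framework) runtime that connects to Dataverse using the Microsoft.Xrm.Tooling.Connector CrmServiceClient. The application setting name, schedule and query are all assumptions for illustration; the NCRONTAB expression fires the Function at the top of every hour:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Tooling.Connector;

public static class NightlyAccountSync
{
    [FunctionName("NightlyAccountSync")]
    public static void Run([TimerTrigger("0 0 * * * *")] TimerInfo timer, TraceWriter log)
    {
        // Hypothetical application setting holding a Dataverse connection string, e.g.
        // AuthType=ClientSecret;Url=https://<Org Name>.crm.dynamics.com;ClientId=...;ClientSecret=...
        var connectionString = Environment.GetEnvironmentVariable("DataverseConnection");

        using (var service = new CrmServiceClient(connectionString))
        {
            // Example long-running work: retrieve every account name for onward processing.
            var query = new QueryExpression("account") { ColumnSet = new ColumnSet("name") };
            var results = service.RetrieveMultiple(query);

            log.Info($"Processing {results.Entities.Count} account rows.");
        }
    }
}

An event-driven alternative would simply swap the TimerTrigger binding for something like a QueueTrigger or HTTP trigger, with the Dataverse connection logic staying the same.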

And with that, we come to the end of the Extend the platform area of the PL-400 exam. Next time, we will start diving into some of the integration options available within the platform as we review how to publish and consume events from Microsoft Dataverse.
