[{"content":"Oh boy. Three years without a single blog post, and this is what gets me back into the saddle? I will say from the outset to brace yourself - this post is a complete technical, Microsoft no-go zone. And I\u0026rsquo;ll be covering some pretty triggering things on mental health. But I need to say this. And, I hope what I have to say is relevant to anyone working in the industry today - regardless of who you are, what you do, or where you work. I\u0026rsquo;m grateful to Nigel Hughes for giving me the push to get this down in writing, and to other community members, like Chris Huntingford and Nick Doelman, for having the guts to share their own stories, warts and all, in public.\nLet\u0026rsquo;s start at the beginning\u0026hellip;or, more correctly, the title of this post \u0026ldquo;You look healthy.\u0026rdquo;\nI\u0026rsquo;ve heard this phrase, or variations of it, quite a few times since I got back into the 2025 fall event cycle, seeing my friends and MVP colleagues at fun events such as the Power Platform Community Conference (PPCC), Microsoft Ignite, the European SharePoint Conference (ESPC) and more. It\u0026rsquo;s always meant as a compliment, and I love to keep hearing it. I guess the reason it keeps coming up is that, over the summer, I lost nearly 3 stones (19kg) in weight. I still struggle to believe it. But I\u0026rsquo;m happy about it. And the journey is by no means over.\nAs much as I love people complimenting me on my efforts, the above is a somewhat loaded statement. Because what\u0026rsquo;s the opposite of healthy?\nUnhealthy.\nAnd I was. For around two years, if not more. Ladies and gentlemen of the jury, I present to you Exhibit A and B:\n(You can blame the hair colour, which is in no way unhealthy, on some rather mischievous Germans and their fantastic fundraising efforts at ColorCloud 2025 😛)\nLet\u0026rsquo;s take a step back and review. Christ. Look at the state of it. Not healthy. You don\u0026rsquo;t have to beat around the bush. These pictures are a sign of an individual who has let go of themselves, does not respect themselves, and, despite how they may appear and communicate, is the opposite of happy.\nTaking a look at some more recent photos, I think the difference is night and day:\nLooking at these photos makes me happy after a long time of not being happy. I\u0026rsquo;m proud of what I\u0026rsquo;ve achieved. But I need to recognise and accept the state of mind I was in, learn the appropriate lessons and make sure I don\u0026rsquo;t fall back into the same traps as before.\nRecognising the signs Winston Churchill often spoke of the \u0026ldquo;black dog\u0026rdquo; that would visit him from time to time. Low points. Moments of \u0026ldquo;depression\u0026rdquo;, for lack of a better term. I can in no way equate my own struggles to the monumental task of defeating Nazi Germany, but the experience and its description align with what I felt - more than I would have cared to admit in the past, to others or even myself.\nPerhaps to take a more modern spin on things, at (what I now recognise to be) one of my lowest points last year, I constantly had the following song on repeat:\nMGMT - Little Dark Age (Official Video)\nA great tune. But why did I keep listening to it? The lyrics resonated. Take a moment for yourself to read them here. Lines like:\nI grieve in stereo The stereo sounds strange You know that if it hides, it doesn\u0026rsquo;t go away\nI was having my own \u0026ldquo;little dark age\u0026rdquo;. 
Despite some significant accomplishments from a professional standpoint - becoming a shareholder in my own practice, attaining the Microsoft MVP award for the first time, and having the opportunity to see the globe - I was struggling personally. I was unhappy. I was unhealthy. I was ignoring myself and my own well-being. And I was on a path that wasn\u0026rsquo;t going to end well.\nSo, guess what? Got the same, sorrowful tune on repeat? Got that feeling that something is \u0026ldquo;off\u0026rdquo; with you, but can\u0026rsquo;t quite explain it? Then you\u0026rsquo;re in the same boat as I was. And I implore you - talk, or do something about it.\nI want to break free I was in a bad state - no question about it. And I was perhaps scared of recognising this myself. Two things helped me to see sense and to start carving a way forward:\nWaking up and smelling the coffee: Let\u0026rsquo;s not beat around the bush. I was obese and unhealthy. My eating habits were appalling, the comments, sometimes in jest, sometimes less so, from friends and family couldn\u0026rsquo;t be ignored any longer. I wasn\u0026rsquo;t doing the bare minimum on myself - so how could I expect my outlook on things to be any better? I needed to make a change, invest in myself and respect the person that I am. And that starts by making radical change - in lifestyle, physical experience and more. Simple acts of kindness from a stranger: Never underestimate how a few simple words or a brief shared experience can transform your outlook. I\u0026rsquo;m not going to embarrass the individual in question by naming them here, but after a brief introduction and experience with them at a community event, my outlook immediately started to improve. It made me realise there are many good people out there, and the importance of just being kind to others, regardless of who they are or how long you\u0026rsquo;ve known them for. I will be eternally grateful to the individual in question, who helped me \u0026ldquo;see the light in the tunnel\u0026rdquo; and set me on a better path. If you\u0026rsquo;re still sceptical, then I think Waymond from the movie Everything Everywhere All At Once says it better than I ever could: What\u0026rsquo;s working for me Every day brings its own set of challenges. Things don\u0026rsquo;t quite go as expected. Something small or big can knock you down. But I\u0026rsquo;m now very much a pessimistic optimist. Despite what may happen, I know I can control my own future and that my daily, incremental actions will help me get to where I want to be. Here are some of the things that have helped me along the way; I share them now in the hope that they are useful to you too:\nGet yo\u0026rsquo; Steps In!: You hear about people hitting the gym, doing all of the exercises, yada yada yada - and still they don\u0026rsquo;t lose any weight. Activities like running can also have potential long-term side effects if you\u0026rsquo;re not careful. So I decided to focus on a walking target instead - 10,000 steps a day. That\u0026rsquo;s just over 4 miles and, at a steady pace, should take you around 90 minutes to complete. If you can\u0026rsquo;t find 90 minutes in your daily schedule to do this, then you\u0026rsquo;ll never be able to prioritise your health. So make the time. Usually, I will wake up around 5 AM and squeeze most, if not all, of my steps in before starting to work. Then, depending on whether I\u0026rsquo;m out or working from home, the rest of the steps will fall into place as the day progresses. 
So try to aim for this as a starting point. There will be days that will be better than others, and days when you don\u0026rsquo;t quite make it. But get it into your head each day - I need to walk, and I need to make time for it. Gamification FTW: I\u0026rsquo;m a \u0026rsquo;90s kid, conditioned on years of Super Nintendo, beating the level and earning \u0026ldquo;rewards\u0026rdquo;. Relying on this, I can try to introduce a \u0026ldquo;fun\u0026rdquo; aspect into my daily exercise. Here, I need to thank Marijn Somers for introducing me to the Pikmin Bloom mobile game. I\u0026rsquo;m just about old enough to remember when this first came out on the GameCube, so there was a nice little nostalgia factor kicking in there. The purpose of the game is to grow your Pikmin and spread flowers across the world. How do you do this, I hear you ask? By walking! The more you walk, the more Pikmin you grow, and the more flowers you spread. In conjunction with the previous point, it provides a fun (and somewhat addictive) way to get your steps in, while also growing your own little Pikmin garden. You can even take photos of your walks, share them with your friends, and see where you have been travelling. This helped me to get past some of the \u0026ldquo;drudgery\u0026rdquo; of walking for the sake of walking, especially in the early days of my new routine. Intermittent Fasting: There\u0026rsquo;s a reason I think why the act itself forms an important part of so many world religions. And it also has proven health benefits. On a normal week, I fast from Sunday evening through to Tuesday evening. From there, I then follow my chosen diet plan stringently (see the next point). It\u0026rsquo;s very tough to do if you are not used to it. You will drive yourself crazy with the aches and hunger pangs. But drink plenty of water, have the occasional (black) coffee, and you can get through it. After a few weeks, you will adapt, and fasting will become almost second nature. The premise and concept that we must have three meals a day is a complete falsehood. Our bodies can easily go without food for many days. Do you think during the hunter-gatherer days, we were scoring a big \u0026ldquo;kill\u0026rdquo; three times a day and gorging ourselves silly? Of course not. So, recognise this simple fact of our biological make-up, and try it for yourself. Keto Diet: The basic premise of this diet is to limit the amount of carbohydrates you eat and increase your fat and protein intake. This forces your body to enter a state of \u0026ldquo;ketosis\u0026rdquo;, where it burns fat for energy instead of carbohydrates. As a carnivore, it worked really well for me. I could eat a whole chicken and then a beef steak, and not get set back at all. But if you\u0026rsquo;re a vegetarian or a vegan? It might be more of a challenge. Cutting carbs from your diet is hard, as you realise just how pervasive they are in the food chain. I\u0026rsquo;d invite you to take a closer look and, of course, fully evaluate the pros and cons before you decide to do it yourself; but it worked wonders for me. Switch your Drinks: I enjoy having a tipple. For those who know me, probably not a huge surprise there 😅 But I got into a nasty habit during Covid of drinking beers, lagers and other carbohydrate-heavy drinks. Socially drinking, in my bedroom, in a house that I hardly left, due to the lockdown. No wonder I put on so much weight. So to be faced with the prospect of dieting and not being able to have a drink filled me with dread. 
But it\u0026rsquo;s still possible to enjoy having a drink, in moderation and if you\u0026rsquo;re willing to switch things up. 1 pint of beer typically has around 15-20 grams of Carbohydrates. A slimline Gin \u0026amp; Tonic in comparison? Around 2 grams max. A glass of champagne? About similar. So, with careful, regulated planning, you can still have a bit of fun on a Friday night without throwing your hard work during the week out the window. Looking to the Future I\u0026rsquo;m on a journey. There are still things that I need to work on. I need to make myself more emotionally available, less of a closed book. I need to improve my interactions with others, including my close friends, work colleagues and family. I probably still drink too much (can\u0026rsquo;t blame the half-Irish in me for that one). I\u0026rsquo;ve got about another stone (6-7kg) to lose to get rid of the last excess weight. Most importantly, I need to respect and be happy with myself, to help forge stronger, long-lasting relationships with others, including what may become my next significant life partner, wherever she may be. But I wanted to celebrate my success and give the confidence to others that it is possible to make such a significant change, even if it seems challenging from the outset.\nOK, you\u0026rsquo;ve heard enough about me. The key thing is - we are all on a journey. The biggest lesson from my accomplishments over the year is that fixating on the end goal rarely makes you happy when you finally get there. Focus instead on the small things, what\u0026rsquo;s along the journey, what you can do in small increments today to get you into a good place. Reward yourself at the appropriate moments; don\u0026rsquo;t be a complete and utter puritan.\nAnd most importantly, be more like Waymond. Be kind to yourself and those around you. Make a positive difference in the lives of the people that you meet. It could be the most critical thing you do for them that day, that month, or even in their entire life.\nIf you\u0026rsquo;ve read this far, thank you. I hope what\u0026rsquo;s written here helps you deal with any struggles you may be having and get you onto a better path. Please do reach out to me at any time if you want to chat about anything that has come up for you in this post. I\u0026rsquo;d be more than happy to hear from you.\n","date":"2025-12-19T00:00:00Z","image":"/images/YouLookHealthy-FI.png","permalink":"/you-look-healthy-reflections-on-mental-and-physical-health/","title":"\"You Look Healthy\": Reflections on Mental and Physical Health"},{"content":"With the recent introduction of the new modern advanced find experience, we\u0026rsquo;ve seen a major step forward in modernising a key piece of functionality tied to the \u0026ldquo;classic\u0026rdquo; experience. And, to be fair to Microsoft, this feature has pretty good parity with what we\u0026rsquo;re used to in the old experience. Not only can we use the modern experience to find the data we want, but it also supports the creation of personal views, and, for our developer friends, we can download the underlying FetchXML query. Change can often be a rocky road in today\u0026rsquo;s cloud world, but with this, I think we have an example of how to do it correctly. 😀\nDespite this, I have ONE gripe and slightly annoying thing that came up recently on a project. Whereas the old Advanced Find experience allows us to search for all possible tables in the Dataverse environment, we observed that the new experience did not follow this expectation. 
Indeed, key tables we knew to be present wouldn\u0026rsquo;t appear on the list when we searched for them. Frustrating, but on closer inspection, there\u0026rsquo;s a pretty good explanation - and solution - for this.\nLet\u0026rsquo;s start by looking at an example. Below, I have an expertly built model-driven app that is surfacing out just the Account table:\nNote as part of this that I have:\nAdded the Account table to the Sitemap The Account table is present within the In your app area of the app configuration When we hit the play button and attempt to use the new Advanced Find experience, we observe that only the Account table is visible:\nAs it turns out, the crucial element here is that list of tables that appear within the In your app section mentioned above. For those more familiar with the classic experience, the list of tables we see below also indicates this:\nIndeed, the classic experience becomes crucial for this current problem, as it appears that this is the only route that allows us to add/remove tables to this list:\nWith the Contact table added and after re-publishing the app, we can see that this is now present on the list of options:\nThe only other option could be to add the tables into the new modern app designer via the New Page option, but doing this includes them in the sitemap, which may not be your desired outcome.\nIt perhaps makes sense why the new experience behaves in this fashion, and why we would have to go through this extra step to surface any tables we may be missing. In most situations, it is preferable to include them as part of the sitemap instead. This step has the benefit of indirectly including the table in the In your app list while making it easier for users to find. However, hopefully, this post is useful if all you care about is the modern Advanced Find experience. As a final tip, also ensure you include the appropriate forms/views for the table. 😉\n","date":"2022-12-18T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/adding-missing-tables-modern-advanced-find-model-driven-power-apps/","title":"Adding Missing Tables to Modern Advanced Find Experience (Model-Driven Power Apps)"},{"content":"Welcome to the final post in my series focused on providing revision notes for the PL-600: Microsoft Power Platform Solution Architect exam. In today\u0026rsquo;s post, I wanted to consolidate all of the content from the series into a single, concise post for ease of access. I\u0026rsquo;ll also provide some general advice and tips that I hope are useful for when you sit the exam.\nThis series has aimed to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nMicrosoft has split the PL-600 exam into several different areas based on the specification found here. 
Therefore, I\u0026rsquo;ve linked the relevant blog/video content from the series for each applicable subject below.\nPerform solution envisioning and requirement analyses (35-40%) Initiate solution planning evaluate business requirements identify Microsoft Power Platform solution components identify other components including existing apps, AppSource apps, third-party components, and components from independent software vendors (ISV) identify and estimate migration and integration efforts Identify organization information and metrics identify desired high-level organizational business processes identify business process improvement opportunities assess an organization\u0026rsquo;s risk factors review key success criteria Identify existing solutions and systems evaluate an organization\u0026rsquo;s enterprise architecture identify data sources needed for a solution define use cases and quality standards for existing data identify and document an organization\u0026rsquo;s existing business processes Capture requirements refine high-level requirements identify functional requirements identify non-functional requirements confirm that requirements meet an organization\u0026rsquo;s goals identify and document an organization\u0026rsquo;s desired business processes Perform fit/gap analyses determine the feasibility of meeting specific requirements evaluate Dynamics 365 apps and AppSource options to solve requirements address functional gaps through alternate solutions determine the scope for a solution Blog Posts Exam PL-600 Revision Notes: Initiating Planning for your Power Platform Solution\nExam PL-600 Revision Notes: Evaluating an Organisation, Metrics and Existing Systems\nExam PL-600 Revision Notes: Capturing Requirements and Performing a Fit/Gap Analysis for the Power Platform\nVideos PL-600 Exam Prep: Working with Dynamics 365 Applications and AppSource Architect a solution (40-45%) Lead the design process design the solution topology design customizations for existing apps design and validate user experience prototypes identify opportunities for component reuse communicate system design visually design application lifecycle management (ALM) processes design a data migration strategy design apps by grouping required features based on role or task design a data visualization strategy design an automation strategy that uses Power Automate Design the data model design entities and fields design reference and configuration data design relationships and relationship behaviors determine when to connect to external data versus import data design data models to address complex sets of requirements Design integrations design collaboration integrations design integrations between Microsoft Power Platform solutions and Dynamics 365 apps design integrations with an organization\u0026rsquo;s existing systems design third-party integrations design an authentication strategy design a business continuity strategy identify opportunities to integrate and extend Power Platform solutions by using Microsoft Azure Design the security model design the business unit and team structure design security roles design field security design security models to address complex sets of requirements determine security model management policies and processes identify Azure Active Directory groups and app registrations required to support a solution identify data loss prevention (DLP) policies for a solution determine how external users will access a solution Blog Posts Exam PL-600 Revision Notes: Designing a Power 
Platform Solution\nExam PL-600 Revision Notes: Designing a Data Model for the Power Platform\nExam PL-600 Revision Notes: Designing Integrations for the Power Platform\nExam PL-600 Revision Notes: Designing a Security Model for the Power Platform\nVideos PL-600 Exam Prep: Customizing Existing Dynamics 365 Applications PL-600 Exam Prep: Evaluating Power Automate Capabilities PL-600 Exam Prep: Working with External Data Sources in the Power Platform PL-600 Exam Prep: Building Crow\u0026rsquo;s Foot Notation Diagrams using Microsoft Visio PL-600 Exam Prep: Power Platform Admin Center Overview PL-600 Exam Prep: Configuring Azure Synapse Link for Dataverse PL-600 Exam Prep: Overview of Microsoft Dataverse Security Features PL-600 Exam Prep: Working with Azure Active Directory Groups in the Power Platform PL-600 Exam Prep: Working with Azure Application Registrations in the Power Platform PL-600 Exam Prep: Working with Data Loss Prevention (DLP) Policies Implement the solution (15-20%) Validate the solution design evaluate detail designs and implementation validate security ensure that the solution conforms to API limits resolve automation conflicts resolve integration conflicts Support go-live identify and resolve potential and actual performance issues troubleshoot data migration resolve any identified issues with deployment plans identify factors that impact go-live readiness and remediate issues Blog Posts Exam PL-600 Revision Notes: Implementing a Power Platform Solution\nGeneral Exam Preparation Tips Hands-on preparation is essential if you wish to do well in the exam. You should ideally set up a Power Apps Developer Plan environment that you can use to experiment with the core functionality within the Power Platform. Keep abreast of the latest Microsoft Docs and Learning Path materials that are related to this exam, as the platform is continually changing all of the time. Familiarise yourself with the general format, length and expected types of questions within a Microsoft exam. If you want to get a feel for the level and type of questions to expect on the exam, you can also purchase an official practice test from Measure Up. Although it is possible to take your Microsoft exam at a test centre, chances are you\u0026rsquo;ll sit your exam using the online proctored experience. Take some time to familiarise yourself with the process involved here, and perform a system test well in advance of your exam date - the last thing you need on the day of the exam is to stress out due to a system or access issue. I hope that you\u0026rsquo;ve found this series useful. Good luck when sitting the exam, and let me know how you got on!\n","date":"2022-05-01T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-600-revision-notes-series-roundup/","title":"Exam PL-600 Revision Notes: Series Roundup"},{"content":"Welcome to the seventh post in my series focused on providing revision notes for the PL-600: Microsoft Power Platform Solution Architect exam. Previously, we evaluated the various security features and capabilities on offer within the platform and as part of Microsoft Dataverse. In today\u0026rsquo;s post, we move on to the final area of the exam, titled Implement the solution. This exam area has the smallest weighting (15%-20%), so we can expect a minimal amount of questions emerging. 
Nevertheless, Microsoft expects candidates to have a good understanding of the following topics:\nValidate the solution design\nevaluate detail designs and implementation validate security ensure that the solution conforms to API limits resolve automation conflicts resolve integration conflicts Support go-live identify and resolve potential and actual performance issues troubleshoot data migration resolve any identified issues with deployment plans identify factors that impact go-live readiness and remediate issues After many months of hard work and sleepless nights, the go-live of our solution can be exciting and nerve-racking in equal measure. At this stage in the journey, the solution architect may fall into the trap of thinking that no further work is required. In actual fact, their involvement at this crucial juncture is essential, and they will need to remain on hand to support the team and ensure the deployment is a success for the organisation. With all this in mind, let\u0026rsquo;s dive into the aforementioned areas and evaluate the variables that could make or break your go-live.\nThe aim of this post, and the entire series, is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well. And, given the nature of this exam, it\u0026rsquo;s expected that you already have the necessary skills as a Power Platform Developer or Functional Consultant, with the certification to match.\nValidating the Solution Against Requirements Once we have a workable version of our solution, we\u0026rsquo;ll want to obtain the necessary validation to confirm that what we\u0026rsquo;ve built fits the requirements of the business.\nAt this juncture, it can be beneficial to go back to the original business requirements we put together earlier in the project and review the various acceptance criteria set out within them. Ideally, we should have been performing ongoing testing of our solution, in the form of unit testing or similar, to develop this confidence early on; notwithstanding this, a formal round of testing by an objective audience is still advisable. Assuming that we have a solid set of requirements that we\u0026rsquo;ve been working towards, validation at this stage should be straightforward. We should also have a set of measurable outputs that we can use to determine a pass or fail for a specific element of our solution.\nOnce we, as a team, are happy that the solution is fit for purpose, it will generally be a good idea to conduct a formal round of user acceptance testing (UAT), with a candidate group of actual users of the end solution. This exercise may occur late in our development cycle, but ideally, the earlier we can get our solution into the hands of these key stakeholders, the better. It will be far easier to address issues early than a week before our intended go-live. 😉\nAfter all validation has been performed and everyone is happy, the solution architect can then look to get the project team together and confirm this formally. This may then provide an excellent opportunity to discuss how best to get the solution deployed into production and cover any dependencies required as part of this, all of which are essential elements of our deployment plan. 
But more on this later on.\nValidating the Security Model This can often be the most challenging and under-appreciated aspect of our testing. Part of the reason for this is that people tend to forget it entirely. For this reason, the solution architect must get a good rein over all this. To help us as part of this, here are a few things I would recommend:\nLevel up for Dynamics 365/Power Apps Extension: This must-have extension, which works with both Google Chrome and Edge Chromium, has an incredibly valuable feature we can use in the context of our security model - impersonation. This, therefore, allows System Administrators (or those with the correct privileges assigned) to use a model-driven application as another user, with all relevant security permissions assigned. Functionality like this can be invaluable in performing early testing to ensure we get the behaviour we expect. Test User Accounts: To test the different security roles, column (field) security profiles and other components most effectively, I\u0026rsquo;d recommend setting up a dedicated account that testers can use to emulate the various personas we\u0026rsquo;ve established for our solution. Ideally, this should be a real user account with a name like John Doe or Jane Doe. We can then modify the permissions assigned to this account as we work through the different categories of users for which we are testing scenarios. User Acceptance Testing: This was highlighted earlier but again provides a valuable opportunity to ensure that things work as expected with our actual set of users who plan to leverage our solution. API Limits Overview For any Power Platform solution, particularly one that leverages Microsoft Dataverse, the solution architect needs to be acutely aware of not only the entitlement limits enforced at the API level, but also the service protection limits as well.\nLet\u0026rsquo;s focus first of all on entitlement limits. These dictate the number of individual requests users can make within 24 hours. Microsoft defines this somewhat broadly. From a Dataverse standpoint, Microsoft class a request as any CRUD action targeting the platform via the SOAP / REST API and any internal action triggered by a workflow, plug-in, or other types of automation. The number of requests allocated to a user depends on the type of license assigned to them. A general rule of thumb is that the more you are paying, the more you get. The good news is that if a user has multiple licenses assigned, then all requests are summed up accordingly, instead of just defaulting to the highest amount across all licenses. If a user exceeds the number of requests aligned to them, Microsoft may start throttling requests and users will see a major degradation in performance. When it comes to any non-interactive user account type, such as an application user, then different rules apply and Microsoft will instead deduct requests from the pooled amount at the tenant level. Again, the precise number of requests available is dictated by the types of licenses you have and the number.\nIn addition to these limits, integrations have to factor in potential errors should we fire too many requests at once. Microsoft outlines the threshold for this trigger, and should we find ourselves in the situation, we will receive 429 error codes. These responses always return details of when it\u0026rsquo;s safe for the caller to re-try the request, so it becomes possible for us to modify our applications to leverage this information. 
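To make that concrete, here is a rough sketch of the kind of retry logic Microsoft describes. This is illustrative only - it uses the Python requests library, a placeholder environment URL, and assumes a valid bearer token has already been acquired for the environment:
import time
import requests

def get_with_retry(url, token, max_attempts=5):
    # Call the Dataverse Web API and honour the Retry-After header whenever the
    # service protection limits respond with HTTP 429.
    headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}
    for attempt in range(max_attempts):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # The 429 response tells us how long to wait before it is safe to retry.
        wait_seconds = float(response.headers.get("Retry-After", "5"))
        time.sleep(wait_seconds)
    raise RuntimeError("Request kept hitting service protection limits.")

# Example usage against a placeholder environment:
# accounts = get_with_retry("https://myorg.crm.dynamics.com/api/data/v9.2/accounts?$top=10", token)
The exact back-off strategy will vary by workload, but the key point stands - read the Retry-After value rather than hammering the endpoint in a tight loop.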
Should we perform a heavy-duty integration involving the platform, I recommend reading through some of the strategies Microsoft advises us to adopt and alter our application accordingly.\nResolving Automation / Integration Conflicts Automations and integrations in this context can take the form of a simplistic cloud flow that communicates with the Microsoft 365 Graph API, right through to a complex integration that consumes several Azure or externally based systems. The solution architect should have an excellent grasp of the landscape here, varying from project to project.\nIdeally, most of our potential conflicts should have been resolved as part of a dedicated integration testing round. Therefore, the solution architect needs to account for this and other considerations, such as that these services may require time and effort to prepare for testing. In addition, the teams who maintain these systems may be super-duper whizzy experts with the systems they look after but clueless when it comes to the Power Platform itself. Therefore, the solution architect should take the lead in coaching and sharing knowledge with these teams as needed.\nOnce an issue has been identified, it may be straightforward for us to address any potential issues on our side, but keep in mind this may not necessarily be the case for the external systems we are consuming. At worst, it could even impact the expected go-live of our solution. At this stage, it may be necessary for the solution architect to step in and work out some form of compromise. Perhaps the issue we\u0026rsquo;ve discovered can be addressed after the go-live, or an acceptable workaround can be implemented. Early and frank communication to the wider project team is essential, even if the news we are bearing is not always positive.\nTesting, Analysing and Resolving Performance Issues There\u0026rsquo;s nothing worse than having an IT system that is sloooooooooooooooooooooooow. This can be one of the significant elements that can hurt the adoption of our solution and introduce unnecessary frustration into our overall system. The solution architect should be consistently on guard regarding this topic and adopt a hawkish posture when it comes to any of the following topics in particular:\nApp Design: How we\u0026rsquo;ve built our model-driven and canvas apps can significantly impact how they run. If, for example, we\u0026rsquo;ve created a model-driven application that loads every single view and form for a table, don\u0026rsquo;t be surprised if things start taking a while to load. Similarly, if we have views containing too many attributes or other components, this can also negatively impact things. With this in mind, we should be designing with simplicity and fundamental clarity in mind. If a particular element isn\u0026rsquo;t adding any value, we should remove it altogether. Security Model: This can have an indirect effect on performance. Suppose we have granted a set of users global Read access to a table object. Whenever a user navigates to specific views in the application, time and resources will be spent to retrieve all of these rows from the backend table. Our security model should continuously be tweaked to ensure we introduce performance benefits where applicable. It becomes somewhat trivial to introduce these gains through clever use of some of the features we\u0026rsquo;ve spoken out about previously in the series, such as business units and security roles. 
Client vs Server Side Issues: With advances in web development over the years, many of the modern features of the platform typically expect a modernised browser and device to work effectively. So if we do find ourselves creaking along with an ancient Windows machine with legacy browsers, don\u0026rsquo;t be surprised if users start to experience issues using the Power Platform. Likewise, if our environments are hosted in a geographic region far from where a user is connecting, it\u0026rsquo;s natural to expect latency and a performance dip. The solution architect should grasp where most users are based and then take steps to align towards the closest environment location for our solution. In addition, they should read and ensure all users meet the minimum requirements for services such as Power Apps. Throughout all of this, the Power Platform Admin Center can act as an excellent resource to understand and gauge problems. The center includes a range of Admin Analytics reports that we can use to understand metrics such as the amount of storage being consumed, the failure rate for specific APIs and the locations where users are accessing the resources of an environment from. These metrics could give us clues as to why we may be seeing some potential performance degradation, and the solution architect can then use them to initiate further investigations as needed. We can also use Monitor for model-driven apps to generate further data and clues as to what could be causing bottlenecks.\nDeployment Plans and Mitigating Go-Live Impacts Once all testing has been completed, and any remedial work has been dealt with appropriately, the solution architect can then start sharing details of the deployment plan. In an ideal world, this document should already exist in draft form. The primary task at this stage is to dust this off, update where appropriate and ensure that all key stakeholders and project team members have a copy. Ideally, your deployment plan should include:\nA breakdown of all critical tasks. An outline of any dependencies or requirements. An owner for each task. A due date or deadline. Any additional or useful commentary. Using some form of visual aid, such as a Gantt chart or similar, could be best to convey the steps involved, alongside any timelines.\nAlthough our intentions may be good, and we\u0026rsquo;ve made commitments to the business that we will be going live on X date, there can be a multitude of reasons which could impact our go-live negatively and force us to re-evaluate our plan. This can include things such as:\nA dependent system not being ready as part of an integration or automation. A key business event, such as another system launch or BAU operations, such as the end of month invoice processing. A realisation that a critical requirement or feature has not been accounted for in the solution. Unavailability of resources due to planned / unplanned absences. By having things documented clearly in advance, the solution architect can quickly adapt to these situations and present an updated plan back to all key stakeholders as and when required. Like what we touched upon previously, early and frank communication in all these matters is generally most appropriate.\nAnother important consideration, which could impact our go-live, is if we plan our release on our around the same time as a release wave. The solution architect should be fully aware that there are two major releases into the Power Platform every year, in the spring and autumn. 
Microsoft will always publish the dates when these upgrades will take place for each region, so there is no excuse for evaluating this in advance and, as much as possible, ensuring that we don\u0026rsquo;t clash with any of these major releases. For an example of the type of information that\u0026rsquo;s published, you can review the documentation for the 2022 release wave 1 plan and deployment schedule for each region.\nAnd with that, we are at the end of this series! Next week, we\u0026rsquo;ll do a wrap-up post that brings the various blog posts and video content into a single location. Until next time, happy revising, and I hope you\u0026rsquo;ve found the series useful!\n","date":"2022-04-24T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-600-revision-notes-implementing-a-power-platform-solution/","title":"Exam PL-600 Revision Notes: Implementing a Power Platform Solution"},{"content":"Welcome to the sixth post in my series focused on providing revision notes for the PL-600: Microsoft Power Platform Solution Architect exam. After a disgracefully long pleasantly short hiatus from when we last discussed designing integrations targeting the Power Platform, I\u0026rsquo;m pleased to be resuming this series by looking at the final topic within the Architect a solution area of the exam, titled Design the security model. For this area, candidates must demonstrate knowledge relating to the following:\nDesign the security model\ndesign the business unit and team structure design security roles design field security design security models to address complex sets of requirements determine security model management policies and processes identify Azure Active Directory groups and app registrations required to support a solution identify data loss prevention (DLP) policies for a solution determine how external users will access a solution Security remains a paramount concern for any IT solution that an organisation adopts. Naturally, the solution architect of any Power Platform solution will need to be able to recommend and advise on the best solution to adopt. For the most robust security for any application we build, the involvement of Microsoft Dataverse over other data sources, such as SharePoint Online, may also become necessary. Let\u0026rsquo;s unpack all of this and dive deeper into each of the aforementioned features.\nThe aim of this post, and the entire series, is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well. And, given the nature of this exam, it\u0026rsquo;s expected that you already have the necessary skills as a Power Platform Developer or Functional Consultant, with the certification to match.\nDataverse Security Recap You should have already passed either the PL-200 or PL-400 exam at this stage of your Microsoft certification journey. As such, you should be able to readily recall the various details about Microsoft Dataverse\u0026rsquo;s security capabilities. Nevertheless, let\u0026rsquo;s spend a few moments to recap the core elements:\nBusiness Units: As our highest-level mechanism of defining security boundaries within Dataverse, we can use business units to physically separate users, teams and rows from others and as a means of modelling out complex hierarchies within our organisation. 
Teams: Most useful when working in the context of multiple users, don\u0026rsquo;t forget that we have multiple different types of teams available to us. Owner Teams are the most common type that we use, that support security role assignment and row ownership, with Access Teams being most valuable in the context of Access Team Templates and complex row sharing requirements in the platform. Security Roles: Using this feature, we can not only control the precise permissions a user has access to, but also the most appropriate boundary that this should apply to - either globally across the entire Dataverse environment, right through to business unit level and lower. Column (Field) Security Profiles: Best for those situations where we are working with highly sensitive categories of data, our profiles provide a means of restricting Read, Update or Create privilege on a specific column within your Dataverse environment. Further discussion of the technical capabilities of each of these is beyond the scope of this exam, so I\u0026rsquo;d recommend going back a few steps to the earlier exams if you have any significant knowledge gaps.\nHow To Design the Best Dataverse Security Model With such a wide array of features available, it can sometimes be tricky to figure out what (and what won\u0026rsquo;t) be necessary for us to architect out within Dataverse and get to that ultimate sweet spot - protecting the organisation and not frustrating users in the process.\nThe basic rule of thumb I always adhere to here, and I would advise any solution architect, is to KISS - Keep It Simple, Stupid. This adage is particularly true when it comes to Business Units. The temptation may be to model out your organisational structure 1:1 via Business Units, thereby resulting in many tens or hundreds of new Business Units being created. This is not only going to make it incredibly difficult to maintain your solution in the long term but also cause you potential performance problems too. Far better instead to think carefully about the minimum number of required security boundaries within your organisation and only implement these as Business Units.\nAnother thing I would advise is to use what\u0026rsquo;s already there, particularly when it comes to Security Roles. You\u0026rsquo;ll notice that every environment contains the same set of Roles, such as Basic User. These roles contain many of the base privileges users need to open a model-driven app and interact with most (if not all) of the different tables within the Common Data Model. Attempting to create new security roles from blank and re-populating them with privileges can become tedious (I speak from experience here 😉), so far better instead to take a copy of any existing Role and remove the privileges you don\u0026rsquo;t need.\nFinally, I\u0026rsquo;d always recommend \u0026ldquo;mixing-and-matching\u0026rdquo; between the various features discussed. You\u0026rsquo;ll rarely be able to achieve your overall requirements by using just, for example, Security Roles. Like the Power Platform itself, each feature is powerful in isolation but more effective when we combine them together. 
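As a small illustration of combining these features programmatically - say, assigning one of the out-of-the-box security roles to an owner team as part of a provisioning script - the Dataverse Web API treats this as a simple association between the team and the role. The sketch below is purely illustrative: the environment URL and GUIDs are placeholders, it assumes the standard teamroles_association relationship name, and it assumes a bearer token has already been acquired:
import requests

env_url = "https://myorg.crm.dynamics.com"        # placeholder environment
team_id = "00000000-0000-0000-0000-000000000001"  # placeholder team GUID
role_id = "00000000-0000-0000-0000-000000000002"  # placeholder security role GUID

def assign_role_to_team(token):
    # Associate an existing security role with an owner team by POSTing a
    # reference to the role onto the roles relationship of the team.
    url = f"{env_url}/api/data/v9.2/teams({team_id})/teamroles_association/$ref"
    payload = {"@odata.id": f"{env_url}/api/data/v9.2/roles({role_id})"}
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    response = requests.post(url, json=payload, headers=headers)
    response.raise_for_status()  # a 204 No Content response indicates success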
Often, the most effective security model is the one that judiciously and prudently uses most, if not all, of the capabilities within Dataverse to achieve your overall requirements.\nDemo: Overview of Microsoft Dataverse Security Features In this video, we\u0026rsquo;ll dive into Microsoft Dataverse and perform a quick recap to see how we can work with business units, teams, security roles, and column (field) security profiles:\nAzure Active Directory Groups in the Power Platform For anyone with at least some basic experience of Active Directory, you\u0026rsquo;ll be familiar with the concept of Security Groups. These are typically used in the on-premise world as a means of applying group policies, sharing access and applying other uniform sets of privileges to teams, departments or the organisation as a whole. In the cloud world, from an Azure Active Directory (AAD) perspective, the concept is still very much alive; partly as a backward compatibility feature, but also as a means of achieving the same kinds of things we would do in the on-premise world for our users. We can primarily use AAD groups in two ways within the Power Platform:\nTo control access to specific Dataverse environment(s). New or existing Dataverse environments can be linked to any group set up on the tenant, and only members of the group will be provisioned as Users. All other Users will be ignored, or deactivated if they already exist. Solution architects should always advocate and utilise this functionality, as it will typically avoid situations where every user on the tenant gets created in an environment. To link as part of a Team row we create in Dataverse. By doing this, membership of the team automatically updates based on the membership list defined in AAD, thereby reducing the administrative effort involved in maintaining this within Dataverse. This functionality also supports Microsoft 365 (Office 365) groups. It\u0026rsquo;s rare for an organisation not to be using AAD groups in some capacity already, so it behoves a solution architect to investigate, identify, and promote the usage of anything existing in the tenant that could benefit the project.\nDemo: Working with Azure Active Directory Groups in the Power Platform In this video, we\u0026rsquo;ll see how we can create security groups within the Azure Portal, and how we can then use these within the Power Platform:\nAzure App Registrations: Who, What, Why Application Registrations come into play when we plan to do any form of integration or pro-code/fusion extensibility targeting the Dataverse platform. Given that various legacy authentication mechanisms, such as Basic, are no longer supported, we must instead turn to the capabilities on offer as part of the OAuth 2.0 protocol within Azure Active Directory. As well as offering multiple potential authentication patterns, it\u0026rsquo;s also incredibly easy to use, regardless of the tech stack we\u0026rsquo;re using.\nOne way in which we can use them, from a Power Platform and Dataverse standpoint, is to authenticate into the platform\u0026rsquo;s Web API as our user account. As noted in the documentation, there are some specific setup steps required (such as enabling Implicit Flow), but the general pattern results in an access token being generated, which we then use to gain entry into our Dataverse environment. 
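Whichever flow we end up using, the overall shape is the same: acquire a token for the environment, then present it as a bearer token on every Web API call. Below is a minimal, illustrative sketch using MSAL for Python and the client credentials (server-to-server) variant that we will come back to in a moment - the tenant ID, client ID, secret and environment URL are all placeholder values:
import msal
import requests

tenant_id = "00000000-0000-0000-0000-000000000000"  # placeholder tenant
client_id = "11111111-1111-1111-1111-111111111111"  # placeholder app registration
client_secret = "my-app-secret"                     # placeholder secret
env_url = "https://myorg.crm.dynamics.com"          # placeholder environment

# Acquire a token for the Dataverse environment using the app registration.
app = msal.ConfidentialClientApplication(
    client_id,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
    client_credential=client_secret,
)
result = app.acquire_token_for_client(scopes=[f"{env_url}/.default"])

# Present the token as a bearer token on a simple WhoAmI call.
headers = {
    "Authorization": f"Bearer {result['access_token']}",
    "Accept": "application/json",
}
response = requests.get(f"{env_url}/api/data/v9.2/WhoAmI", headers=headers)
print(response.json())  # returns the UserId, BusinessUnitId and OrganizationId of the caller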
In the implicit pattern, what we are doing is impersonating our existing user account, utilising the Application Registration and its permissions to achieve this.\nTo go a step further, Application Registrations can also be used to create an Application User in a Dataverse environment. This approach is most suitable when we plan to have a server-to-server integration and don\u0026rsquo;t want to rely on a specific individual\u0026rsquo;s user account. The user account that results from this process behaves and acts the same way as any other user. This means we can assign it security roles and rows (records), and place it within our chosen business unit. I would typically recommend using Application Users for scenarios such as connecting to Dataverse from Azure DevOps or running plug-ins/cloud flows with an account using scoped privileges.\nDemo: Working with Azure Application Registrations in the Power Platform In this next video, we\u0026rsquo;ll walk through the process of creating an Application Registration in the Azure Portal and how we can then use this to create an Application User within a Dataverse environment:\nData Loss Prevention (DLP) Policies Overview The primary benefit of a solution such as the Power Platform is the flexibility it can give you when easily connecting to different applications and services. However, this can also be the platform\u0026rsquo;s Achilles heel if we don\u0026rsquo;t manage things correctly. Suppose an unscrupulous user in our organisation, who is leaving the next day permanently, decides to create a cloud flow that exports all of our prospect data from our Dynamics 365 Sales system and transfers this across into a personal Dropbox. Not only has this erstwhile colleague got the inside track on what our selling team are doing, but we\u0026rsquo;ve also - invariably - caused a data breach. Not a great situation to be in or an optimal return on investment, I\u0026rsquo;m sure you\u0026rsquo;ll agree.\nTo help mitigate these common scenarios and provide administrators with control over their environments, we can configure DLP policies to dictate precisely which connectors we want to use. Now, we shouldn\u0026rsquo;t be aiming to use this capability to turn our environments into the Power Platform\u0026rsquo;s equivalent of Fort Knox. Instead, we should consider and categorise the connectors we want to allow, based on the users involved, their department and what they are trying to achieve with the Power Platform. Once our DLP policies are configured, they take effect immediately, and our users will start seeing the appropriate notifications appear if they breach a policy. DLP policies work primarily in the context of our cloud \u0026amp; desktop flows and our canvas apps. Solution architects should be aware of this and be familiar with the experience of setting them up within the Power Platform Admin Center. Typically, we\u0026rsquo;ll also need to instruct and guide the organisation towards using them appropriately.\nDemo: Working with Data Loss Prevention (DLP) Policies In this final video, we\u0026rsquo;ll go through the steps involved in creating policies to control what connectors can be used across your different Power Platform environments:\nExternal Users and the Power Platform The majority of apps, automations, and reporting tools we build out within the Power Platform will be for internal consumption within the organisation only. 
This will be particularly true if we work with sensitive, personally identifiable (PI) or other commercially sensitive categories of data. Having it so that all and sundry can access this information freely could cause us some difficulty. 😅 Notwithstanding this, there are occasional situations where giving external users access is necessary. The main scenarios where we see this come into play include:\nPower Apps Portals: This is perhaps the most obvious and, arguably, the most secure mechanism at our disposal, provided of course, our portal has implemented user authentication. Any possible information type existing within our Dataverse environment can be exposed out onto the portal. We have fine-tuned control over precisely what information we can show. We can even go a step further and have different roles assigned to different categories of users, using a set of security permissions that align strongly to the various security model principles of Dataverse we\u0026rsquo;ve reviewed already. Power BI: In most scenarios, we\u0026rsquo;ll be consuming Power BI internally, either personally or via Workspaces (provided our users are licensed somehow). However, there are opportunities to embed Power BI report content in a website, publish our report to any web location or even to go a step further and leverage Power BI Embedded analytics instead. Although these options may be tempting, the solution architect needs to review the benefits and disadvantages and ensure that the organisation doesn\u0026rsquo;t accidentally share data that it shouldn\u0026rsquo;t. Connectors: As we should already have an excellent awareness of, the sheer plethora of connectors available to us makes it possible to get information exposed to external users. For example, we could use the Twitter connector to post to social media about the latest updates relating to our organisation for everyone to see and interact with. The solution architect should carefully review any potential licensing implications for other scenarios. If not planned carefully, we could find ourselves falling foul of multiplexing, which the latest version of the licensing guide defines as follows:\nMultiplexing refers to the use of hardware or software that a customer uses to pool connections, reroute information, or reduce the number of users that directly access or use the Power Apps, Power Automate and Power Virtual Agents service. Multiplexing does NOT reduce the number of subscription licenses of any type required to access the Power Apps, Power Automate and Power Virtual Agents apps. Any user or device that accesses Power Apps, Power Automate and Power Virtual Agents apps, directly or indirectly must be properly licensed.\nWhile the patterns above could open a technical route to providing external users access to our Power Platform solution, we would nonetheless find ourselves in breach of our licensing agreement. We could then expect an unwelcome knock on the door in the future by Microsoft if we are discovered. 😉 An old saying stands true here; just because we can do something doesn\u0026rsquo;t mean we should.\nWe\u0026rsquo;re on the home stretch now as we get towards the end of this series. 
In the final post next time around, we\u0026rsquo;ll tackle the final exam area, all concerning how we implement a Power Platform solution\n","date":"2022-04-17T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-600-revision-notes-designing-a-security-model-for-the-power-platform/","title":"Exam PL-600 Revision Notes: Designing a Security Model for the Power Platform"},{"content":"When working with the Microsoft Dataverse platform, you sometimes get presented with error messages in the strangest circumstances. Take, for example, a recent scenario where we were getting the following error message whenever we attempted to save a Case row within Dynamics 365 Customer Service:\nThe user with SystemUserId=1e18d5cd-72a6-462a-98ab-eb76b1f17b79 in OrganizationContext=e1c03efe-c3a0-4a35-9ed7-05f49ab97918 is not licensed, and its SystemUserAccessMode=0 is not either of (NonInteractive=4, SetupUser=1) The error would fire against any operation we targeted against the table - such as saving a row or executing a plug-in that performed some action against the table. A close reading of this error would suggest that something is going on as part of the transaction invoking the context of a user that is no longer active within the environment. The joys of inheriting a system from another project. 😉 We checked a few things in the first instance on any plug-ins registered on the Case table:\nThe value of the Run in User\u0026rsquo;s Context setting on all appropriate plug-in steps to ensure that the original developer hadn\u0026rsquo;t put it to the disabled user. Verified that the code wasn\u0026rsquo;t impersonating the user in question. Finally, as a punt, we noticed that the table had several real-time workflows executing against it. We decided to deactivate all of these workflows and then re-activate them, working on the assumption that something had been cached relating to the disabled user. This was despite the fact that another active service account owned the workflows. Doing this resolved the issue, and much to our surprise, the error no longer appeared.\nI must say, this is the strangest Dataverse platform error I have faced for a while, and I\u0026rsquo;m still not 100% sure why the steps we followed resolved the issue. Answers on a postcard below if you can share any insights, but I hope this proves helpful if you encounter the same problem yourself in the future.\n","date":"2022-04-03T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/resolving-user-not-licensed-errors-when-saving-a-row-microsoft-dataverse/","title":"Resolving User Not Licensed Errors When Saving A Row (Microsoft Dataverse)"},{"content":"A couple of years ago, I blogged on how to use YAML build pipelines to extract Solutions from the Microsoft Dataverse platform. If you aren\u0026rsquo;t already fully embracing (perhaps with anger) YAML over \u0026ldquo;classic\u0026rdquo; pipelines, this would be something that I\u0026rsquo;d urge you to start thinking about. YAML pipelines are where Microsoft will be making all future investments from a feature standpoint. Much like our \u0026ldquo;classic\u0026rdquo; experience within the Power Platform, we can expect the same end destination for both experiences from an availability standpoint. 😉 Fortunately, Microsoft provide us with a mechanism to support our migration journey into them. 
And, it probably goes without saying at this juncture, but you should not be using \u0026ldquo;classic\u0026rdquo; pipelines for any new work targeting your DevOps environment.\nWith all that out of the way, let\u0026rsquo;s turn to the topic of today\u0026rsquo;s post, namely, revisiting the outline approach suggested in my previous blog post. The post proposes using a YAML pipeline structure similar to the one below:\nname: $(TeamProject)_$(BuildDefinitionName)_$(SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r) trigger: none schedules: - cron: \u0026#34;0 20 * * *\u0026#34; displayName: Daily Build branches: include: - MyArea/MyBranch always: true jobs: - job: ExtractMySolution pool: vmImage: \u0026#39;windows-latest\u0026#39; steps: - task: PowerPlatformToolInstaller@0 inputs: DefaultVersion: true - task: PowerPlatformSetSolutionVersion@0 inputs: authenticationType: \u0026#39;PowerPlatformEnvironment\u0026#39; PowerPlatformEnvironment: \u0026#39;My Environment\u0026#39; SolutionName: \u0026#39;MySolution\u0026#39; SolutionVersionNumber: \u0026#39;1.0.0.$(Build.BuildID)\u0026#39; - task: PowerPlatformExportSolution@0 inputs: authenticationType: \u0026#39;PowerPlatformEnvironment\u0026#39; PowerPlatformEnvironment: \u0026#39;My Environment\u0026#39; SolutionName: \u0026#39;MySolution\u0026#39; SolutionOutputFile: \u0026#39;$(Build.ArtifactStagingDirectory)\\MySolution.zip\u0026#39; AsyncOperation: true MaxAsyncWaitTime: \u0026#39;60\u0026#39; - task: PowerPlatformUnpackSolution@0 inputs: SolutionInputFile: \u0026#39;$(Build.ArtifactStagingDirectory)\\MySolution.zip\u0026#39; SolutionTargetFolder: \u0026#39;$(Build.SourcesDirectory)\\JJG.MyProject\\MySolution\u0026#39; - task: CmdLine@2 inputs: script: | echo commit all changes git config user.email \u0026#34;devops@mydomain.com\u0026#34; git config user.name \u0026#34;Automatic Build\u0026#34; git checkout MyArea/MyBranch git add --all git commit -m \u0026#34;Latest solution changes.\u0026#34; echo push code to new repo git -c http.extraheader=\u0026#34;AUTHORIZATION: bearer $(System.AccessToken)\u0026#34; push origin MyArea/MyBranch While this will get the job done for us, it does have one glaring issue. What happens if we work with multiple solutions as part of our Dataverse project? Using the example above, we would have to go in and define a pipeline per solution to achieve our end goal. Not exactly an efficient solution, and one which can add to our problems from an overall management standpoint. Surely, it would be far better to be able to do all of this as part of a single pipeline instead?\nThankfully, it turns out that this scenario is entirely possible via YAML pipelines by simply leveraging two specific areas of functionality:\nParameters: As their name implies, parameters allows us to dynamically modify the flow of our pipelines based on values we define within them. We have access to several different data types for our parameters, including an object type that we can use to store a list of items. So, for our current purposes, we could use this to hold a list of solution files we want to work with, like so: parameters: - name: solutions type: object default: - DemoSolution - AnotherDemoSolution Each Keyword: Although YAML isn\u0026rsquo;t typically the territory for carrying out complex actions that need to be iterated through, the each keyword does provide us with the ability to execute one or multiple tasks, based on a list of values. 
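For anyone who has not combined these two features before, the stripped-back (and purely illustrative) pipeline below shows the general idea - at compile time, the each block expands into one copy of the nested task for every item in the parameter list (the pipeline, parameter and job names here are placeholders only):

trigger: none

parameters:
- name: environments
  type: object
  default:
  - Dev
  - Test

jobs:
- job: EchoEnvironments
  pool:
    vmImage: windows-latest
  steps:
  # The template expression below is expanded at compile time,
  # producing one CmdLine task per item in the environments list.
  - ${{ each environment in parameters.environments }}:
    - task: CmdLine@2
      displayName: Process ${{ environment }}
      inputs:
        script: echo Processing the ${{ environment }} environment...

Queuing this would produce two near-identical command line tasks - one for Dev and one for Test - without either having to be written out by hand.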
So, in this case, we can iterate through each of the values within our solutions parameter object using a snippet similar to this: steps: - ${{ each solution in parameters.solutions }}: #Do stuff here. Reference the current list item by using ${{ solution }} - task: CmdLine@2 inputs: script: | echo Processing ${{ solution }}... With knowledge of both of these features, we can then look to make adjustments to the above full YAML file as follows:\nname: $(TeamProject)_$(BuildDefinitionName)_$(SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r) trigger: none schedules: - cron: \u0026#34;0 20 * * *\u0026#34; displayName: Daily Build branches: include: - MyArea/MyBranch always: true parameters: - name: solutions type: object default: - DemoSolution - AnotherDemoSolution jobs: - job: ExtractMySolution pool: vmImage: \u0026#39;windows-latest\u0026#39; steps: - task: PowerPlatformToolInstaller@0 inputs: DefaultVersion: true - ${{ each solution in parameters.solutions }}: - task: PowerPlatformSetSolutionVersion@0 inputs: authenticationType: \u0026#39;PowerPlatformEnvironment\u0026#39; PowerPlatformEnvironment: \u0026#39;My Environment\u0026#39; SolutionName: \u0026#39;${{ solution }}\u0026#39; SolutionVersionNumber: \u0026#39;1.0.0.$(Build.BuildID)\u0026#39; - task: PowerPlatformExportSolution@0 inputs: authenticationType: \u0026#39;PowerPlatformEnvironment\u0026#39; PowerPlatformEnvironment: \u0026#39;My Environment\u0026#39; SolutionName: \u0026#39;${{ solution }}\u0026#39; SolutionOutputFile: \u0026#39;$(Build.ArtifactStagingDirectory)\\${{ solution }}.zip\u0026#39; AsyncOperation: true MaxAsyncWaitTime: \u0026#39;60\u0026#39; - task: PowerPlatformUnpackSolution@0 inputs: SolutionInputFile: \u0026#39;$(Build.ArtifactStagingDirectory)\\${{ solution }}.zip\u0026#39; SolutionTargetFolder: \u0026#39;$(Build.SourcesDirectory)\\JJG.MyProject\\${{ solution }}\u0026#39; - task: CmdLine@2 inputs: script: | echo commit all changes git config user.email \u0026#34;devops@mydomain.com\u0026#34; git config user.name \u0026#34;Automatic Build\u0026#34; git checkout MyArea/MyBranch git add --all git commit -m \u0026#34;Latest solution changes.\u0026#34; echo push code to new repo git -c http.extraheader=\u0026#34;AUTHORIZATION: bearer $(System.AccessToken)\u0026#34; push origin MyArea/MyBranch When your pipeline executes, it will then generate (in this example) six tasks to execute for each of the solution values you feed through. You can adjust the list defined in the YAML to add other solutions in as needed or, what\u0026rsquo;s even better, you can also override the list when triggering the pipeline manually:\nOverall (and if you don\u0026rsquo;t mind me saying so), I think this alternate approach is better. We\u0026rsquo;ll rarely be working with just a single solution within Microsoft Dataverse. And, ideally, if we are doing frequent extracts of our solutions into Git, it\u0026rsquo;s far better (arguably) to do this as part of one fell swoop instead of maintaining several pipelines. Despite some of the difficulties that YAML pipelines can present from an authoring standpoint, it\u0026rsquo;s hard to deny that it can be quite a powerful tool to use\u0026hellip;provided we know what it can do first, of course. 
🙂\n","date":"2022-03-20T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/extracting-microsoft-dataverse-solutions-using-yaml-revisited-azure-devops/","title":"Extracting Microsoft Dataverse Solutions Using YAML Revisited (Azure DevOps)"},{"content":"If you\u0026rsquo;ve been working any length of time with Dynamics 365 Customer Engagement / Microsoft Dataverse Application Users, you will be well versed in the process of setting them up. As a process still bound to the \u0026ldquo;classic\u0026rdquo; experience, we would spend some time navigating to the Application Users view, selecting to create a new record and populating the form with crucial details such as the Application (Client) ID taken from the Application Registration in Azure. A few steps involved but hardly something that would make a massive dent in your daily work provided you knew where to go. 😉\nHowever, you may notice we suddenly no longer have access to this view when we navigate into the classic interface:\nThis is because we now have a new way of creating and managing these, which is now available via the Power Platform Admin Center. This article goes into further details, with the changes described appearing to have been rolled out recently (on or around mid-February 2022). Rather strangely (or annoyingly, if you are glass half full kind of person), there doesn\u0026rsquo;t appear to have been much fanfare around the announcement, and it doesn\u0026rsquo;t even seem to get a mention as part of the 2022 release wave 1 plan. Notwithstanding this fact, I believe this is a change we can welcome. To start working with Application Users in the new way, we should navigate first into the Settings for our Dataverse environment, where we should now find ourselves greeted with a new option under the Users + Permissions area:\nI like this new experience because it simplifies the whole effort involved. Instead of having to log down the Application (Client) ID, the Platform will automatically return a list of all applicable Application Registrations you have access to on the Azure side:\nI suspect this does mean that if you don’t have access to the Application Registration, it will now be impossible for you to add this in yourself. So you may need to work with your AD Administrator to ensure that they complete these steps for you.\nUpdates that gradually wean ourselves off our dependency on the classic interface can only be embraced welcomingly. However, it is pretty frustrating that changes like this seem to be utterly non-relevant from an announcement standpoint. Yet, a change pertaining to a colour change on a seemingly random Dataverse component is put forward for all and sundry to digest. And by this, I don\u0026rsquo;t mean to trivialise the importance this UI change has from an accessibility standpoint; it\u0026rsquo;s just confusing that a change like the one being discussed in this very blog post seems to be completely unworthy of mention. But anyway, enough griping - if you weren\u0026rsquo;t aware before, now hopefully you are, and you can save yourselves 10-15 minutes of confusion wondering what has gone on. 
🙂\n","date":"2022-03-13T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/the-curious-case-of-the-missing-application-user-view-dynamics-365-microsoft-dataverse/","title":"The Curious Case of the Missing Application User View (Dynamics 365 / Microsoft Dataverse)"},{"content":"In today\u0026rsquo;s post, we will be taking a lot at some new functionality that I had the fortunate opportunity to play around with a short time ago. If you\u0026rsquo;re looking to enhance the way you store and manage log information relating to your Microsoft Dataverse plug-ins, then be sure to read on. 😀\nSo What\u0026rsquo;s the Deal? In the past, developers of plug-ins have been limited to storing telemetry information directly within Microsoft Dataverse via the capabilities on offer as part of trace logging. While this offers us the ability to debug and detect potential problems arising from any custom code we’ve written, attempting to work with this information outside of Dataverse has typically been challenging, if not impossible. To help address this need, it is now possible for developers to output detailed logging and telemetry information into Application Insights as well. This offers several distinct benefits, including:\nThe ability to easily extract and view this information via tools such as Power BI. Support for generating alert rules and custom actions, such as triggering a Logic App or Azure Function. No degradation in the quality of the outputted information; developers can expect to receive and work with the exact details currently available as part of trace logging. Combined together, this capability affords new opportunities for organisations to more closely integrate their Power Platform solution with any existing monitoring capabilities that may already exist within their Microsoft Azure environment.\nNow, it\u0026rsquo;s worth noting that all of this stuff is still currently in public preview, having been initially released in this manner in July last year as part of the Wave 1 2021 updates. Therefore, it’s recommended not to use this yet for any production scenarios; however, it represents an excellent opportunity for developers to review what\u0026rsquo;s on offer and take steps to consider using it in the future.\nGetting Started First, make sure you have an Application Insights resource stood up on Microsoft Azure, ideally within the same region as your Dataverse environment. From there, you will then need to navigate into the Power Platform Admin Center and navigate into the Data Export (preview) area:\nThen, select New Data export:\nOn the New data export pane, select the CDS diagnostics and performance option and choose the environment you want to configure the integration for:\nThen, on the final screen, select the Application Insights resource you created earlier and select Create - provided you have the relevant access, all possible Azure subscriptions/resource groups/resources will appear in the appropriate dropdowns:\n(It\u0026rsquo;s worth noting here that there is a strict 1:1 binding for Environment and Application Insights resource. Therefore, you may need to set up additional Application Insights resources to accommodate this restriction.)\nAnd that\u0026rsquo;s everything setup successfully, which we can confirm by navigating to the App Insights tab:\nPretty straightforward, I\u0026rsquo;m sure you\u0026rsquo;ll agree - provided you know where to find it, of course. 😉\nNext, we need to implement a plug-in that uses the ILogger Interface. 
This interface comes from the Microsoft.Xrm.Sdk.PluginTelemetry namespace in the Microsoft.Xrm.Sdk assembly, which any plug-in class library project should already reference; otherwise, add it in via the Microsoft.CrmSdk.CoreAssemblies NuGet package. From there, we can start to think about using this in our code. Let\u0026rsquo;s assume our plug-in project is called JJG.Plugins.AppInsightsSample. First, create a folder in the project called Helper and then add in the following custom class:\nusing Microsoft.Xrm.Sdk; using Microsoft.Xrm.Sdk.PluginTelemetry; using System; namespace JJG.Plugins.AppInsightsSample.Helper { /// \u0026lt;summary\u0026gt; /// Class used to consolidate all Tracing / Logger calls into a single action. /// \u0026lt;/summary\u0026gt; public class Logger { private readonly ITracingService tracer; private readonly ILogger logger; public Logger(IServiceProvider serviceProvider) { tracer = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); logger = (ILogger)serviceProvider.GetService(typeof(ILogger)); } /// \u0026lt;summary\u0026gt; /// Logs information out to the Trace Log and Application Insights resource for the current environment. /// \u0026lt;/summary\u0026gt; /// \u0026lt;param name=\u0026#34;log\u0026#34;\u0026gt;The actual message to log out.\u0026lt;/param\u0026gt; public void LogInformation(string log) { logger.LogInformation(log); tracer.Trace(log); } /// \u0026lt;summary\u0026gt; /// Logs an error out to the Trace Log and Application Insights resource for the current environment. /// \u0026lt;/summary\u0026gt; /// \u0026lt;param name=\u0026#34;log\u0026#34;\u0026gt;The actual message to log out.\u0026lt;/param\u0026gt; /// \u0026lt;param name=\u0026#34;ex\u0026#34;\u0026gt;Exception details to log out into Application Insights.\u0026lt;/param\u0026gt; public void LogError(string log, Exception ex) { logger.LogError(ex, log); tracer.Trace(log); } } } The class acts as a means of simplifying any calls out to the Logger and the plug-in trace log, as and when we need to. I would advise always writing out to both destinations so that any relevant information from a debugging standpoint is also available within Dynamics 365.\nNext, we\u0026rsquo;ll need to add in a plug-in that performs some action. In this case, we have a plug-in that goes off and creates a couple of Task rows on a Case, provided that the source of the Case does not equal Facebook:\nusing System; using System.ServiceModel; using Microsoft.Xrm.Sdk; using JJG.Plugins.AppInsightsSample.Helper; using System.Collections.Generic; namespace JJG.Plugins.AppInsightsSample { public class Case_OnUpdate_PushAppInsightsSample : IPlugin { private int facebookCaseOrigin = 2483; public void Execute(IServiceProvider serviceProvider) { //Obtain the execution context from the service provider.
IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); // Check for EntityType and Message supported by your Plug-In if (context.MessageName != \u0026#34;Update\u0026#34; \u0026amp;\u0026amp; context.PrimaryEntityName != \u0026#34;incident\u0026#34;) throw new InvalidPluginExecutionException($\u0026#34;Plug-In {this.GetType()} is not supported for message \u0026#34; + $\u0026#34;{context.MessageName} of {context.PrimaryEntityName}\u0026#34;); //Initialize Tracing / Logger for Application Insights - custom class used to simplify the calls Logger logger = new Logger(serviceProvider); logger.LogInformation(\u0026#34;Tracing / Logging implemented successfully!\u0026#34;); try { //For the sample, just create a couple of Tasks on the Case and output details to logging. //Also simulate an error scenario, based on the Case Origin value Entity incident = (Entity)context.InputParameters[\u0026#34;Target\u0026#34;]; Entity postIncident = context.PostEntityImages[\u0026#34;PostImage\u0026#34;]; logger.LogInformation($\u0026#34;Case ID: {incident.Id}\u0026#34;); OptionSetValue incidentOrigin = postIncident.GetAttributeValue\u0026lt;OptionSetValue\u0026gt;(\u0026#34;caseorigincode\u0026#34;); logger.LogInformation($\u0026#34;Origin (Value): {incidentOrigin.Value}\u0026#34;); if (incidentOrigin.Value == facebookCaseOrigin) { string error = $\u0026#34;Invalid Case Origin specified. Facebook Origin\u0026#39;s are not allowed.\u0026#34;; throw new InvalidPluginExecutionException(error); } else { logger.LogInformation($\u0026#34;Creating Tasks for Case ID {incident.Id}\u0026#34;); //Get a reference to the Organization service. IOrganizationServiceFactory factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)); IOrganizationService service = factory.CreateOrganizationService(context.UserId); List\u0026lt;int\u0026gt; taskNumbers = new List\u0026lt;int\u0026gt;() { 1,2,3,4 }; foreach(int number in taskNumbers) { logger.LogInformation($\u0026#34;Processing Task # {number}...\u0026#34;); Entity task = new Entity(\u0026#34;task\u0026#34;); task[\u0026#34;subject\u0026#34;] = $\u0026#34;Case Task # {number}\u0026#34;; task[\u0026#34;scheduledend\u0026#34;] = DateTime.UtcNow.AddDays(7); task[\u0026#34;regardingobjectid\u0026#34;] = new EntityReference(\u0026#34;incident\u0026#34;, incident.Id); Guid taskID = service.Create(task); logger.LogInformation($\u0026#34;Task ID {taskID} created successfully.\u0026#34;); } logger.LogInformation($\u0026#34;Plugin {this.GetType()} execution successfully completed!\u0026#34;); } } catch (InvalidPluginExecutionException pex) { logger.LogError(pex.Message, pex); throw; } catch (FaultException\u0026lt;OrganizationServiceFault\u0026gt; fex) { string message = fex.Detail.Message; string finalMessage = $\u0026#34;An error occurred in the {this.GetType().FullName}:{Environment.NewLine}{message}{Environment.NewLine}{fex}\u0026#34;; logger.LogError(finalMessage, fex); throw new InvalidPluginExecutionException(finalMessage, fex); } catch (Exception ex) { string finalMessage = $\u0026#34;An error occurred in the {this.GetType().FullName}:{Environment.NewLine}{ex}\u0026#34;; logger.LogError(finalMessage, ex); throw new InvalidPluginExecutionException(finalMessage, ex); } } } } This plug-in requires a post-operation step that should be registered on Update of any column on the Case table (it doesn\u0026rsquo;t matter which one). 
The plug-in step will also require a Post Image configuring, as indicated below:\nReviewing the Logs With logging enabled at the environment level, what we will find is that our Application Insights logs start filling up with execution details for every plug-in within the environment, alongside additional information relating to user requests, API calls and other related telemetry. We, therefore, need to write particular sets of queries to retrieve the information we want. The appropriate object types available to us include:\ndependencies: Stores details of all backend SDK operations executed in the environment, such as RetrieveMultiple, Update etc. exceptions: All custom / unhandled errors will be logged into here pageViews: Unrelated to our current purposes, we can use this object to see all actions carried out within a model-driven app by each user(s). requests: This object will log all Web API and Organisation Service Requests made against the environment. traces: Within here, every single message logged out using the LogInformation method will appear - one row for every single \u0026ldquo;trace\u0026rdquo;. To help get you started, I\u0026rsquo;ve provided a couple of examples below, alongside an indication of the results we can expect to receive. To execute these, make sure we are in the Logs area of our Application Insights resource:\nReturn All Errors for Specific Plug-in exceptions | order by timestamp | project timestamp, cloud_RoleInstance, customDimensions[\u0026#34;FormattedMessage\u0026#34;], problemId, [\u0026#39;type\u0026#39;], assembly, method, outerType, outerMessage, outerAssembly, outerMethod | where cloud_RoleInstance contains \u0026#34;SandboxRoleInstance\u0026#34; and method == \u0026#34;JJG.Plugins.AppInsightsSample.Case_OnUpdate_PushAppInsightsSample\u0026#34; Get All Traces for Specific Plug-in Execution traces | project timestamp, message, itemType, customDimensions, operation_Id, session_Id | where operation_Id == \u0026#34;918c491a-ffd6-4820-a432-3c332ed21248_9d7f35ed-376f-49a7-afa5-05bdc3d88ffe\u0026#34; Get All Traces for All Failed Plug-in Executions dependencies | where type == \u0026#34;Plugin\u0026#34; and name == \u0026#34;JJG.Plugins.AppInsightsSample.Case_OnUpdate_PushAppInsightsSample\u0026#34; and success == false | project operation_Id, timestamp, target | join (traces | project operation_Id, message ) on $left.operation_Id == $right.operation_Id | order by timestamp Further Resources Below are some links to the official Microsoft Docs on this subject - well worth a read, and I\u0026rsquo;m grateful to these posts in helping to put together this blog post:\nWrite Telemetry to your Application Insights resource using ILogger (Preview) Preview: Analyze model-driven apps and Microsoft Dataverse telemetry with Application Insights Preview: Set up exporting to Application Insights What do you think of this new functionality? Beneficial or too much hassle to work with? Let me know your thoughts in the comments below!\n","date":"2022-03-06T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/getting-started-with-microsoft-dataverse-plug-in-logging-in-application-insights/","title":"Getting Started with Microsoft Dataverse Plug-in Logging in Application Insights"},{"content":"Changes in Microsoft certification offerings is to be expected, primarily because we are working with platforms, products and services that are continually evolving in our cloud-first world. 
A few years back, we saw a significant change with the introduction of role-based certifications, first affecting Microsoft Azure and then rolling out more generally across Microsoft 365 and Business Applications. Microsoft also put to bed concerns around having to sit and pay for exam attempts every 1-2 years, thanks to the introduction of minimum validity periods for each certification and a corresponding, free online renewal assessment to maintain a certification. This change is welcome, particularly for those who run smaller partners (like me 😉) who can\u0026rsquo;t necessarily afford the cost of continual exam attempts at £113 a pop. And also, from a learning standpoint, it\u0026rsquo;s vital that technical professionals only need to brush up on what\u0026rsquo;s new since they last passed an exam, rather than retreading old ground.\nGiven this background and the commitments made, it was interesting to see the introduction of a replacement exam for the current DA-100: Analyzing Data with Microsoft Power BI exam. Effective tomorrow, we will have a new exam, PL-300: Microsoft Power BI Data Analyst, introduced. Changes like these are always worth a bit of dissection, as they can impact aspiring candidates and many partner organisations across the globe who rely on these certifications and teach Microsoft Official Courses (MOCs) aligned to each exam. Let\u0026rsquo;s dive in to see how big of a change we are talking about here and whether there\u0026rsquo;s anything to worry about\u0026hellip;\nSo What\u0026rsquo;s Changed? Whenever we want to see \u0026ldquo;what\u0026rsquo;s up\u0026rdquo; with a new exam, the first port of call is the Skills Measured document, which provides a breakdown of all subject areas that the exam will cover. At a high level, the table below illustrates the fundamental difference between both exams, in terms of subjects and - crucially - weighting (i.e. the expected number of questions we may receive):\nDA-100 | PL-300\nPrepare the Data (20-25%) | Prepare the Data (15-20%)\nModel the Data (25-30%) | Model the Data (30-35%)\nVisualize the Data (20-25%) | Visualize and Analyze the Data (25-30%)\nAnalyze the Data (10-15%) | Deploy and Maintain Assets (20-25%)\nDeploy and Maintain Deliverables (10-15%) | -\nSo broadly speaking, not much has changed. The main effort seems to have been to consolidate the number of areas, which explains why the Analyze and Visualize topics have been condensed down to one. As a result, there is now greater coverage of all topics in the exam. Other subjects have been dropped and moved into other areas instead. For example, the topic area titled Profile the data has been dropped entirely from PL-300 and merged into Clean, transform and load the data instead. This is a minor subject area in the grander scheme of things but does touch upon some helpful functionality that forms part of Power Query\u0026rsquo;s data profiling tools. Altogether, the effort seems to have been more towards a general re-organisation of the exam structure instead of driving to replace large portions and drop others entirely.\nHands Off Data Platform, Power BI is Power Platform Now It\u0026rsquo;s a minor thing of note, but the change of the exam code from DA to PL does now very clearly place Power BI firmly within the Power Platform, at least from a learning and marketing standpoint. It has felt over the years that Data Platform and Power Platform have been angry parents, fighting over who has the responsibility for poor little Power BI.
🤣 Perhaps with this change, we now know which parent has won out.\nHow to Earn the New Microsoft Certified: Power BI Data Analyst Associate Certification Alongside introducing the new exam, Microsoft has chosen to rename the existing Microsoft Certified: Data Analyst Associate to Microsoft Certified: Power BI Data Analyst Associate, as announced in a blog post last month. This particular change is welcome, as it means that those who have previously passed the DA-100 certification will not need to sit the new exam; instead, the name should automagically appear on your transcript once the changes go live. And, as I\u0026rsquo;ve been told, all existing holders of the certification will go through the same renewal process via an online assessment (i.e. there will be no future requirement to sit PL-300 to maintain your certification). So, in summary, you have three ways in which you can earn this certification:\nBy doing simply nothing, if you\u0026rsquo;ve passed DA-100 already. Sit and take the DA-100 exam before its expiry on March 31st 2022. Sit and take the new PL-300 exam after it goes live tomorrow (February 28th 2022) To Switch or Not to Switch? There\u0026rsquo;s a good chance that you may be preparing or have attended a DA-100 course and are considering switching to the new exam instead. My advice here is that if you’re sitting the exam on or before March 31st, proceed as planned. As we\u0026rsquo;ve noted already, the result will still be the same, and you can be confident that the current curriculum will get you into a ready state to attain a passing score. It’s also worth noting that new exams, particularly those in beta, can sometimes be a bit of a rocky road for new learners to grapple with, as Microsoft may not have addressed all errors. An interesting thing to note is that the current exam page for PL-300, compared to other newly released exams, doesn\u0026rsquo;t make any reference to the exam being released initially in beta:\nThis may suggest that the new exam will not go through this formal cycle and that, content/question wise, it will be operating off the same bank that DA-100 is currently using. I\u0026rsquo;m purely speculating here, so don’t make any lazy assumptions.\nOne outstanding issue about switching from DA-100 to PL-300, which I\u0026rsquo;m not sure about (answers on a postcard if you know), is if candidates currently booked for a DA-100 exam on, say, March 21st can then look to rebook for PL-300 on April 6th. This could be important, as there\u0026rsquo;s a potential for candidates to lose money or the validity of any free voucher they\u0026rsquo;ve been given. To avoid this issue, I\u0026rsquo;d recommend attempting the exam at some stage in March, at the latest date you are most comfortable with.\nConclusions or Wot I Think To summarise, the overall change here seems to be more of rebadging and marketing exercise, more than anything. Existing candidates should, I hope, have little to fear as part of their journey, and I fully expect that the new exam and resulting MOC course will be virtually indistinguishable from what we have available currently. Hopefully, this should provide some satisfaction if you worry about these changes. To quote an excellent British phrase, it very much looks like we can Keep Calm and Carry On and, indeed, if you have your DA-100 exam booked for sometime in March, go ahead as planned.\nAre you planning to sit the new PL-300 exam? What do you think about these latest changes? 
Let me know your thoughts in the comments below!\n","date":"2022-02-27T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/da-100-versus-pl-300-reviewing-the-new-power-bi-data-analyst/","title":"DA-100 versus PL-300: Reviewing the new Power BI Data Analyst Certification"},{"content":"One of the topics I get asked about frequently by those taking or exploring Microsoft certification for the first time is exam questions and whether any samples are available. For a long time, this has been a potentially taboo topic. Per the Microsoft Exam Non-Disclosure Agreement, you have to agree to when sitting an exam, discussion of such a topic would put the offending individual at severe risk of getting banned from taking an exam again in the future. As I\u0026rsquo;m sure you\u0026rsquo;ll agree, this is a reasonable step to ensure that the integrity of the exam process is protected from unscrupulous individuals. So the best you could hope for is to purchase a MeasureUp practice test to get a feel for how difficult the exam may be and the types of questions you may expect. Unfortunately, not everyone has the money available to purchase these, putting particular candidates at a disadvantage compared to others. Worst, it could perhaps force them to go down more dangerous routes, such as using brain dumps (which you should never, ever use, by the way).\nWhether there has been a change of opinion on this topic or perhaps in an attempt to put everyone on equal footing, I was surprised to notice the other day that all of the currently available Fundamentals exams now have a link to some free sample questions:\nProspective candidates are free to use these as they see fit as part of their revision process. You can access the respective sample question list for each exam by using the links below:\nExam AI-900: Microsoft Azure AI Fundamentals Exam AZ-900: Microsoft Azure Fundamentals Exam DP-900: Microsoft Azure Data Fundamentals Exam MB-910: Microsoft Dynamics 365 Fundamentals (CRM) Exam MB-920: Microsoft Dynamics 365 Fundamentals (ERP) Exam MS-900: Microsoft 365 Fundamentals Exam PL-900: Microsoft Power Platform Fundamentals Exam SC-900: Microsoft Security, Compliance, and Identity Fundamentals As Microsoft notes clearly, these questions will bear little or no resemblance to what you will see on the exam itself (for the reasons stated previously). Still, they act as an inexpensive means of gauging your readiness for a particular exam. Personally speaking, I don\u0026rsquo;t benefit much from example questions as a revision tool when preparing for an exam. But each to their own, I guess. 😄 It will also be interesting to see whether other, more challenging exams get the same treatment in the future. You can provide feedback directly to Microsoft if you think this is a good idea and would like to see more of this in the future.\nWhat are your thoughts on this change? Will it make it easier for you to contemplate sitting any of these exams in the future? Let me know what you think in the comments below!\n","date":"2022-02-20T00:00:00Z","image":"/images/Microsoft-FI.png","permalink":"/accessing-free-sample-questions-for-microsoft-fundamentals-exams/","title":"Accessing Free Sample Questions for Microsoft Fundamentals Exams"},{"content":"Suppose an organisation uses Dynamics 365 Sales and wants to dip its toe into the proverbial Power BI pool. 
In that case, there\u0026rsquo;s a good chance that they will first take a look at the following template apps that are available via AppSource:\nSales Analytics for Dynamics 365 Sales Process Analytics for Dynamics 365 These apps provide a quick way to start working with your Dynamics 365 Sales data within Power BI and showcase some of the product\u0026rsquo;s core capabilities. However, imagine that our business has strayed a little too far away from native functionality within the application, perhaps via extensive customisation. In this situation, you could find these apps challenging to work with or wonder why your data is not surfacing correctly. This is one of the critical drawbacks of simply installing these apps straightaway into our environment. As a result, their potential utility extends only for demonstrations and deployments that adhere very rigidly to how Dynamics 365 Sales works natively. Although we talk a lot about this being our end goal when deploying the solution, I\u0026rsquo;ve yet to see a situation where a business is 100% using the app as intended. 😉\nSo what happens if we are super duper keen on using the Sales Analytics or Process Analytics reports and have a need to customise them ourselves? Unfortunately, we cannot extract the underlying PBIX files from the installed apps within Power BI Online. Instead, what we can do is consult the following Microsoft Docs article and download these ourselves to work with:\nSales Analytics for Dynamics 365 Sales Process Analytics for Dynamics 365 Once downloaded, we are free to customise these as we see fit, perhaps by including any additional columns/tables we are using or by adding different visualisation types that may address our requirements. There are a couple of things to note when working with these template files:\nThe files all use the OData connector targeting the Microsoft Dataverse Web API instead of the legacy or revamped Dataverse connector. The OData connector won\u0026rsquo;t support Direct Query capability or other functionality we expect from the alternate connectors; you would need to make the manual modifications to switch over to these if required. If you\u0026rsquo;re wondering why no data loads when you open the file for the first time, make sure you go into the Power Query Editor and supply the correct URL for your environment via the D365_Sales_URL parameter: From there, select one of the tables (such as Account) to provide your login details, if needed.\nAll connections made into your Dynamics 365 environment are done via the version 8.2 Web API endpoint instead of the current 9.2 version. My recommendation would be to migrate towards the latest version, if possible. Other than that, the files should be pretty straightforward for anyone with experience working with Power BI Desktop to pick up and customise further. I was both surprised and thankful that Microsoft has made these templates available in this manner. This will come in handy in the future, for more complex demos or for where we need to make a minor modification to satisfy a customer requirement. 😁 Here\u0026rsquo;s hoping these templates are updated in the future to migrate towards the modern Dataverse connector and setup to leverage Direct Query instead. 
🤞\n","date":"2022-02-13T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/customising-the-dynamics-365-sales-power-bi-template-apps/","title":"Customising the Dynamics 365 Sales Power BI Template Apps"},{"content":"I\u0026rsquo;ve been doing some serious playing around with the Microsoft Power Platform CLI recently. It\u0026rsquo;s come a long way. Previously, this tool was primarily used as our main companion when building out Power Apps Component Framework (PCF) controls. Today, the CLI has been enhanced in several key areas, increasing its flexibility and establishing itself as a less cumbersome option than the traditional SDK tools that longstanding Dynamics CRM / Dynamics 365 developers may be accustomed to working with. This week in the blog, I wanted to dive into a specific area of this tooling that we can use to support our Application Lifecycle Management (ALM) processes involving Microsoft Dataverse - namely, solution settings files.\nWhat is a Solution Settings File? For anyone familiar with importing solutions manually, we\u0026rsquo;ll recall that we\u0026rsquo;re typically asked to specify Environment Variable values and Connection Reference details whenever we import a solution:\nSpecifying these settings is crucial, as they will dictate how our various apps and automations operate. Therefore, it\u0026rsquo;s natural that we would want a mechanism to specify these in all deployment scenarios, particularly ones that involve Azure DevOps and the Power Platform Build Tools. Previously, we could not set these, meaning that our solution deployments potentially ended up within an undesirable state, requiring manual intervention and, in some cases, the creation of unintentional unmanaged layers on our various solution components. With a settings file, we can now non-interactively specify the settings outlined in the screenshot above. To better understand how settings files work, let\u0026rsquo;s dive into an end-to-end scenario, where we demonstrate how to generate and leverage them.\nExport Solution The first step before kicking things off is to export a suitable solution from our environment. For this example, we are working with a solution that contains a couple of Environment Variables and Connection References:\nWe can either export the solution manually from the maker portal or do this via a CLI command instead. First, let\u0026rsquo;s set up an authentication profile and ensure we set it as our default:\npac auth create -n CLIDemo -u https://mydataverseorg.crm4.dynamics.com/ pac auth select -n CLIDemo You will be prompted to log in via an interactive prompt. Once complete, we\u0026rsquo;ll be able to run the following command to export our solution as managed:\npac solution export -p C:\\MySolutionFolder -n MySolution -m true Generate the File With a solution file to work with, we can now proceed to generate the settings file via the following command\npac solution create-settings -z C:\\MySolutionFolder\\MySolution.zip -s C:\\MySolutionFolder\\MySettings.json What\u0026rsquo;s nice about this command is that we can also run against the expanded contents of our solution. 
Let\u0026rsquo;s assume we\u0026rsquo;ve done that via the following command:\npac solution unpack -z C:\\MySolutionFolder\\MySolution.zip -f C:\\MySolutionFolder\\MySolution\\ -p Managed We can then make some slight changes to our create-settings command, as indicated below:\npac solution create-settings -f C:\\MySolutionFolder\\MySolution\\ -s C:\\Users\\joejg\\Downloads\\MySettings.json Regardless of which route we go down, the result will be a file similar to this:\nEvaluating the File The file is a simple JSON document which, based on the example components below, would resemble the following:\n{ \u0026#34;EnvironmentVariables\u0026#34;: [ { \u0026#34;SchemaName\u0026#34;: \u0026#34;jjg_DecimalEV\u0026#34;, \u0026#34;Value\u0026#34;: \u0026#34;\u0026#34; }, { \u0026#34;SchemaName\u0026#34;: \u0026#34;jjg_TextEV\u0026#34;, \u0026#34;Value\u0026#34;: \u0026#34;\u0026#34; }, { \u0026#34;SchemaName\u0026#34;: \u0026#34;jjg_YesNoEV\u0026#34;, \u0026#34;Value\u0026#34;: \u0026#34;\u0026#34; } ], \u0026#34;ConnectionReferences\u0026#34;: [ { \u0026#34;LogicalName\u0026#34;: \u0026#34;jjg_AzureBlobStorage\u0026#34;, \u0026#34;ConnectionId\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;ConnectorId\u0026#34;: \u0026#34;/providers/Microsoft.PowerApps/apis/shared_azureblob\u0026#34; }, { \u0026#34;LogicalName\u0026#34;: \u0026#34;jjg_Dataverse\u0026#34;, \u0026#34;ConnectionId\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;ConnectorId\u0026#34;: \u0026#34;/providers/Microsoft.PowerApps/apis/shared_commondataserviceforapps\u0026#34; } ] } The critical elements of the file include:\nA list of all the Environment Variables found in the solution All of the Connection References contained in the solution, with details for the relevant API\u0026rsquo;s they relate to The file is designed to be edited and allows us to specify what we desire the current values of our Environment Variables need to be\u0026hellip;\n{ \u0026#34;SchemaName\u0026#34;: \u0026#34;jjg_DecimalEV\u0026#34;, \u0026#34;Value\u0026#34;: 12.5 }, { \u0026#34;SchemaName\u0026#34;: \u0026#34;jjg_TextEV\u0026#34;, \u0026#34;Value\u0026#34;: \u0026#34;Sample Text\u0026#34; }, { \u0026#34;SchemaName\u0026#34;: \u0026#34;jjg_YesNoEV\u0026#34;, \u0026#34;Value\u0026#34;: true } \u0026hellip;and likewise, for our Connection References. The steps here are a little bit more involved, though. It\u0026rsquo;s expected the relevant Connections for our Connection References already exist in the target environment. So, in this case, let\u0026rsquo;s assume I have the following two connections set up for Dataverse and Azure Blob Storage:\nWe need to navigate into each of these and then extract out the URL, which will resemble the following:\nhttps://make.powerapps.com/environments/752c0a39-e2eb-4140-9ce3-31a65228b3b2/connections/shared_azureblob/a0b52ba3bd6b420c92bd4826e4601658/details# The piece of information we\u0026rsquo;re interested in is contained in the penultimate portion of the URL, between /shared_azureblob and /details#, which we can then add onto the ConnectionId property accordingly:\n{ \u0026#34;LogicalName\u0026#34;: \u0026#34;jjg_AzureBlobStorage\u0026#34;, \u0026#34;ConnectionId\u0026#34;: \u0026#34;a0b52ba3bd6b420c92bd4826e4601658\u0026#34;, \u0026#34;ConnectorId\u0026#34;: \u0026#34;/providers/Microsoft.PowerApps/apis/shared_azureblob\u0026#34; } Using the Settings File During a Deployment As we\u0026rsquo;ve discussed already, the settings file\u0026rsquo;s main purpose is to help in automation scenarios. 
This means we can leverage it when importing a solution via the CLI:\npac solution import -p C:\\MySolutionFolder\\MySolution.zip -s C:\\MySolutionFolder\\MySettings.json However, the most typical usage scenario will be from an Azure DevOps perspective. As part of the Import Solution task, we\u0026rsquo;ll see we\u0026rsquo;ve got an option to indicate we plan to use our settings file, with a link to where it\u0026rsquo;s located in our build artefact:\nAfter our import is complete, we can observe that our Environment Variables have the desired current values and that our Connection References are linked to our desired connection.\nConclusion or Wot I Think Settings files undoubtedly provide a crucial piece of the puzzle when implementing healthy and automated ALM involving the Power Platform. I have a gripe that there is still no (easy) way to programmatically create the underlying connections that we can then use as part of our Connection References. Granted, many connection types will always require some form of interactive authentication. Still, this gap does mean we have to do a degree of preparation against our targeted environments to ensure our automation works as expected. And, when it comes to weighing up between using Power Automate cloud flows or Azure Logic Apps, this gap does weigh towards us considering the latter option, as this is typically not a problem we face. Nevertheless, I would encourage organisations to leverage settings files as part of their automated DevOps deployments to begin working smarter, not harder. 😉\n","date":"2022-02-06T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/generating-solution-settings-file-via-the-microsoft-power-platform-cli/","title":"Generating Solution Settings File via the Microsoft Power Platform CLI"},{"content":"Suppose you have a sizeable investment within Microsoft Azure and, as part of this, you\u0026rsquo;re authenticating between different services within the platform. In that case, it\u0026rsquo;s essential to leverage Managed Identities as much as possible. They provide several distinct yet valuable benefits compared to different authentication approaches typically made available to us. For example:\nThey negate the need for us to maintain, recycle and put at risk credentials to access our different services on Azure; instead, the platform seamlessly handles authentication from one service to the other, with no need to perform additional authentication or authorization. They support full compatibility with Azure Role-Based Access Controls (RBAC), thereby allowing us to assign them targeted roles against our resource(s) in much the same way as a standard user account. When leveraging system-assigned Managed Identities, Microsoft will automatically handle the persistence and eventual removal of the identity should we choose to delete the resource in the future. Like all the best things in life, there is no additional cost for their usage across all supported resource types. Typical scenarios where I like to leverage them include having Azure Data Factory access a specific blob container or an Azure resource with a dedicated service account on a SQL Database, with the correct set of privileges assigned. There are many diverse usage scenarios for Managed Identities, and developers have no good excuse not to explore their potential usage when building out their solution.\nI say explore very deliberately here. 😏 There will be some scenarios where they are not supported, or we face potential difficulty rolling them out. 
I had this very issue recently when I was attempting to deploy a Logic App leveraging a SQL Server Managed Identity connection. As this functionality is still in preview, it\u0026rsquo;s not unreasonable to expect that we may encounter some difficulties with it. The main challenge was defining the correct configuration within my Resource Manager (RM) template file. After some trial, error and poking around within the REST API, the solution turned out to be specific parameterValueSet values that we had to define when creating the appropriate Microsoft.Web/connections resource. Below is an example of how this needs to look:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/connections\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-06-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MySQLServerConnection\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;SQL Server API Connection\u0026#34;, \u0026#34;customParameterValues\u0026#34;: {}, \u0026#34;api\u0026#34;: { \u0026#34;id\u0026#34;: \u0026#34;[concat(\u0026#39;/subscriptions/\u0026#39;, subscription().subscriptionId, \u0026#39;/providers/Microsoft.Web/locations/\u0026#39;, resourceGroup().location, \u0026#39;/managedApis/sql\u0026#39;)]\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/locations/managedApis\u0026#34; }, \u0026#34;parameterValueSet\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;oauthMI\u0026#34;, \u0026#34;values\u0026#34;: {} } }, \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;SQL Server API Connection\u0026#34; } } From there, we can then look to leverage this as part of our Logic App in the following manner:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Logic/workflows\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2017-07-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyLogicApp\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;Sample Logic App with SQL Server MI Connection\u0026#34;, }, \u0026#34;identity\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;SystemAssigned\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;state\u0026#34;: \u0026#34;Enabled\u0026#34;, \u0026#34;definition\u0026#34;: { \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;parameters\u0026#34;: { \u0026#34;$sql_ServerName\u0026#34;: { \u0026#34;defaultValue\u0026#34;: {}, \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34; } }, \u0026#34;triggers\u0026#34;: { //TODO: Insert trigger action here... 
}, \u0026#34;actions\u0026#34;: { \u0026#34;Sample_SQL_Action\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;body\u0026#34;: { \u0026#34;MyColumn1\u0026#34;: \u0026#34;Test123\u0026#34;, \u0026#34;MyColumn2\u0026#34;: \u0026#34;Test123\u0026#34; }, \u0026#34;host\u0026#34;: { \u0026#34;connection\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;@parameters(\u0026#39;$connections\u0026#39;)[\u0026#39;sql\u0026#39;][\u0026#39;connectionId\u0026#39;]\u0026#34; } }, \u0026#34;method\u0026#34;: \u0026#34;post\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;/v2/datasets/@{encodeURIComponent(encodeURIComponent(parameters(\u0026#39;$sql_ServerName\u0026#39;)))},@{encodeURIComponent(encodeURIComponent(\u0026#39;MyDatabase\u0026#39;))}/tables/@{encodeURIComponent(encodeURIComponent(\u0026#39;[dbo].[MyTable]\u0026#39;))}/items\u0026#34; }, \u0026#34;runAfter\u0026#34;: {}, \u0026#34;type\u0026#34;: \u0026#34;ApiConnection\u0026#34; } }, \u0026#34;outputs\u0026#34;: {} }, \u0026#34;parameters\u0026#34;: { \u0026#34;$connections\u0026#34;: { \u0026#34;value\u0026#34;: { \u0026#34;sql\u0026#34;: { \u0026#34;connectionId\u0026#34;: \u0026#34;[resourceId(\u0026#39;Microsoft.Web/connections\u0026#39;, variables(\u0026#39;MySQLServerConnection\u0026#39;))]\u0026#34;, \u0026#34;connectionName\u0026#34;: \u0026#34;MySQLServerConnection\u0026#34;, \u0026#34;connectionProperties\u0026#34;: { \u0026#34;authentication\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;ManagedServiceIdentity\u0026#34; } }, \u0026#34;id\u0026#34;: \u0026#34;[concat(\u0026#39;/subscriptions/\u0026#39;, subscription().subscriptionId, \u0026#39;/providers/Microsoft.Web/locations/\u0026#39;, resourceGroup().location, \u0026#39;/managedApis/sql\u0026#39;)]\u0026#34; } } }, \u0026#34;$sql_ServerName\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;[concat(variables(\u0026#39;MySQLResourceName\u0026#39;), \u0026#39;.database.windows.net\u0026#39;)]\u0026#34; } } } } The critical thing to remember here is to ensure that we don\u0026rsquo;t miss out on the identity options on the Logic App resource. This is the crucial element that will enable the Managed Identity for our Logic App. From there, everything should deploy out cleanly and, after creating the appropriate EXTERNAL PROVIDER user account in our database, our Logic App will have little trouble getting into our database.\nDespite the extra legwork involved in getting this particular example working, this does not detract from my original assertion on the benefit of Managed Identities. Any time invested in coming up with this work-around was well invested. And ultimately, the result is a solution that reduces complexity by negating the need to store credentials in the Logic App itself or a Key Vault and makes it easier for us to manage the security of our database user accounts without any compromise involved. I\u0026rsquo;ll drink to that. 😉🍻\n","date":"2022-01-30T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/creating-a-sql-server-managed-identity-connection-in-azure-resource-manager-templates/","title":"Creating a SQL Server Managed Identity Connection in Azure Resource Manager Templates"},{"content":"Welcome to the sixth post in my series focused on providing revision notes for the PL-600: Microsoft Power Platform Solution Architect exam. Last time around, we dove into data modelling and what potential routes are open to us as solution architects when working with the Power Platform. 
This week, we move onto the next topic, which concerns integrations and how we can best approach designing them:\nDesign integrations\ndesign collaboration integrations design integrations between Microsoft Power Platform solutions and Dynamics 365 apps design integrations with an organization\u0026rsquo;s existing systems design third-party integrations design an authentication strategy design a business continuity strategy identify opportunities to integrate and extend Microsoft Power Platform solutions by using Microsoft Azure Although the Power Platform is billed primarily as a citizen developer platform, we will, at times, need to leverage pro-code extensibility or other \u0026ldquo;fusion\u0026rdquo; development approaches to help build out our solution. And as part of this, we will likely need to factor in one or several different integrations involving our existing business systems. Let\u0026rsquo;s unwind this in further detail, and look at some of the routes available to us to help meet the challenges that may arise.\nThe aim of this post, and the entire series, is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well. And, given the nature of this exam, it\u0026rsquo;s expected that you already have the necessary skills as a Power Platform Developer or Functional Consultant, with the certification to match.\nFostering Collaboration within the Power Platform When building any form of business application, the central aim is to ensure that the organisation can better understand the critical information points relating to our customers and core business. As part of this, we can look to foster better collaboration by ensuring that we equip staff members with the correct information at the right time. From there, we can have the appropriate discussions to celebrate a particularly successful quarter or to start figuring out why sales dropped during February. These \u0026ldquo;side-benefits\u0026rdquo; will naturally materialise as we begin to deploy our Power Apps, Power BI Reports, and other components. A solution architect does not necessarily need to do anything specifically to help realise this. However, it is helpful that we keep in mind some of the following key features across the Power Platform that we could guide the organisation to start using to improve collaboration further:\nMicrosoft Teams Integration: There are many ways in which we can easily embed our Power App, Power BI Reports/Dashboards and other components anywhere we want to within Microsoft Teams. If we\u0026rsquo;re starting out with the Power Platform, we can also look to leverage Dataverse for Teams to build out our initial solution and migrate these across into a full Dataverse environment when the time is right. By considering Microsoft Teams as part of our overall solution, we can make it easier for our users to carry out all of their crucial work from within a single user interface and allow them to message colleagues, arrange meetings or share files easily. Comments / Discussions: Each service within the Power Platform has its specific ways to achieve this, but we can, for example, very quickly add comments and have discussions targeting our Power BI Reports or Dashboards. 
This functionality allows us to contextualise our conversations better, and, in most cases, it will be as straightforward to use as comments in Microsoft Word. Server-Side Synchronization: When we want to integrate users Email mailboxes within Dataverse and perform synchronisation of emails, contacts and tasks, setting up this functionality achieves this aim and improves collaboration by ensuring all relevant information points are ultimately surfaced within Dataverse. As we\u0026rsquo;ve established previously in the series, Power Platform solution architects are very much subject matter experts, or SME\u0026rsquo;s, with a natural expectation that we can advise and encourage adoption of the above elements whenever we can establish a suitable usage case. Ensure we keep the above in the back of our minds whenever we want to promote further collaboration within our Power Platform solutions.\nIntegrating Dynamics 365 Customer Engagement with the Power Platform We\u0026rsquo;ve discussed the core features previously within each Dynamics 365 Customer Engagement app, and how we can factor these apps in as part of our overall solution, so there\u0026rsquo;s no point retreading old ground. Suffice to say, our journey and considerations when it comes to integrating alongside any of the Dynamics 365 Customer Engagement apps are simplified because they leverage many of the base components within the Power Platform. It becomes effortless for us to integrate both toolsets and trigger, for example, a Power Automate cloud flow on creating a new Dynamics 365 Sales Lead row. A solution architect may need to consider and plan contingency actions relating to how we use Dynamics 365 components and whether it may be appropriate to set up different environments to \u0026ldquo;contain\u0026rdquo; this functionality within a specific business area. Overall, these challenges shouldn\u0026rsquo;t present much difficulty in overcoming.\nTo get started with any Dynamics 365 Customer Engagement app, we need to fully appreciate the capabilities on offer as part of the Power Platform Admin Center. This portal provides us with a central location to manage each of our Dataverse environments, and, through here, we can look to install one or all of the different Dynamics 365 apps available to us. The only real constraint is to ensure we have the appropriate licenses provisioned on the tenant for the app(s) we want to install.\nDemo: Power Platform Admin Center Overview To help better understand how we can link together one (or several) Dynamics 365 Customer Engagement applications within our Power Platform solution, take a look at the video below, which provides a detailed walkthrough of what we can do within the Power Platform Admin Center:\nOutline Approaches to Integrations Anything involving some form of integration, from an IT standpoint, will typically include some form of collective moan. 😫 This reaction can be understandable. If you\u0026rsquo;ve worked on similar projects that I have, technical integrations very often are easier said than done and can end up being incredibly tedious to work with. The Power Platform is no different in this regard. Although we have tools at our disposal to help ease us on our journey, we will often find ourselves grappling with particularly tedious integrations. 
And as part of this, we may need to determine which approach to go down - do we leverage \u0026ldquo;out of the box\u0026rdquo; components exclusively, or do we invest in building a bespoke integration using custom code?\nUnder the first approach, we can work in a relatively brisk fashion and leverage all platform aspects to their fullest extent. In this regard, Power Automate becomes a natural tool for us to turn to, thanks to its wide array of connectors and its visual designer. In addition, we can look to empower our functional consultants or citizen developers to build out these integrations themselves and significantly reduce our reliance on custom code as part of the integration we are building out. This will reduce the technical cost of our solution and make it easier to maintain in the future, by ourselves and Microsoft as well. Despite this, integrations like these can introduce verbosity into our solutions and stretch what out-of-the-box components can reasonably cope with. They can also leave our solution borderline indecipherable. I\u0026rsquo;ve seen my fair share of \u0026ldquo;spaghetti flows\u0026rdquo;, which take a whole day to digest and understand fully. Based on this, I feel qualified to make this assertion. 😉\nAlternatively, we can invest heavily into pro-code or \u0026ldquo;fusion\u0026rdquo; development efforts to implement our particular integration. Indeed, this may be our only course of action if we are working with a particularly troublesome integration involving a legacy application system or where we envision a significant amount of transformation effort for our data. Integrations of this nature tend to be a lot more formalised from an IT standpoint, thereby making them easier to include as part of any testing or Application Lifecycle Management (ALM) processes that we follow. The challenge with these integrations is that they can sometimes involve a helluva lot of custom code and bespoke development effort, making our overall solution more difficult to maintain. Therefore, it starts to become a tricky trade-off, and we could accidentally introduce further cost as part of what we\u0026rsquo;ve built out in the future.\nRegardless of the different approaches available to us, the solution architect\u0026rsquo;s primary concern is to maintain a careful balancing act. By this, I mean ensuring that we don\u0026rsquo;t fall too far into one pathway and that we cleverly leverage the platform\u0026rsquo;s specific capabilities based on the scenario we are faced with. For example, by using a custom connector, our pro-code developers can leverage their knowledge to build a connector that will work with our legacy application system and become something that our citizen developers can leverage as part of the apps and cloud flows they create. By thinking carefully and planning our integrations correctly, we can hopefully end up in a position where we are minimising bespoke development efforts and exploiting the capabilities of the Power Platform to its fullest extent.\nAuthentication Overview Let\u0026rsquo;s suppose our organisation does not currently leverage any other Microsoft 365 service or Azure Active Directory (AAD). In this situation, it could be that our journey into the Power Platform provides our first opportunity to work with this identity platform.
Consequently, we may need to consider and advise the business on the available capabilities and how they can be used to meet (and exceed) our expectations from an information security standpoint. The topic of AAD could be the subject of an entire blog series but, to summarise, this identity provider gives us the following benefits:\nOAuth 2.0 Support: As a thoroughly modern identity provider, AAD supports the latest version of the Open Authorization (OAuth) standard, providing a consistent and familiar standard for developers working with multiple cloud providers. Specifically for AAD, we also have support for flexible authentication flows that can leverage standard user accounts, service accounts or service principals. Multi-Factor Authentication: To help provide an additional security layer for all login attempts, administrators can require end-users to set up multi-factor authentication for their accounts. This will mandate that the user provides a second piece of information during their logon attempt - typically a one-time passcode sent to a mobile device. Users can look to install the Microsoft Authenticator app onto their mobile devices, which will allow them to approve or deny a particular login attempt. Conditional Access Policies: To provide an additional security layer, AAD administrators can configure different access policies for the various cloud services we implement. As part of this, we can look to block access to certain services based on a device\u0026rsquo;s location or, if it is outside the corporate network, mandate the use of MFA in these scenarios. There are several flexible options at our disposal. Federation / Single Sign On (SSO): Most applicable for scenarios where we have an existing, on-premise Active Directory forest, implementation of this will help to ensure that users are not continually prompted to sign in to the Power Platform; instead, simply logging into a domain-joined device will be all that\u0026rsquo;s needed. A solution architect will need to advise on implementing one, several, or all of the features on offer as part of AAD to ensure that our Power Platform users can securely and straightforwardly access the applications built for them. At this stage, it may be appropriate for us to bring in the expertise of dedicated AAD consultants or an Azure solution architect to advise further; because let\u0026rsquo;s face it, we can\u0026rsquo;t expect to be domain experts on this topic as well. 😉 Therefore, don\u0026rsquo;t worry too much about grasping AAD in depth.\nEnsuring Business Continuity with the Power Platform Because the Power Platform is a fully managed, software-as-a-service (SaaS) offering, Microsoft will handle many aspects of the platform for us and provide us with a level of guarantee for business continuity in a severe outage or a disaster recovery scenario. Notwithstanding these guarantees, there may still be steps that we, as solution architects, need to implement to ensure our solution continues to work if the unforeseeable occurs. There are several considerations that we should take into account here:\nIs it sufficient to rely on the system Dataverse backups that Microsoft performs on our behalf, or do we need to create our own manual backups on top of this? Using backups to restore our environment, potentially to a different region, will be something we need to plan for in a disaster recovery scenario.
It may also be prudent to consider and implement backups to other locations, such as an on-premise location or another public cloud provider, but there may be some technical challenges to overcome if this is required (given that we can\u0026rsquo;t just download a backup of our Dataverse environment). Ensuring we have all aspects of our solution stored in source control, leveraging the approaches we\u0026rsquo;ve spoken about previously in the series, will allow us to perform \u0026ldquo;clean slate\u0026rdquo; restores, should the need arise. As part of this, we\u0026rsquo;ll need to consider restoring any configuration and business data. Typically, as part of disaster recovery planning, we need to work towards a specific recovery time objective (RTO), which provides the organisation with an indication of the amount of time it will take to get us back to normal. Ensuring that we communicate an accurate timescale here and, most crucially, that we\u0026rsquo;ve performed a full test to verify any assumptions will be vital. Many of the requirements here will be largely dictated by the organisation in question, its size and industry type, and no two projects or businesses are the same in this regard. Therefore, the solution architect may have to spend considerable time planning and implementing bespoke technical solutions to ensure we can maintain business continuity at all times.\nAzure Integration Options Microsoft Azure provides us with our most straightforward mechanism to extend out the capabilities of the Power Platform and should always be the first point of consideration for any solution architect. There are innumerable ways we can leverage Azure in this fashion, but, most typically, the following types of integrations will be ones that we would normally recommend as solution architects:\nAzure Synapse Link: Previously, if we wanted to export our Dataverse database data out into Azure continuously, we would have to look at options such as the Data Export service. Given now, however, that this service is deprecated, we can instead look to leverage Azure Synapse Link to export our data out into an Azure Data Lake Storage Gen2 location and then surface this within an Azure Synapse Analytics Workspace. This will then allow us to consume our Dataverse environment data as part of any \u0026ldquo;big data\u0026rdquo; processing solutions or simply get this data fed into our organisation\u0026rsquo;s existing data warehouses. Connectors: We have well over 350+ different connectors available for us as part of our canvas Power Apps, or Power Automate flows and, as part of this, we also have access to a variety of Azure-based services, such as Azure Blob Storage, Azure Key Vault, Azure Data Factory and more. It, therefore, becomes relatively trivial for citizen developers to extend their solutions into these services as and when the need arises. Dataverse Extensibility: Developers of Dataverse plug-ins can look to extend out plug-ins and execute them within Microsoft Azure by exporting the complete transaction details into Azure Service Bus or by calling an HTTP endpoint residing on an Azure Function or similar. This can be useful in circumnavigating some of the sandbox limitations relating to plug-in execution and as a mechanism to \u0026ldquo;feed\u0026rdquo; Dataverse data out into other external systems. 
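To make the Azure Service Bus option above a little more concrete, here is a minimal sketch of what a Dataverse plug-in relaying its execution context to a pre-registered service endpoint could look like. Treat it as an illustration only: it assumes the service endpoint row has already been registered (for example, via the Plug-in Registration Tool), the GUID supplied through the unsecure configuration is a placeholder you would swap for your own, and the namespace and class names are purely illustrative.

using System;
using Microsoft.Xrm.Sdk;

namespace MyCompany.Dataverse.Plugins
{
    // Illustrative sketch only: relays the plug-in execution context to Azure Service Bus
    // via a service endpoint row that has already been registered in the environment.
    public class RelayToServiceBusPlugin : IPlugin
    {
        private readonly Guid _serviceEndpointId;

        // The service endpoint GUID is passed in as the unsecure configuration of the step.
        public RelayToServiceBusPlugin(string unsecureConfig, string secureConfig)
        {
            if (!Guid.TryParse(unsecureConfig, out _serviceEndpointId))
                throw new InvalidPluginExecutionException("Provide the serviceendpoint GUID as the unsecure configuration.");
        }

        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            var notificationService = (IServiceEndpointNotificationService)serviceProvider
                .GetService(typeof(IServiceEndpointNotificationService));

            // Posts the full execution context (target row, images, shared variables and so on)
            // to the Azure Service Bus queue or topic behind the registered service endpoint.
            string response = notificationService.Execute(
                new EntityReference("serviceendpoint", _serviceEndpointId), context);

            tracing.Trace("Service Bus relay completed. Response: {0}", string.IsNullOrEmpty(response) ? "<none>" : response);
        }
    }
}

Registering the step asynchronously is generally the safer choice for this pattern, so any latency or transient failure in the relay does not hold up the originating Dataverse transaction.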
Other topics also stray into this area, such as the preview capabilities that allow us to link our app/Dataverse consumption to an Azure subscription, to account for any overages or unpredictable usage patterns. Typically, preview features will not be topics for consideration as part of an exam. Still, it remains beneficial for a solution architect to be appraised of such capabilities so we\u0026rsquo;re adequately prepared for when they move out into general availability.\nDemos: Reviewing Microsoft Azure Extensibility Options To better understand what options are available to us, take a look at the following videos below, which demonstrate how to work with Azure Synapse Link and how we can export Dataverse transactions into an Azure Function or to Azure Service Bus:\nIf, as solution architects, we design our integrations well, it will ensure that we get the most value out of our Power Platform solution and increase its utility across the organisation. Next time in the series, we will look at how to design the most effective security model within the Power Platform.\n","date":"2022-01-23T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-600-revision-notes-designing-integrations-for-the-power-platform/","title":"Exam PL-600 Revision Notes: Designing Integrations for the Power Platform"},{"content":"Welcome to the fifth post in my series focused on providing revision notes for the PL-600: Microsoft Power Platform Solution Architect exam. Previously, we went into detail about how we can approach designing our Power Platform business application as part of our first look at the Architect a solution area of the exam. We now move onto the next area of the exam, Design the data model, which focuses on the following topics:\nDesign the data model\ndesign tables and columns design reference and configuration data design relationships and relationship behaviors determine when to connect to external data versus import data design data models to address complex sets of requirements Data forms the cornerstone of any solution leveraging the Power Platform. Whether we leverage Microsoft Dataverse as our data source or bring in others via the rich array of connectors available, the solution architect must carefully consider all aspects of our overall data model and how this fits within the platform. With this in mind, let\u0026rsquo;s jump in and look at the key elements we need to appreciate.\nThe aim of this post, and the entire series, is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well. And, given the nature of this exam, it\u0026rsquo;s expected that you already have the necessary skills as a Power Platform Developer or Functional Consultant, with the certification to match.\nTables and Columns Overview More often than not, our Power Platform solution will involve Dataverse. And, even though we may leverage objects within the Common Data Model (CDM), there will still be a requirement to tailor Dataverse to suit our particular purposes. As such, a solution architect needs to have an excellent grasp of building the most effective Dataverse model, using our detailed knowledge of how this service works. 
Understanding of the following Dataverse concepts will be essential as part of this:\nTypes of Tables: This includes knowing which of our tables are included \u0026ldquo;out of the box\u0026rdquo; as part of the CDM, the difference between standard versus activity tables and how table ownership settings affect how we leverage the table within our various applications. Column Data Types: A significant benefit of using Dataverse is that we can quickly select the most appropriate data type for the information we plan to store. These types conform to our general expected data types when working with SQL Server and .NET more generally. A solution architect should be well placed to elaborate on the various options available to us, including their limitations and the intricacies of their behaviour. For example, there is no excuse for not realising that our choice column types are stored as integers within the underlying database. Standard, Calculated \u0026amp; Rollup Columns: For most situations, a standard column type - one that typically accepts and receives a user-defined input - will be sufficient. For more complex requirements, the solution architect should appreciate the capabilities on offer as part of our calculated and rollup columns. We will leverage calculated columns to automate a semi-complex or straightforward calculation and persist this within the database. Rollup columns, in comparison, will almost always involve a situation where we need to aggregate numerous child rows from a parent row. It will be vital to know the differences between them, and details such as rollup column values need to be refreshed regularly (either by a user or a background system job). Global Choices: This component type will be most appropriate when we plan to leverage several choice column types across different tables, and we want to ensure that all of these have the same selection options. We can centralise this management into a single component type and achieve these aims by using a global choice. I could go on, but this represents the \u0026ldquo;core\u0026rdquo; elements that we must know at a minimum. With our detailed knowledge of Dataverse, we can then identify the existing CDM objects we plan to leverage and what bespoke development work will be required to fill in any gaps. As part of this, it may be prudent to look at mocking up our entire data model as part of a diagram\u0026hellip;but more on this subject later. 😀\nThe Importance of Reference \u0026amp; Configuration Data It\u0026rsquo;s possible that, as we configure our Dataverse environment, we create a variety of different table types that we then use to store data that integrates with our solution somewhat. For example, we could set up a lookup table instead of a choice column to list various options our users can select at the model-driven app form level. Or we could have plug-ins implemented that have their behaviour modified based on configuration data we define in the system. Regardless of the scenario, we need to ensure, as solution architects, that we have not only modelled these table types out correctly but also considered how this data gets applied to our different environments. We might often reference these table types via components such as classic workflows or business rules. This means we start to have dependencies in our environments on row(s) with a specific GUID value; simply creating these rows in different environments won\u0026rsquo;t be sufficient. 
Therefore, an alternate approach will be needed.\nTo easily migrate across this data into our downstream environments, solution architects should encourage the team to implement Configuration Migration profiles/packages and, where possible, incorporate this into any Azure DevOps deployment pipelines via the dedicated task(s) available to us. For more complex scenarios, which might involve transforming data from external systems, we can consider leveraging data flows or Azure Data Factory and, in particular, mapping data flows to meet the requirement. Automation is critical throughout all of this, and we should always apply reference/configuration data during any solution deployment we perform.\nUnderstanding Dataverse Relationships Hopefully, if we have studied and passed PL-200 or PL-400, this is a topic that we should have an excellent appreciation of. But let\u0026rsquo;s spend a few moments just recapping the fundamentals. 😉 Relationships provide us with the primary mechanisms to model our Dataverse tables effectively. They also ensure that our business objects are well represented within Dataverse by mandating, for example, that a single Account can have many Contacts. There are two types of relationships we can create in Dataverse:\nOne to many (1:N) / Many to one (N:1): Don\u0026rsquo;t get confused by the different phraseology, as all we\u0026rsquo;re describing here is the relationship\u0026rsquo;s direction. The base functionality of having a lookup column on our N table to represent our 1 row is precisely the same. Many to many (N:N): In this scenario, we set up an intersect table to record each instance of the N:N relationship. There will be specific scenarios where this relationship type will be most prudent. For example, when we want to record which Contact records have attended an Event - many potential Events could have one, several, or zero Contacts. As well as knowing these important theoretical details of how relationships work, it\u0026rsquo;s expected that a solution architect can go further and understand elements such as:\nBehaviours and Actions: These will control what occurs when we perform a specific action against a parent row. For example, if we delete an Account row, do we also want to delete all Contacts linked to it? Or is it better to preserve these as \u0026ldquo;orphan\u0026rdquo; rows in the system? Considering the different scenarios and recommending the most appropriate behaviour type will be crucial for a solution architect. Manual vs. Native N:N Relationships: Most commonly, we can look to implement our N:N relationships natively by allowing Dataverse to create and manage the background intersect table for us. However, this can sometimes present challenges from an integration standpoint, as it\u0026rsquo;s notoriously difficult to target SDK operations against this hidden table type. In addition, this table cannot be customised, meaning we miss a valuable opportunity to decorate this table with additional metadata properties. When these become concerns, we can instead create a manual N:N relationship by making the intersect table ourselves and configuring the appropriate 1:N relationships against this. Hierarchical Relationships: It\u0026rsquo;s possible to configure 1:N relationships to be self-referential on the current table. This could be useful when we need to, for example, model and display our table data as part of a graphical hierarchy within a model-driven Power App.
As solution architects, we should know how to configure and set this up on any table. Connection Roles: Often overlooked, connection roles are technically a type of relationship; as such, making sure we understand how to set these up will be essential. From a usability standpoint, they are most appropriate for situations where we cannot anticipate how objects will need to be related to each other, but where we still require a flexible mechanism for making these links in the system. The genuine expectation throughout all of this is that we\u0026rsquo;ve \u0026ldquo;been there, done that\u0026rdquo; via previous projects; therefore, in any new projects that we are part of, we can elaborate and guide the project team on things to watch out for.\nTo Import or Extend? How to Deal with External Data Once we\u0026rsquo;ve established what data we plan to work with, we can decide the best route to bring it into the fold of the Power Platform. The options available to us will be primarily dictated by whether we want to leverage Dataverse or not, and, as the solution architect, we need to align ourselves towards the best tool based on the data we are consuming. Typically, our choice will fall to one of the following four options:\nData Import Wizard: Provided we have a model-driven Power App setup that exposes out the table we want to work with, we can very easily import data into Dataverse for that particular table at any time. As part of this, we can either use a previously exported template or look to import data in CSV / XML format. This option is best if we plan to migrate fully into Dataverse or import data already in the required structure to map into Dataverse easily. Data Flows: We alluded to this option earlier, and this provides a more enhanced experience than the data import wizard. This option is particularly beneficial if we need to manipulate our data before importing it into Dataverse. We can leverage the full capabilities on offer as part of Power Query when using data flows. In addition, data flows support the ability to load data once or continually perform loads/synchronisation from our external data source, making them invaluable if we need to perform iterative loads of data. Connectors: This option will be most appropriate if we plan to build out a Power Automate cloud flow or canvas Power App. Using one of the 350+ connectors available to us, it becomes effortless to bring our external data into these systems. And, if we can\u0026rsquo;t find a connector to suit our scenario, we can look to create our own with relative ease. Virtual Tables: For situations where we want to surface data from external systems, so they behave exactly like a standard Dataverse table, we can implement Virtual Tables to achieve this requirement. What\u0026rsquo;s even better is that records we pull in via Virtual Tables can support full Create, Read, Update and Delete (CRUD) permissions. The only major downside with our virtual tables is that we will almost always have to get a .NET developer involved to build them out. This is because the custom data provider they rely on can only be authored via a .NET plug-in. Demo: Working with External Data Sources in the Power Platform To better understand each of the above options in detail, check out the video below, where I dive into each option:\nFail to Plan, Plan to Fail: The Importance of Data Model Diagrams Throughout any IT project, the urge to get stuck in and start building a solution can be incredibly tempting.
This temptation is more significant than ever when it comes to the Power Platform, as there are minimal barriers for us to go in and start building our solution. Notwithstanding all this, it remains imperative to fully document and create a diagram of the data model we plan to use within the Power Platform. This diagram can represent the envisioned structure that we hope to build out using Dataverse or represent the external data sources that we plan to bring into our solution. The advantages of doing this cannot be overstated:\nA diagram can be leveraged to represent the business objects we plan to use, the appropriate attributes to record and the different relationships between objects, to help validate with our stakeholders that we are building a solution that will be fit for purpose. The diagram can indicate which components we plan to leverage that may already exist in Dataverse and which will need to be built bespoke. Typically, the actual work of building out our data model will be handled by someone else on the project team. By building a precise diagram that communicates how our data model needs to look, the solution architect can ensure that this work can proceed smoothly and we can minimise the risk of mistakes or misunderstandings. Any resulting diagram we create can later be included as part of any formal documentation for our solution, such as for a Low-Level Design (LLD) document. A standard format we can adhere to here is an Entity Relationship Diagram or, specifically, a Crow\u0026rsquo;s Foot Notation Diagram. There are various tools we can use to build this out, but given this is a Microsoft-focused technology blog, it would be remiss of me not to advocate using Microsoft Visio. 😏\nDemo: Building Crow\u0026rsquo;s Foot Notation Diagrams using Microsoft Visio In this video, I demonstrate how we can use Microsoft Visio to build out a Crow\u0026rsquo;s Foot Notation diagram that we can then use as the basis of any future data modelling activities:\nOne of the all-time boons of the Power Platform is that it\u0026rsquo;s pretty agnostic when it comes to our business data; therefore, we as Solution Architects can leverage it to deliver a product that is fit for the business, not the other way around. Next time in the series, we\u0026rsquo;ll be moving on to look at integrations and how we can approach designing them across all aspects of the Power Platform.\n","date":"2022-01-16T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-600-revision-notes-designing-a-data-model-for-the-power-platform/","title":"Exam PL-600 Revision Notes: Designing a Data Model for the Power Platform"},{"content":"The partner consultancy I run, SOLO Cloud Solutions, uses Dynamics 365 Customer Engagement as our internal CRM system to manage potential sales opportunities and customer caseloads. I\u0026rsquo;m a firm believer in \u0026ldquo;eating your own dog food\u0026rdquo; when it comes to the technology solutions our organisation supports and, by ensuring we use the very same tools we advocate to our end customers, we can best understand the benefits and, sometimes, frustrations that come with these systems and, all being well, appear credible to our end customers when recommending a particular solution to adopt.
With this in mind, our business can benefit significantly on the one hand - by leveraging features such as Entitlements or SLA\u0026rsquo;s to ensure we can keep track of and respond quickly to our customers\u0026rsquo; needs - and, on occasion, grapple with issues and frustrations on the other hand. This reality doesn\u0026rsquo;t detract from my genuine enthusiasm for Dynamics 365 as a business solution; it\u0026rsquo;s just a natural expectation of any software system a business chooses to adopt. 😉\nTo illustrate this point more plainly, let me share a real frustration we faced involving the system and how we got around it. For some of our customers, we track the number of hours we consume on each Case via an Entitlement. These Cases can cover two types of issues:\nBreak/Fix where the solution isn\u0026rsquo;t working due to a fault on our side or via Microsoft. Our support agents will typically pick these up and work through to resolution. Enhancement or change requests requiring a modification to an existing solution. In all cases, these will be passed along to an appropriate consultant (maybe even me!) to progress and implement accordingly. Under the support agreements, we do not typically deduct/consume hours for issues where the system has broken. As we have dedicated colleagues to handle support issues, this is \u0026ldquo;soaked up\u0026rdquo; as part of our support agreement. We then only need to track, charge and (potentially) invoice the customer for work that sits outside of this and, typically, requires the expertise of a trained consultant to satisfy. So, for the latter scenario, colleagues logging Cases in the system need to press the appropriate Do Not Decrement Entitlement button to ensure hours are not deducted from the Entitlement:\nSo far, so good - all of this is native functionality in the application and an absolute doddle to set up. Things start getting a bit trickier when we need to report back to customers on how we\u0026rsquo;ve consumed hours for both scenarios. The figures for scenario 2) present little difficulty to obtain, but those for 1) prove, as far as I know (answers on a postcard if you disagree), rather more irksome to generate. However, as they say, where there is a will, there\u0026rsquo;s a way.
😁 Using a bit of FetchXML wizardry, we can look to construct a query similar to the one below to generate the information we need for both scenarios, with a clear flag field to indicate whether the hours fit into scenario 1) or 2):\n<fetch aggregate="true">
  <entity name="incidentresolution">
    <attribute name="totaltimespent" alias="incidentresolutionsum" aggregate="max" />
    <link-entity name="activitypointer" from="activityid" to="activityid" link-type="inner">
      <link-entity name="incident" from="incidentid" to="regardingobjectid" link-type="inner" alias="incident">
        <attribute name="incidentid" alias="incidentid" groupby="true" />
        <attribute name="decremententitlementterm" alias="decremententitlementterm" groupby="true" />
        <filter>
          <condition attribute="entitlementid" operator="eq" value="094b0dcc-df24-ec11-b6e6-000d3a0cb337">
            <value>094b0dcc-df24-ec11-b6e6-000d3a0cb337</value>
          </condition>
        </filter>
      </link-entity>
    </link-entity>
  </entity>
</fetch>
Using arguably the best XrmToolBox app, we can then get a result back that looks like this:\nFrom there, we can then sum up all of the True values to determine the hours consumed for scenario 2); everything else will be a break/fix issue.\nNow, there\u0026rsquo;s a couple of things to remember with this solution:\nYou will need to update the value used in the entitlementid condition to match your environment and Entitlement\u0026hellip;in case that wasn\u0026rsquo;t obvious. 😀\nIt\u0026rsquo;s expected we always associate a Case to an appropriate Entitlement for the customer. This has the benefit of ensuring that we can always tag the correct SLA. For reporting purposes, this also provides a common attribute for us to filter on for each period we are concerned with (for us, we typically have an Entitlement per calendar quarter for each customer).\nOur primary table used is the incidentresolution table, which indicates we\u0026rsquo;ve resolved the Case. We then implement grouping and aggregation to return the maximum totaltimespent value from this, representing the actual amount of time spent on the Case and accounting for any scenarios where the Case may have been re-opened.\nFor this approach to work, we instruct our agents to raise and close Tasks, via the Timeline control, on the Case row for all actions they perform, recording the amount of time it takes to complete each one in the process. This ensures that the time rolls up correctly when the Case is closed.\nBut apart from that, this solution works well for our purposes and makes it quick and easy to generate the information we need during our quarterly reviews with customers. As stated, it\u0026rsquo;s a shame that we have to revert to a more complex query to get this information, but I\u0026rsquo;m willing to admit that our particular scenario may be somewhat niche.
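As a brief aside before wrapping up: if you would rather pull these figures programmatically each quarter instead of running the query by hand, a rough sketch along the following lines - assuming the Microsoft.PowerPlatform.Dataverse.Client ServiceClient package, a placeholder environment URL and the same example Entitlement GUID - could run a near-identical aggregate query and total the hours for each scenario. Remember that totaltimespent is recorded in minutes, hence the division by 60 at the end.

using System;
using Microsoft.PowerPlatform.Dataverse.Client;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

class EntitlementHoursReport
{
    static void Main()
    {
        // Placeholder connection details - swap in your own environment URL and authentication approach.
        using var client = new ServiceClient("AuthType=OAuth;Url=https://yourorg.crm11.dynamics.com;LoginPrompt=Auto");

        // Near-identical aggregate query to the one above; the entitlementid value is a placeholder.
        const string fetchXml = @"
            <fetch aggregate='true'>
              <entity name='incidentresolution'>
                <attribute name='totaltimespent' alias='incidentresolutionsum' aggregate='max' />
                <link-entity name='activitypointer' from='activityid' to='activityid' link-type='inner'>
                  <link-entity name='incident' from='incidentid' to='regardingobjectid' link-type='inner' alias='incident'>
                    <attribute name='incidentid' alias='incidentid' groupby='true' />
                    <attribute name='decremententitlementterm' alias='decremententitlementterm' groupby='true' />
                    <filter>
                      <condition attribute='entitlementid' operator='eq' value='094b0dcc-df24-ec11-b6e6-000d3a0cb337' />
                    </filter>
                  </link-entity>
                </link-entity>
              </entity>
            </fetch>";

        var results = client.RetrieveMultiple(new FetchExpression(fetchXml));

        int decrementedMinutes = 0, nonDecrementedMinutes = 0;
        foreach (Entity row in results.Entities)
        {
            // Aggregated / grouped columns come back wrapped in AliasedValue objects.
            int minutes = (int)((AliasedValue)row["incidentresolutionsum"]).Value;
            bool decremented = (bool)((AliasedValue)row["decremententitlementterm"]).Value;

            if (decremented) decrementedMinutes += minutes;
            else nonDecrementedMinutes += minutes;
        }

        Console.WriteLine($"Enhancement hours (Entitlement decremented): {decrementedMinutes / 60.0:N2}");
        Console.WriteLine($"Break/fix hours (not decremented): {nonDecrementedMinutes / 60.0:N2}");
    }
}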
Hopefully, if you\u0026rsquo;ve found yourself in the same boat from a requirements standpoint, then this approach will help you as well. And it goes a long way to proving the point over the potential versatility of the Dynamics 365 Customer Service application. Provided, of course, you know what you\u0026rsquo;re doing\u0026hellip; 😅\n","date":"2022-01-09T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/obtaining-time-spent-hours-on-cases-for-an-entitlement-dynamics-365-customer-service/","title":"Obtaining Time Spent Hours on Cases for an Entitlement (Dynamics 365 Customer Service)"},{"content":"Welcome (somewhat belatedly) to the fourth post in my series focused on providing revision notes for the PL-600: Microsoft Power Platform Solution Architect exam. Last time in the series, we evaluated mechanisms to capture requirements and perform fit/gap analysis for our Power Platform solution. Today, we\u0026rsquo;ll be moving on to our next exam area and the first topic within there, Architect a solution. This area has the highest weighting across the whole exam, a whopping 40-45%, and as part of our first topic, Lead the design process, Microsoft expects us to demonstrate knowledge of the following:\nLead the design process\ndesign the solution topology design customizations for existing apps design and validate user experience prototypes identify opportunities for component reuse communicate system design visually design application lifecycle management (ALM) processes design a data migration strategy design apps by grouping required features based on role or task design a data visualization strategy design an automation strategy that uses Power Automate As alluded to previously in the series, a solution architect is our Subject Matter Expert (SME) when it comes to the Power Platform, with the natural anticipation that we\u0026rsquo;ve grasped many seemingly unrelated but vital concepts that will have a bearing on our end product. Let\u0026rsquo;s jump into this first area to elaborate on the themes Microsoft expects us to know for the exam.\nThe aim of this post, and the entire series, is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well. And, given the nature of this exam, it\u0026rsquo;s expected that you already have the necessary skills as a Power Platform Developer or Functional Consultant, with the certification to match.\nDetermining a Solution Topology Solutions form the cornerstone of any successful Power Platform project. They will allow us to introduce important Application Lifecycle Management (ALM) concepts to our project and act as a mechanism to quickly and straightforwardly refer back to the specific features our application(s) rely on to work successfully. The solution architect on the project will need to decide how many solution(s) will need to be set up to best fulfil the requirements and support any future growth aspirations for our business solution. As part of this, some of the following considerations come into play:\nManaged vs Unmanaged: We need to appreciate fully the differences between both solution types, the most appropriate place to use each one, and the implications that layering (more on this shortly) can have on our customizations. If you haven\u0026rsquo;t already, make sure you brush up on this Microsoft Docs article to learn more. 
Solution Publisher: All customizations performed against Microsoft Dataverse will be impacted by whichever publisher we\u0026rsquo;ve chosen for our solution. We should always create our own bespoke publisher that reflects the organisation that the solution belongs to. Layering: As multiple managed solutions and unmanaged customizations get applied to Microsoft Dataverse, the result of what the user ultimately sees will potentially change. As a general rule of thumb, managed solutions will typically operate on a \u0026ldquo;last one wins\u0026rdquo; basis, and unmanaged customizations will always take precedence. Microsoft expects us to fully understand the impact layering can have on our system and on how the platform merges particular managed layers to resolve potential conflicts. Dependencies: As we introduce more solutions into our environment, the risk of dependency issues causing them to fail to import cleanly increases exponentially. Being able to identify and remove dependencies will be a reasonably typical part of the day-to-day work of a solution architect. Segmentation: Proper segmentation of our solutions can significantly reduce the time it takes to import customizations and avoids the risk of any unplanned customizations getting deployed out early. Understanding how to work with each of the different segmentation options for our solutions will be vital for any solution architect. Despite the potential mechanisms open to us, there is a lot to be said for KISS - Keep It Simple, Stupid. 😉 Smaller projects can typically benefit from having only a single solution throughout their lifecycle. Things can get unnecessarily complex as we introduce more solutions into the equation. Think carefully about the current and potential future direction of what we\u0026rsquo;re trying to achieve with the Power Platform. Use this to inform the most appropriate and straightforward approach to adopt. Microsoft has published an excellent article outlining the benefits of a single versus multiple solutions and the dangers of layering and dependencies.\nCustomizing Existing Dynamics 365 Applications As we saw in our previous post, the likelihood of needing to understand - and potentially leverage - one or all of the different Dynamics 365 Customer Engagement (CE) applications remains high for any Power Platform project. As part of doing this, we should ensure that we\u0026rsquo;ve implemented the basics first from a solution standpoint by setting up a solution and a publisher prefix to indicate all bespoke customizations we plan to perform. From there, we should consider the following factors:\nMicrosoft releases up to two extensive updates to each of the Dynamics 365 CE applications every year, sometimes resulting in additional or unwanted functionality being automatically pushed out into our environments. If we customize on top of any of these applications, awareness and contingency planning for handling these updates will be essential. For specific scenarios, it may be preferable to create new customization types within Dataverse instead of customizing on top of any existing Dynamics 365 CE functionality. For example, consider taking a copy of the Lead table Main form, customizing this and exposing this out instead within your application. We can also adopt the same approach when working with views, business process flows and model-driven apps. What is the likelihood of other solutions or third-party add-ons also leveraging the same set of components?
Considering what we know about layering, it could be that our planned customizations will not surface correctly based on this. Take care to understand the entire landscape and evaluate the impact of additional, external solutions alongside any customizations we plan to perform. The role of the solution architect here is to decide on the most appropriate direction of travel and coach members of the team on following the correct process.\nDemo: Customizing Existing Dynamics 365 Applications In the following video, I demonstrate some of the approaches we can adopt when planning to customize the existing Dynamics 365 Sales application:\nApplication Lifecycle Management (ALM) Considerations \u0026amp; Concepts A healthy Power Platform solution is one that we can deploy quickly, efficiently, and with minimal human interference. We\u0026rsquo;ve all been there for those long Friday evenings, trying for the umpteenth time to get our updates deployed out to production. 😫 We can avoid a lot of this pain by ensuring we\u0026rsquo;ve considered all aspects of our ALM process. The solution architect will need to take the lead in recommending and guiding the project team towards implementing the correct tools to ensure that those frustrating late-night sessions on Friday remain a thing of the past. 😉 To summarise, an appreciation of the following topics will be essential, not only for the exam but in our day-to-day work with the Power Platform too:\nSolutions: We\u0026rsquo;ve already talked in-depth about solutions\u0026rsquo; importance and the fundamental considerations. In short, they provide the cornerstone of any ALM strategy pertaining to the Power Platform. Environments: Organisations wanting to ensure appropriate quality assurance (QA) stages and complete testing of components before launching them to users will need to consider implementing several different environments. The solution architect will need to consider carefully the number, region, and related deployment considerations for all environments that the organisation plans to implement for their project. I would argue that environments are essential for businesses of any size globally. At a bare minimum, all Power Platform deployments should have at least one Sandbox environment set up for development/testing purposes. Azure DevOps: One of the significant risks associated with an IT deployment comes down to human error. We are prone to make mistakes through no fault of our own, which can be costly to the business, our reputation, and our end customers. For these reasons, adopting Azure DevOps becomes an essential consideration for any solution architect and assists us in automating all aspects of our software deployments. Using DevOps and the associated Power Platform Build Tools, we can look to do things such as: Automatically extract out the contents of our solutions into a Git repository. Enforce automatic QA of our solutions by automatically calling the solution checker when checking in code changes. Create, delete, copy and manage our environments via a build or deployment pipeline. Deploy out a solution automatically into one or multiple different environments. Configuration Migration Tool: As we configure our Dataverse environment, we potentially create a variety of configuration-based data that we may need to migrate across to our downstream environments. This is particularly the case if we leverage one of the Dynamics 365 Customer Engagement applications and use features like Queues, Business Units or Subjects.
To straightforwardly migrate this data between environments and, most crucially, ensure GUID lookups are preserved, we can use the Configuration Migration Tool to define our export schema, export data and import it into any Dataverse environment. We can carry out migrations manually, via the dedicated SDK tool available on Nuget, or automatically - either by using Azure DevOps or through PowerShell. Package Deployer: For scenarios where we plan to deploy multiple solutions and any corresponding reference data generated by the Configuration Migration Tool, we can use the Package Deployer tool to build out a complete, end-to-end installation package to ensure we install everything as part of a single action. Package Deployer is used extensively by third-party developers / ISV\u0026rsquo;s, and there will be applicable scenarios for internal projects where using it may realise benefits. These examples only scratch the surface of the things we need to consider. The topic is so weighty that an entire set of Microsoft Docs articles are devoted to the subject. For a better understanding of the practical steps involved in building out an ALM solution leveraging Azure DevOps, check out this post from my previous PL-400 blog series. You can also check out the following video that was recorded for this series, that demonstrates how to build this out:\nThe developers within our organisation will typically build out the core components to satisfy our ALM strategy, so a high-level understanding and overview are more than sufficient for a solution architect\u0026rsquo;s purpose.\nThe Importance of Prototyping IT projects are fraught with risk from the outset, and one of the biggest, ever troublesome, dangers can relate to requirements not aligning to the finished product. This emphasises the importance of the project team and, crucially, the solution architect validating that we are building a solution that will be \u0026ldquo;fit for purpose.\u0026rdquo; We can turn to a tried and tested approach to get this validation as early as possible by building a prototype of our solution. The prototype should have the \u0026ldquo;look and feel\u0026rdquo; of our end product and is effectively used as a benchmark to a) validate that the base requirement can be met in a minimal sense and b) confirm that we\u0026rsquo;ve been able to translate across the requirement into a workable solution. The critical thing to remember is that a prototype is\u0026hellip;well\u0026hellip; a prototype! It won\u0026rsquo;t necessarily meet all business, technical, security, or regulatory requirements, and we should take care to communicate this to our stakeholders when they review our prototype. The good thing about prototyping is, particularly when it comes to the Power Platform, we can accomplish this in a very rapid fashion. For example, we can do things such as:\nQuickly build out model-driven / canvas applications, implementing simple navigation and styling functionality where appropriate. Build out an automation between the system(s) we plan to integrate with, validating that we can make connections and that data flows at the most appropriate trigger points. Mock out our proposed data structures within Microsoft Dataverse Create a simple dashboard using Power BI to indicate the insights our solution can provide once fully built out. Once your stakeholders have reviewed your prototype, you can decide on the most appropriate course of action. 
If everything has gone well, we can start converting our prototype into an actual, fully working solution, providing an accurate estimate of the amount of time it will take to do this in the process. We might decide that the prototype will meet the requirements after making some essential alterations; we should then decide what level of investment to make in addressing these and presenting the prototype again. Finally - our least ideal outcome - we may be unable to move forward whatsoever. Perhaps the prototype cannot address the business need, or an alternative approach is needed instead. The key objective for the project team at this stage is to try and salvage any efforts/investments made into the product, either by converting them into something usable or by ensuring the appropriate lessons are fed back into the organisation. The earlier we make this type of call, the better; it may become too expensive or even impossible to make this decision later on down the road.\nStrategising Data Migration \u0026amp; Visualization Approach Data migration is something that solution architects need to consider if we plan to bring data fully into the Power Platform, typically as part of Microsoft Dataverse. Suppose we plan to use an existing data source, such as SQL Server or Salesforce. In that case, it\u0026rsquo;s highly likely instead that we can use many of the out-of-box connectors available to us, introducing the on-premise data gateway for when we need to securely expose out any internally hosted resources into the Power Platform. We\u0026rsquo;ll need to consider one or several different solutions for all other situations, depending on the complexity of the data we are migrating. For simple migrations, we could leverage the data import wizard. When our data requires cleansing, it will be necessary for us to consider instead tools like dataflows or Azure Data Factory, to Extract, Transform and Load (ETL) our data. The solution architect will typically need to spend time assessing the format of data needing to be migrated into the Power Platform and, from there, provide a suitable recommendation on the best tool to leverage.\nWhen it comes to building the most effective visualization solution, we should immediately draw our attention towards using Power BI as a first preference tool. Not only can it work well with Microsoft Dataverse, but it also has a wide variety of different connectors available, thereby allowing us to bring together data from disparate systems into an easy to consume, visually interactive report or dashboard. Power BI will also represent our best choice if a degree of data preparation, cleansing, and enhancement has to take place, tasks that we can satisfactorily handle via Power Query or a Data Analysis Expression (DAX) function. Beyond this, we can also consider some additional tools, such as charts, dashboards, Excel Templates and reports. All of these will be best suited for situations where all of our business data resides within Dataverse or if there are licensing / cost implications in place that prevent us from adding Power BI into the equation.\nBuilding Effective Role-Based / Task-Based Applications A good application will allow users to carry out the necessary steps for their job role or current task; a well-built application will go a step further by enabling users to complete what they need to in the most straightforward manner possible.
For this reason, we must plan to build our applications around a focused experience that aligns towards a specific role or task that users/colleagues need to complete frequently. The temptation to build a single monolith application is ever-present, and we should attempt to avoid this where possible. Although a single application may reduce our management overhead, it could potentially lead to end-user confusion and severe usability issues in the long term. Again, the concept of KISS is very much relevant here. 😉 As solution architects, we should guide the business towards building out applications that fulfil this objective. To assist you further, the list below provides some benefits, disadvantages, and examples of each approach, which can help you in determining the best option to choose:\nRole-Based Applications Benefits: Ensures that our app can be leveraged by a broad group of user types/personas within the organisation. Disadvantages: Can limit the scope of the application and make it difficult to determine the end-result / output. Examples: Case management application for all internal support agents. App for Project Managers to review and update all projects under their control. Application for stock managers to use to manage their warehouse(s). Task-Based Benefits: Allows us to model out our application based on a standard, repeatable series of steps that our organisation carries out frequently Disadvantages: Can be challenging to build out for a complex task or one that involves several different dependencies. Examples: Time entry application for all internal colleagues in the organisation to link their time back to projects worked on. App that registers new site visitors. Home visit inspection app that allows for a set of steps to be completed when in customers home. How we go about building the application (i.e., which type of Power App will be the most appropriate) will be a separate conversation. Still, with this fundamental decision made, we can begin in earnest.\nHow Best to Leverage Power Automate? When best considering how to introduce the benefits of Power Automate, the solution architect needs first to have a good appreciation of the three different \u0026ldquo;pillars\u0026rdquo; or features within this service:\nCloud Flows: These types of flows will be best for most automations we plan to build out and are most suitable for when the application systems we are working with have existing connectors or APIs available to connect to. Developers can build out simple flows to improve their productivity or create more complex flows that integrate systems together. We can trigger cloud flows based on specific events that arise in our systems (e.g., on Create of a Dataverse row), on a pre-defined schedule or even manually. Business Process Flows (BPF): Whereas cloud flows are designed for cross-application automations, these types of Flows are best for enforcing data integrity within our particular Dataverse environment. By building out a straightforward BPF, we can guide users of our model-driven apps or, indeed, any user on the Power Automate portal towards submitting the correct data at the right stage in our particular journey. We can then extend this by calling cloud flows when certain conditions are met as part of our process and implement more complex functionality, such as conditional branching or custom control integration. 
Robotic Process Automation (RPA): Also referred to as desktop flows, we would typically leverage these types of flow for our trickiest automations, typically involving legacy application systems with no entry point from a database / API standpoint. RPA flows will also be most appropriate for any automation that spans multiple, complex steps across different systems and for when the automation may need to be attended to while it\u0026rsquo;s running. RPA flow builders will use the Power Automate Desktop application to build, test and execute their flows, with options to then execute them either in attended or unattended mode. Solution architects are expected to align a particular automation requirement to the most appropriate Power Automate feature set based on the base requirements and the technical environment involved.\nDemo: Evaluating Power Automate Capabilities To help better demonstrate the capabilities available across Power Automate, take a look at the below video where I provide a demonstration on each of the core capabilities on offer:\nWe\u0026rsquo;ve covered a lot in today\u0026rsquo;s post. 😅 But this reflects the importance of getting a lot of the basics ticked off early on during our Power Platform project. Next time in the series, we\u0026rsquo;ll jump down a gear and focus on designing a data model using Microsoft Dataverse.\n","date":"2022-01-02T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-600-revision-notes-designing-a-power-platform-solution/","title":"Exam PL-600 Revision Notes: Designing a Power Platform Solution"},{"content":"Reports are one of those familiar yet fiddly and always inconvenient things that those working with any of the Dynamics 365 Customer Engagement applications will need to contend with regularly. Typically, as sod\u0026rsquo;s law always dictates, it will be the case that we end up spending more time getting our machines set up to work with them, as opposed to allocating any time itself towards creating or modifying an existing report. 😫 This happened to me recently, as I realised I\u0026rsquo;d changed machines twice since I last needed to work with Reports. Given that such a vast gap existed, I had to re-familiarise myself with the core details. From the back of my mind, I recalled the following:\nSince Reports use SQL Server Reporting Services (SSRS) technology under the hood, a valid version of SQL Server Data Tools (SSDT) is required, either as a standalone or \u0026ldquo;bolted on\u0026rdquo; portion of your Visual Studio installation. As far as I remembered, the minimum version of Visual Studio supported was 2015, which meant having to go through the hassle of installing all of this onto my machine. Not great. 🙁👎 To unlock the required FetchXML connector option, installing the Dynamics 365 Report Authoring Extension add-on was also necessary. The Extension formed part of the wider SDK and could be easily downloaded/installed onto any Windows machine. So not too much of a problem there. Understandably, since there had been such a considerable time gap since I last worked through this, a natural question emerged at the back of my mind. No prizes for guessing what this question is, based on the title of today\u0026rsquo;s blog post. 
😁 I thought this would be an excellent topic to discuss further, so let\u0026rsquo;s dive in and see what\u0026rsquo;s possible now regarding the Report Authoring Extensions and the Dynamics 365 Customer Engagement apps.\nFirst, the TL;DR version\u0026hellip; The maximum version currently supported is Visual Studio 2019. Support for Visual Studio 2022 is not currently possible, as the required Microsoft Reporting Services Projects extension doesn\u0026rsquo;t yet work with this version of Visual Studio. Expect this to be addressed in the fullness of time.\nReviewing the Latest Version of Dynamics 365 Report Authoring Extensions I went in with the relatively low expectation (shame on me!) that the above application had not seen an update for many years. However, I was pleasantly surprised to find that the extensions were last updated earlier this year, along with the following notes of interest:\nExcellent news! This means that we can use, at the very minimum, Visual Studio 2019 to build out our reports instead of spending effort installing previous versions of Visual Studio that we no longer use elsewhere. I also assume that Microsoft has updated the tooling to ensure full support for modern authentication prompts and avoid any nasty errors relating to the TLS 1.2 enforcement for all connections.\nSQL Server Data Tools RIP? As mentioned earlier, the previous experience typically involved installing the separate SSDT add-on for your required Visual Studio version. This would give you the ability to start creating and working with SSRS Report projects and allow you to build out other items, such as SQL Server Integration Services (SSIS) DTSX packages. SSDT was typically included as an installable workload or as a separate download; now, as noted in the following article, we must add the workload during the standard Visual Studio installation. Then, for 2019 onwards, we finalise our setup by installing the three separate extensions based on the workloads we plan to develop for:\nMicrosoft Analysis Services Projects SQL Server Integration Services Projects Microsoft Reporting Services Projects Although, as we can see, SSDT is still a valid workload we can select when installing, the new process feels like a bit of a death knell for the traditional SSDT, as I remember it. While the new way is more efficient, insofar as it allows us to download only what we need, it could add several additional steps to your installation process. Therefore, be sure to download all of the extensions you\u0026rsquo;ll need.\nGetting Visual Studio 2019 SSRS Ready So we\u0026rsquo;ve established that we can use Visual Studio 2019 alongside the Report Authoring Extensions and familiarised ourselves with some of the changes to SSDT. Our outline process for getting everything set up involves the following steps:\nInstall Visual Studio 2019 Community, Professional or Enterprise, ensuring that we\u0026rsquo;ve selected the appropriate workload for SQL Server Data Tools, underneath the Data Storage and processing heading: Install the latest version of the Report Authoring Extensions tool, using the link above Install the Microsoft Reporting Services Projects Marketplace extension, either by using the URL above or by selecting the Extensions -\u0026gt; Manage Extensions option within Visual Studio and selecting it from the list that appears: From there, we are good to go. 
We can then proceed to set up a quick test project and data source to verify that we can see the appropriate option that will allow us to connect to and execute our FetchXML-based queries against our Dynamics 365 Online deployment:\nWhat About Visual Studio 2022 Support? At the time of writing this post, those developers who have made a straight beeline towards the latest version of Visual Studio may be disappointed to hear that it\u0026rsquo;s currently impossible to build or work with CRM Reports or, indeed, any SSRS report via this version of the IDE. The primary reason for this is that the very same Extension mentioned above is currently not supported at all for Visual Studio 2022:\nI\u0026rsquo;ve seen some \u0026ldquo;hacks\u0026rdquo; in the past where people have been able to get legacy Visual Studio extensions working with a newer version of the tool. If you are feeling foolhardy (or desperate), you can explore this option further. Otherwise, we can only hope that there will be an update later next year to unlock this capability. Whether we would need to install an updated version of the Report Authoring Extensions remains unclear. Let\u0026rsquo;s find out together next year (hopefully). 😀\nConclusions or Wot I Think It\u0026rsquo;s always really annoying, as part of any IT project, when you need to spend many fruitless hours installing legacy or outdated tooling just so that you can deal with a specific request or issue. Safety is almost always not guaranteed, particularly if you find yourselves grappling with OS-related problems or other issues that end up eating into your entire workday. It\u0026rsquo;s good to know that the tooling for our Dynamics 365 Reports has kept pace with recent developments, and, in this case, we can use relatively modern IDEs to build and work with Reports. Despite us having other, perhaps more attractive alternatives, such as Word Templates or even Paginated Reports, Dynamics 365 Reports remain a fundamental and widely used feature across many different application deployments; this is unlikely to change in the future. With modern tooling support, we can continue working with this vital feature and focus on addressing the task at hand rather than installing yet another legacy Visual Studio application onto our machines. 😉\n","date":"2021-12-26T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/can-we-build-dynamics-365-customer-engagement-reports-with-the-latest-version-of-visual-studio/","title":"Can We Build Dynamics 365 Customer Engagement Reports With The Latest Version Of Visual Studio?"},{"content":"You can do some pretty powerful things when using Power Automate cloud flows. But, as a client recently pointed out to me, sometimes it can be less than intuitive when dealing with a more challenging requirement. Part of the problem stems from the fact that Microsoft has based cloud flows on the Logic Apps enterprise workflow integration platform. To complicate things further, it will always be necessary to crack open a couple of different Workflow Definition Language (WDL) formulas to start getting your flows doing more advanced stuff. Therefore, expecting a typical citizen developer to gel with these concepts can sometimes be a big ask. 
However, there will be very suitable scenarios where a cloud flow will be your best candidate to consider, regardless of whether you are a citizen, fusion or \u0026ldquo;pro-code\u0026rdquo; developer, as it will be impossible to address a particular requirement otherwise.\nFor example, let\u0026rsquo;s assume we want to send out an email notification whenever a new Account row is entered into our Microsoft Dataverse environment. As part of this, we want to ensure the users receiving the email can quickly navigate to the new row by clicking a hyperlink in the email itself, which must always dynamically adjust itself each time an email is sent. We can look to start building this out via a straightforward Dataverse trigger and then a single Action step using the Outlook connector, as indicated below:\nFrom there, we can add in a hyperlink that, for now, will be a hardcoded link to a specific, existing Account row. We can get this URL by simply copying it from the URL bar in any model-driven Power App, and, in this case, we do a specific modification to it to remove the App ID query parameter. We do this to ensure that the URL will work, regardless of which app(s) a user has access to. The resulting URL will look like this before we add it into our Flow\u0026rsquo;s action step:\nhttps://mycrm.crm11.dynamics.com/main.aspx?pagetype=entityrecord\u0026etn=account\u0026id=97552f49-b160-ec11-8f8f-000d3ad55a5c\nHere\u0026rsquo;s where things get a bit\u0026hellip;fiddly. We now need to switch to the HTML view by pressing the appropriate button on the top right of the editor:\nFrom here, we now get to view the raw HTML and can see the appropriate \u0026lt;a\u0026gt; tag generated by the editor:\nFrom within here, we can now delete the ID portion of the URL and replace it with the ID from our trigger action, using the Dynamic content pane to assist us further:\nNow, we can always ensure the URL link will take us to the newly created Account row with this added on. Note, however, as part of this, we no longer have the option to return to the previous editor experience. Therefore, if we are uncomfortable working with HTML, we should ensure we have applied all styling elements to our email before we look to complete these steps:\nWe can now give this a test to verify everything works - the resulting email will look a little like this and returns us the newly created Account row each time it is generated, as a clickable hyperlink:\nSo this solution works well, but it is not remarkably versatile if we plan to move our Flows into other environments and use a different environment URL instead. To get around this, we can use a convenient trick that Thomas \u0026ldquo;The CRM Keeper\u0026rdquo; Sandsør has demonstrated previously, to add on an additional action step to return us the newly created Account row (along with the property we need) and, with a bit of WDL tinkering, we can then extract out the hostname of the current environment URL as a dynamic property:\nHere\u0026rsquo;s the full formula used:\n@{uriHost(outputs(\u0026#39;Get_new_Account_row\u0026#39;)?[\u0026#39;body/@odata.id\u0026#39;])} Now, we can ensure the URL is always correct, regardless of where the cloud flow has been deployed out - VERY nice! 😁\nAs we can see, the capabilities of cloud flows can be well suited for requirements like this and, with a bit of help from our community friends, we can put together a viable solution quickly and - crucially - minimise the amount of custom code we need to write. 
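Incidentally, if you ever find yourself needing to build the same kind of link from code rather than from a flow - inside a plug-in or a small console utility, say - the construction is identical. The snippet below is a minimal C# sketch of my own (it is not taken from the solution above), and it assumes the environment's base URL comes from configuration or an environment variable rather than being hard-coded:

using System;

public static class RecordUrlBuilder
{
    // Builds an app-independent link to a record, mirroring the format used above:
    // https://<host>/main.aspx?pagetype=entityrecord&etn=<table>&id=<guid>
    public static string BuildRecordUrl(string baseUrl, string tableLogicalName, Guid recordId)
    {
        var builder = new UriBuilder(baseUrl)
        {
            Path = "main.aspx",
            Query = $"pagetype=entityrecord&etn={tableLogicalName}&id={recordId}"
        };

        return builder.Uri.ToString();
    }
}

// Example usage, borrowing the sample values from this post:
// var url = RecordUrlBuilder.BuildRecordUrl(
//     "https://mycrm.crm11.dynamics.com", "account",
//     new Guid("97552f49-b160-ec11-8f8f-000d3ad55a5c"));

The same caveat applies as with the flow: keep the App ID query parameter out of the link so that it works regardless of which app(s) the recipient has access to.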
You can use the approaches outlined in this blog for any scenario where building a dynamic URL is necessary. Even when working with non-Dataverse systems, provided that your application supports query parameters, it will be straightforward to mimic the steps here to achieve the same purpose. And, regardless, we have a resulting cloud flow that (I hope) should be pretty easy to understand for any developer. 😉\n","date":"2021-12-19T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/working-with-dynamic-hyperlinks-in-the-outlook-send-an-email-action-power-automate-cloud-flows/","title":"Working With Dynamic Hyperlinks in the Outlook Send an Email Action (Power Automate Cloud Flows)"},{"content":"When working with Microsoft Dataverse for Teams, there will be occasions where you\u0026rsquo;ll need to identify what the environment URL is. For example, if you need to escalate a problem to Microsoft or if you are looking to build out some data visualisation reports using Power BI Desktop. If you navigate into the Power Platform Admin Center and the settings pane for your environment, however, you will notice that the information is not visible within there:\nInstead, we have to go into Microsoft Teams to get this information, as follows:\nSelect the Power Apps app from the navigation bar: The Home tab should open in the main window. Select Build at the top: You should then see the list of all your Dataverse for Teams environments setup. First, select the one that you wish to get the URL for and then select the About tab (this order is essential 😉): When the About tab loads, click on the Session details option: The Environment URL will then be visible within the dialog that loads: Quite a few steps are involved, as you can see. 😅 It’s perhaps understandable why Microsoft tuck this information away so carefully. Apart from the two occasions mentioned earlier, the need to refer to this bit of data will be limited. Don’t forget that most (if not all) other scenarios where this URL is vital, such as from an SDK standpoint, are restricted explicitly as part of Dataverse for Teams. You can consult this handy article on the Microsoft Docs site to find out more. Therefore, your only option to unlock additional capacity and force the Environment URL to become visible within the Power Platform Admin Center is to upgrade the environment/app into a proper Dataverse environment. But that\u0026rsquo;s a topic for another time maybe. 😄\n","date":"2021-12-12T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/getting-a-dataverse-for-teams-environment-url/","title":"Getting a Dataverse for Teams Environment URL"},{"content":"I had a great time recently getting involved as part of the Microsoft Dynamics ERP User Group, where I presented a session asking (channelling my inner Seinfeld in the process) \u0026ldquo;What\u0026rsquo;s the Deal with Schedule API in Dynamics 365 Project Operations?\u0026rdquo;. Off the back of this session, I wanted to do an accompanying blog post that helps to cover off some of the main topics from my talk and as a resource for anyone interested in diving into this subject again or for the very first time. So without much further ado, let\u0026rsquo;s jump in to try and answer this question as best as possible.\nSchedule API Overview Schedule API provides developers working with Dynamics 365 Project Operations the only mechanism to automate actions targeting the platform and ensure operations are replicated correctly between Microsoft Dataverse and Project for the web. 
Rather than using standard Create, Update or Delete actions targeting the core set of Scheduling Tables within Project Operations, developers must instead leverage the Schedule API to carry out these actions. Indeed, this is, in fact, the only supported way we can do this. When we talk about \u0026ldquo;Scheduling Tables\u0026rdquo;, we mean any of the following Project Operations table types:\nProject Task (msdyn_projecttask) Project Task Dependency\t(msdyn_projecttaskdependency) Resource Assignment\t(msdyn_resourceassignment) Project Bucket (msdyn_projectbucket) Project Team Member (msdyn_projectteam) The Schedule API’s are built using native capabilities that have been available within Microsoft Dataverse for many years, meaning we can call any of the Schedule API’s in the same way we would for any Custom Action or Custom API. With the Schedule API\u0026rsquo;s, we can carry out the desired automation and programmatic actions we want against Project Operations, using familiar tools and without the need to write a complex integration ourselves that will persist data out into Project for the web.\nHow It All Works Because the actions we carry out within Project Operations / Microsoft Dataverse need to be replicated to Project for the web (and vice-versa), Microsoft provides a shared backend service used to process all actions. The following diagram from Microsoft offers an excellent overview of how this backend service works as we target actions against the Schedule API:\nAs part of this, developers don’t need to handle complex technical logic; instead, we can let the Schedule API do all the heavy lifting for us. 😀\nSupported API\u0026rsquo;s At the time of writing this post, the Schedule API supports the following actions:\nCreation of new Projects synchronously (via the msdyn_CreateProjectV1 action) Creation of new Team Members synchronously (via the msdyn_CreateTeamMemberV1 action) Asynchronous Create, Update or Delete actions targeting any of the Scheduling Tables, using the following actions: msdyn_PSSCreateV1 msdyn_PSSUpdateV1 msdyn_PSSDeleteV1 OperationSets Overview As the Schedule API\u0026rsquo;s rely on the backend services mentioned earlier, a mechanism is needed to ensure we can submit these requests and have complete atomicity as part of our transactions. This is provided for as part of OperationSets, a concept worthy of further discussion. In layman’s terms, these provide a wrapper to bundle all our Schedule API requests together. This request is then passed off to the Schedule API to execute the operations we\u0026rsquo;ve indicated. As such, every request we make to the Schedule API must be included as part of an OperationSet, and we must also link all OperationSets to a Project. 
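To make that concrete, the typical calling pattern from C# looks something like the sketch below. This is my own illustration rather than code from this post or its accompanying GitHub repository, and the action parameter names used (ProjectId, Description, Entity, OperationSetId) are based on my reading of Microsoft's documentation at the time of writing, so treat them as assumptions to verify against the current docs:

using System;
using Microsoft.Xrm.Sdk;

public static class ScheduleApiSketch
{
    public static void CreateTaskViaOperationSet(IOrganizationService service, Guid projectId, Guid bucketId)
    {
        // Step 1: initialise an OperationSet, linked to the Project we are targeting
        var createOperationSet = new OrganizationRequest("msdyn_CreateOperationSetV1")
        {
            ["ProjectId"] = projectId.ToString(),
            ["Description"] = "Create a new Project Task"
        };
        var operationSetId = (string)service.Execute(createOperationSet).Results["OperationSetId"];

        // Bundle a create request for a scheduling table row into the OperationSet
        var task = new Entity("msdyn_projecttask", Guid.NewGuid());
        task["msdyn_subject"] = "My new task";
        task["msdyn_project"] = new EntityReference("msdyn_project", projectId);
        task["msdyn_projectbucket"] = new EntityReference("msdyn_projectbucket", bucketId);

        var pssCreate = new OrganizationRequest("msdyn_PSSCreateV1")
        {
            ["Entity"] = task,
            ["OperationSetId"] = operationSetId
        };
        service.Execute(pssCreate);

        // Step 2: execute the OperationSet; the backend service processes it asynchronously
        var executeOperationSet = new OrganizationRequest("msdyn_ExecuteOperationSetV1")
        {
            ["OperationSetId"] = operationSetId
        };
        service.Execute(executeOperationSet);
    }
}

The paragraphs that follow break this same pattern down more formally.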
We achieve all of this via a two-step process:\nInitialise the OperationSet (via a msdyn_CreateOperationSetV1 request) Execute the OperationSet (via a msdyn_ExecuteOperationSetV1 request) Once we execute our OperationSets, these are then processed asynchronously by the backend Schedule API service; therefore, there can sometimes be a slight delay in processing substantial requests.\nWe can view the list of OperationSets created within Project Operations by navigating to the appropriate area within the sitemap:\nFor each row, we can then view details regarding its status, the Project associated with the OperationSet and each request added into the OperationSet:\nAll of this can be incredibly useful from a debugging and diagnosis standpoint.\nSchedule API Usage Scenarios In a nutshell, any scenario requiring you to automate critical operations targeting Project Operations and, specifically, any scheduling tables will be a good candidate for the Schedule API. This will include situations where, for example, you need to:\nRun batch processing scenarios from external systems into Project Operations. Automate the creation of new Projects based on existing template structures. Handle complex Work Breakdown Structures (WBSs) that are not easy to define manually in Project Operations. Working with the Schedule API Because the Schedule API leverages Dataverse actions, we can work with it in a variety of ways, including:\nAny .NET application, using the official SDK assemblies from Microsoft. .NET Dataverse Plug-ins JavaScript / TypeScript, via the Dataverse Web API Power Automate Cloud Flows via the Perform unbound action action step. We can also use tools, such as Postman, to test operations targeting the Schedule API. Check out my post from my PL-400 blog series, all about working with the Dataverse Web API to understand how we can do this.\nTo help better understand and work through some of the common ways we can work with the Schedule API, both via C# and Power Automate, check out the demo project on my GitHub repository, which you can download and experiment further with. I\u0026rsquo;ve included outline instructions on how to work with each sample provided - let me know in the comments below or on GitHub if you have any problems working with either of them.\nGotchas \u0026amp; Limitations Helpful as the Schedule API may be, there are a couple of things to keep in mind while you are working with it:\nAll operations must execute as a fully licensed Project Operations user. This means that using a dedicated Application User account is not supported, as it\u0026rsquo;s impossible to assign a license to accounts of this type. My recommendation would be to set up a full user account, which you can then utilise as a service account that all automation targeting the Schedule APIs can then run as. Keep in mind that there will be a cost implication for taking the additional license. 😉 Microsoft restricts certain actions targeting specific columns on each of the scheduling tables. For example, we can\u0026rsquo;t write any information to a Project Task\u0026rsquo;s Actual Start or Actual End columns. Be sure to consult the complete list of unsupported columns to avoid getting caught out during your development work. Any operation you execute using the Schedule API will be subject to any platform-level limitations that Project Operations enforces, such as the limit on the number of tasks and resources on a project. 
Again, ensure you\u0026rsquo;ve consulted the relevant Microsoft documentation to avoid any unwelcome surprises. To view the full list of limitations, you can consult the dedicated Microsoft Docs article section devoted to this subject.\nTroubleshooting Schedule API Issues Things can, invariably, go wrong when working with the Schedule API. The APIs do a pretty good job of identifying and throwing back any issues in your Operation Set Details as you submit them to the platform (e.g. if you provide a column that\u0026rsquo;s not supported with the APIs). Still, things can occasionally go wrong as the OperationSets are processed behind the scenes. When problems occur, we can consult the PSS Error Log (msdyn_psserrorlog) table, which will log any errors that arise alongside the appropriate error code. The information exposed here can be somewhat limited currently, so you may need to escalate each error, along with its CorrelationId, to Microsoft for further investigation.\nAdditional Resources Antti Pajunen remains, as always, my go-to person whenever it comes to anything Project Service Automation or Project Operations, and the following posts are particularly recommended. I\u0026rsquo;m indebted to the first of these in helping to put together the demos in my GitHub repository:\nDynamics 365 Project Operations: Using Schedule APIs with Power Automate and custom project templates D365 Project Operations, Project for the web: Calling Schedule APIs from Power Automate without custom connectors Another good resource you can turn to, which helps evaluate and understand the expected performance of the Schedule API, is the Project Schedule API Performance Benchmarks article, which provides some example execution times for a variety of different request types.\nI hope this post has helped remove any doubts or queries you may have had relating to the Schedule API in Dynamics 365 Project Operations. Let me know in the comments below if you have any questions.\n","date":"2021-12-05T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/whats-the-deal-with-schedule-api-dynamics-365-project-operations/","title":"What's The Deal with Schedule API? (Dynamics 365 Project Operations)"},{"content":"The Microsoft Business Application certification landscape has been relatively quiet as of late. Granted, there have been a plethora of updates/refreshes done to exams such as PL-200, MB-240 and others, but the last time we saw new exams land was earlier this year when Microsoft revamped and split out the old Dynamics 365 Fundamentals exam into two separate exams, covering both the Customer Engagement and ERP applications (a good choice, in retrospect). But, aside from that, the new set of role-based certification exams appear to be working well. My only gauge for this is based on the number of LinkedIn posts I see each day announcing success at these exams, which we can take to mean that people (or people\u0026rsquo;s employers, as Microsoft partners 😉) are embracing them with open arms.\nWith all this in mind, it was interesting to see an announcement last week regarding a brand new exam, MB-260: Microsoft Customer Data Platform Specialist, which has an unusual distinction amongst the current set of Business Applications exams; namely, it\u0026rsquo;s the very first exam aligned to a Speciality, as opposed to an Associate or Expert certification. 
We\u0026rsquo;ve tended to see these types of exams over on the Azure / Microsoft 365 side, covering technology areas that are challenging to fit in as part of a role-based exam, such as Azure Virtual Desktop. And, indeed, as we\u0026rsquo;ll discover, the new MB-260 exam does have some creep into Azure as well. With all this in mind, let\u0026rsquo;s look at what this new exam is all about, what you can expect and how you can start preparing to sit it.\nWhat\u0026rsquo;s MB-260 All About? From the looks of it, the exam aims to \u0026ldquo;plug\u0026rdquo; the gap of a significant area within the current Business Applications \u0026ldquo;stack\u0026rdquo; that either a) doesn\u0026rsquo;t fit within any of the existing set of role-based certifications or b) is too large a topic to address sufficiently/in-depth within any other exam. A glance over the exam\u0026rsquo;s description helps to confirm this view:\nCandidates for this exam implement solutions that provide insights into customer profiles and that track engagement activities to help improve customer experiences and increase customer retention.\nCandidates should have firsthand experience with Dynamics 365 Customer Insights and one or more additional Dynamics 365 apps, Power Query, Microsoft Dataverse, Common Data Model, and Microsoft Power Platform. They should also have direct experience with practices related to privacy, compliance, consent, security, responsible AI, and data retention policy.\nFrom a technology standpoint, the critical application on show here is Dynamics 365 Customer Insights, a platform we can leverage to bring together our organisation\u0026rsquo;s data to analyse and apply AI/Machine Learning techniques in a formalised manner. In the past, candidates of other exams would typically need a basic awareness of this tool to scrape by. Here, things move up several gears, and, as we\u0026rsquo;ll see, detailed knowledge of this application will be essential if you plan to attain a passing mark. For all intents and purposes, this is a Dynamics 365 Customer Insights exam.\nReviewing the Skills Measured My previous statement is borne out further when we consider the most crucial aspect of any Microsoft exam, the list of Skills Measured. As part of preparing to sit this or any Microsoft exam, it\u0026rsquo;s essential that you read through and understand what the exam is assessing you on and - most crucially - the weighting given to each topic area. In combination, you can use these to determine the amount of time to spend revising various topics and what types of questions you may see on the exam. A detailed review of the Skills Measured list for the MB-260 exam confirms the following:\n100% of the exam weighting, based on the topics covered, involves Dynamics 365 Customer Insights of some description. The highest weighted topic (20-25%) is Create customer profiles by unifying data, which concerns the core activities that we\u0026rsquo;d complete when preparing our data in the tool for the first time, such as setting up matching, implementing merges and configuring relationships, amongst others. Although other technology areas are mentioned, such as Azure Data Factory and Power Query, this is strictly in the context of the Customer Insights application itself. For example, candidates only need to know how to ingest data into the application using Azure Data Factory pipelines; there appears to be little expectation of detailed knowledge of how to configure Data Factory pipelines and other topics beyond this. 
All of the Skills Measured would appear to align towards the typical steps you would need to complete when deploying a Customer Insights solution for the first time, from design to configuration right through to administration. This makes it essential for candidates to get \u0026ldquo;hands-on\u0026rdquo; experience with the product. To summarise, if you haven\u0026rsquo;t yet spent much time working with Dynamics 365 Customer Insights (like me\u0026hellip;😅), now\u0026rsquo;s the time to start investigating.\nHow to Prepare Microsoft plans to release online learning paths and a Microsoft Official Course (MOC) for this exam, which should land early next year once the exam exits Beta. In the meantime, the only resources I can advise turning to are the official Microsoft Docs page for the product and the following Modules on the Microsoft Learn website:\nReview additional marketing apps Create a unified customer profile in Dynamics 365 Audience insights Ingest data into Audience insights Work with Dynamics 365 Audience insights Enrich data and predictions with Audience insights Note that as part of these modules, Microsoft is still using some outdated terminology; I expect this will get addressed once the new learning paths are added.\nI\u0026rsquo;m Sold. How Can I Sit the Exam? The Beta for the exam is expected to open sometime during December 2021, which is the best period for MCTs or experienced individuals to look at sitting the exam. We can then expect the exam to move out of beta sometime in early 2022; this would be when I would recommend that individuals who are new to or inexperienced with the chosen topic area look at taking it. Keep your eyes peeled on the exam page to see all relevant updates.\nAdditional Resources Nancy Tandy has also written this excellent article over on the Microsoft Learn Blog, which is well worth a read and provides some information on how you can set up a trial of Dynamics 365 Customer Insights.\nClosing Thoughts I\u0026rsquo;ve been hearing a lot of buzz regarding Dynamics 365 Customer Insights lately on the grapevine. Indeed, from what I understand, many larger enterprise customers are starting to adopt the solution with some enthusiasm and, rather interestingly, so are organisations and partners who have previously not worked with any of the Dynamics 365 Customer Engagement applications or the Power Platform. With this kind of momentum, it\u0026rsquo;s perhaps unsurprising that Microsoft now needs to formalise a lot of the learning content for the product so that organisations can get appropriately trained to implement and maintain this product once deployed. What will be interesting to see is whether or not we will see other Speciality certifications emerge in the future. There are a lot of areas across the Business Application stack that could benefit from having a real deep-dive exam, such as Power Automate. The success of MB-260, in terms of the number of times the exam is sat (i.e. the financial benefit 🤑), will likely determine the future direction. 
But I\u0026rsquo;d bet good money on there being at least a few more Business Applications Speciality exams, similar to MB-260, in the months and years ahead.\n","date":"2021-11-21T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/reviewing-the-new-mb-260-microsoft-customer-data-platform-exam/","title":"Reviewing the new MB-260: Microsoft Customer Data Platform Exam"},{"content":"As you might be able to tell from other recent blog posts, I\u0026rsquo;ve been doing lots of work lately with Dynamics 365 Sales and, specifically, the Professional variant of the application. Sales Professional can be best thought of as a \u0026ldquo;lite\u0026rdquo; CRM system, with much of the same type of functionality as we\u0026rsquo;d expect from the full-blown Enterprise application. I\u0026rsquo;ve blogged previously on the subject of differences between the two versions. The only major things you lose with the Professional application are access to features like Competitors, alongside restrictions on the number of custom tables that you can include as part of your solution. It\u0026rsquo;s worth consulting the licensing guide to break down the differences in detail before making a decision. Still, if you are in the market for your very first CRM system, you can\u0026rsquo;t go far wrong with considering Dynamics 365 Sales Professional.\nAs was the case with last week\u0026rsquo;s jolly jaunt into Dynamics 365 Sales, I was dealing yet again with another unusual requirement. In this case, the organisation in question wanted to have it so that only a single Draft Quote could ever exist for an Opportunity in the system. As part of the solution, some additional automation and reporting requirements relied upon information present within the most current Quote issued to Customers, and salespeople in the organisation were, very often, creating multiple Quotes without realising. So we needed an approach that would prevent the duplicates from ever getting made. Duplicate Detection Rules provide a mechanism to discourage users from creating duplicate rows, but users could still override this if they felt mischievous. Therefore, we decided that a server-side solution and, specifically, a C# plug-in would be required to prevent duplicates from being created altogether. As far as I know, this is the only way we can meet such a requirement; answers on a postcard, though, if you think there\u0026rsquo;s a better way. 😉 With all of this in mind then, below is the code that was implemented to achieve the requirement:\nnamespace JJG.MyPlugins { using System; using Microsoft.Xrm.Sdk; using Microsoft.Xrm.Sdk.Query; /// \u0026lt;summary\u0026gt; /// Blocks the Create or Update action for a Quote, if it\u0026#39;s detected that another Draft Quote exists linked to the same Opportunity. 
/// \u0026lt;/summary\u0026gt; public class BlockDuplicateQuoteForOpportunity : IPlugin { public void Execute(IServiceProvider serviceProvider) { IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); ITracingService tracer = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); // Check for EntityType and Message supported by your Plug-In if (!(context.MessageName == \u0026#34;Create\u0026#34; || context.MessageName == \u0026#34;Update\u0026#34;) || context.PrimaryEntityName != \u0026#34;quote\u0026#34;) { throw new InvalidPluginExecutionException($\u0026#34;Plug-In {this.GetType()} is not supported for message {context.MessageName} of {context.PrimaryEntityName}\u0026#34;); } tracer.Trace($\u0026#34;Starting execution of {nameof(BlockDuplicateQuoteForOpportunity)}\u0026#34;); // Get the newly create Quote (Create) or the Post Image for the Quote (Update), and the Opportunity lookup value Entity quote = (Entity)context.InputParameters[\u0026#34;Target\u0026#34;]; EntityReference opportunity = quote.GetAttributeValue\u0026lt;EntityReference\u0026gt;(\u0026#34;opportunityid\u0026#34;); if (opportunity == null) { tracer.Trace($\u0026#34;No Opportunity is present for Quote ID {quote.Id}. Cancelling plug-in execution\u0026#34;); return; } // Attempt to retrieve other Draft Quotes that are linked to the same Opportunity ID we have here tracer.Trace($\u0026#34;Attempting to retrieve other Quote rows linked to Opportunity ID {opportunity.Id}...\u0026#34;); IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)); IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId); QueryExpression qe = new QueryExpression() { EntityName = \u0026#34;quote\u0026#34;, ColumnSet = new ColumnSet(\u0026#34;quoteid\u0026#34;), Criteria = { Conditions = { new ConditionExpression(\u0026#34;opportunityid\u0026#34;, ConditionOperator.Equal, opportunity.Id), new ConditionExpression(\u0026#34;statecode\u0026#34;, ConditionOperator.Equal, 0), new ConditionExpression(\u0026#34;quoteid\u0026#34;, ConditionOperator.NotEqual, quote.Id), }, }, }; EntityCollection quotes = service.RetrieveMultiple(qe); tracer.Trace($\u0026#34;Got {quotes.Entities.Count} Quotes!\u0026#34;); // If one or more exist, then we throw an error to block the Create / Association if (quotes.Entities.Count \u0026gt;= 1) { tracer.Trace($\u0026#34;Multiple Draft Quotes exist for Opportunity ID {opportunity.Id}. Throwing error to cancel operation...\u0026#34;); throw new InvalidPluginExecutionException(\u0026#34;Draft Quote(s) already exist for the selected Opportunity. Only a single Draft Quote is allowed. Please edit or delete the other Draft Quote(s) before proceeding.\u0026#34;); } else { tracer.Trace($\u0026#34;No other Draft Quotes are linked to Opportunity ID {opportunity.Id}. No action required. 
Cancelling plug-in execution\u0026#34;); return; } } } } When registering this plug-in into the application, ensure that it\u0026rsquo;s aligned to the Post-Operation step on the following messages indicated below:\nFor the Update step, we also specifically filter on just the opportunityid row, to ensure the plug-in doesn\u0026rsquo;t fire unnecessarily:\nWhen the user then attempts to create or associate more than one Quote to a single Opportunity, this will be the error message they will receive:\nBecause the plug-in throws the InvalidPluginExecutionException error message, the platform will roll back the entire transaction. We can then inject our own custom message into the dialog that appears.\nAs alluded to earlier, having a low/no-code solution to achieve this requirement would be preferred. But, unless I\u0026rsquo;m missing something obvious, doing something similar via a real-time workflow would be impossible due to the RetrieveMultiple request we have to perform to get other pre-existing Quotes. As much as I make a living out of implementing these types of solutions, we should always be cautious of adopting a code-first mindset if other routes are available to us within Dynamics 365 Sales and the Power Platform. Take care to understand the \u0026ldquo;baggage\u0026rdquo; involved with a solution like this so that you don\u0026rsquo;t get caught out in future as part of an upgrade or when you later incorporate additional functionality into the equation.\n","date":"2021-11-14T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/blocking-duplicate-quotes-in-dynamics-365-sales-professional-enterprise-c-sharp/","title":"Blocking Duplicate Quotes in Dynamics 365 Sales Professional / Enterprise (C#)"},{"content":"After you\u0026rsquo;ve spent any length of time working with the out of the box sales process within Dynamics 365 Sales Professional / Enterprise, you get used to some of the behavioural quirks that can commonly cause you challenges during an implementation. A prime candidate for consideration here concerns the various types of tables used by the application. When we consider these in detail, such as the Opportunity, Quote and Order table, many of the assumptions that we make about how things work are instantly proven wrong, as these tables tend to operate by their own set of rules. For example, the ability to easily map across attributes between the different product line table types (Opportunity Product, Quote Product etc.) becomes a challenge; we must instead revert to using custom code instead to satisfy this requirement. Little things like this can exasperate both new and long-time users of the application and can often be quite frustrating when explaining to organisations using the software. 😅 Notwithstanding these gripes, I still do genuinely believe the Dynamics 365 Sales platform provides a solid base for organisations to leverage out of the box functionality, with the ability to customise further, as needed. And, when we start to bring into the equation more advanced capabilities, via things such as custom pricing plug-ins, you begin to move to an entirely new level when it comes to what we can do with this application.\nA particularly annoying behaviour quirk I dealt with recently involved Quotes marked as Won in the system. The organisation I was working with needed to, on occasions, modify details of a Won Quote after the fact. For example, if the salesperson included the wrong item or the customer changed their order after confirming it. 
As we can observe in the below screenshot, we have no option on the ribbon to reactivate the Quote once Won in the application:\nThe only \u0026ldquo;out of the box\u0026rdquo; way of dealing with this was to create and re-input all details into a new Quote; not a viable and time-effective solution. Fortunately, there is a faster way we can achieve the objective by working through the following outline steps:\nFirst, we need to change the Status \u0026amp; Status Reason of the Quote back to Draft \u0026amp; In Progress. After we have made modifications to the Quote, it needs to be set as Active again. Once Active, the Quote then must be closed as Won. Microsoft wraps the logic for this within the WinQuote action, which we can call via the SDK or against the Microsoft Dataverse Web API. These steps should be achievable via either a classic workflow or a Power Automate cloud flow automation, which would typically be our first preference for a requirement like this. However, in our case, we wanted to call these steps as part of a Custom API definition, which could then be called via a JavaScript action on a Ribbon button, similar to how I\u0026rsquo;ve described previously on the blog. And, most crucially, we needed all steps to be completed synchronously. So, given these requirements, we had to look at a solution leveraging C# instead.\nWith all of this in mind, let’s jump into the reason why you’re probably reading this post. 😉 To open a Won Quote using C#, we would need to write and execute the following code:\n//TODO: Implement code to generate your IOrganizationService reference. For plug-ins, this would look something like this: //IOrganizationServiceFactory serviceFactory = serviceProvider.GetService\u0026lt;IOrganizationServiceFactory\u0026gt;(); //IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId); Guid quoteID = new Guid(\u0026#34;9cc66ad5-2506-4394-854d-f60c35185d96\u0026#34;); Entity quote = new Entity(\u0026#34;quote\u0026#34;, quoteID); quote[\u0026#34;statecode\u0026#34;] = new OptionSetValue(0); // Draft quote[\u0026#34;statuscode\u0026#34;] = new OptionSetValue(1); // In Progress service.Update(quote); Then, once we\u0026rsquo;re ready to return the Quote to its previous state, we\u0026rsquo;d then run the following code to re-close the Quote. Note in particular, at this stage, we need to execute two separate actions and ensure they are completed successfully:\n//TODO: Implement code to generate your IOrganizationService reference. For plug-ins, this would look something like this: //IOrganizationServiceFactory serviceFactory = serviceProvider.GetService\u0026lt;IOrganizationServiceFactory\u0026gt;(); //IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId); // We execute everything in a transaction, so we can rollback cleanly on failure ExecuteTransactionRequest transactionRequest = new ExecuteTransactionRequest() { Requests = new OrganizationRequestCollection(), ReturnResponses = true, }; ExecuteTransactionResponse transactionResponse; Guid quoteID = new Guid(\u0026#34;9cc66ad5-2506-4394-854d-f60c35185d96\u0026#34;); // First, we need to Activate the Quote ... 
Entity quote = new Entity(\u0026#34;quote\u0026#34;, quoteID); quote[\u0026#34;statecode\u0026#34;] = new OptionSetValue(1); // Active quote[\u0026#34;statuscode\u0026#34;] = new OptionSetValue(2); // In Progress UpdateRequest updateRequest = new UpdateRequest() { Target = quote, }; transactionRequest.Requests.Add(updateRequest); // ...with this done, now we can re-close the Quote as Won Entity quoteClose = new Entity(\u0026#34;quoteclose\u0026#34;); quoteClose[\u0026#34;subject\u0026#34;] = \u0026#34;Quote Close\u0026#34; + DateTime.Now.ToString(); quoteClose[\u0026#34;quoteid\u0026#34;] = quote.ToEntityReference(); WinQuoteRequest winQuoteRequest = new WinQuoteRequest() { QuoteClose = quoteClose, Status = new OptionSetValue(-1), }; transactionRequest.Requests.Add(winQuoteRequest); transactionResponse = (ExecuteTransactionResponse)service.Execute(transactionRequest); tracer.Trace($\u0026#34;Quote ID {quoteID} closed as won successfully!\u0026#34;); As stated earlier, if there wasn\u0026rsquo;t a need to execute these actions synchronously, then leveraging Power Automate cloud flows would be something I\u0026rsquo;d actively encourage instead, as both the ability to update and call action steps are supported via this tool. So before you act too gung-ho with the above, validate your approach against the requirements you are working against.\nRegardless of the precise approach you take, it is a minor point of frustration that these types of steps are not natively supported within Dynamics 365 Sales. I understand why the system behaves like this, and there are arguable benefits to having a Quote locked after it\u0026rsquo;s been approved. However, I imagine the ability to make quick edits to a Won Quote is a common requirement across many different organisations, and expecting users to have to go through raising an entirely new Quote is both impractical and ludicrous in equal measure. It\u0026rsquo;s good to know that the underlying platform supports us in building workarounds such as this so that we are not left entirely adrift with a system that must fit around the business instead of the other way around.\n","date":"2021-11-07T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/reopening-won-quotes-dynamics-365-sales-professional-enterprise-c-sharp/","title":"Reopening Won Quotes in Dynamics 365 Sales Professional / Enterprise (C#)"},{"content":"Typically, when handling more complex client-side logic and rules within our model-driven Power Apps, we will invariably need to consider using JavaScript form functions. Of course, we are expected (and I would be the strongest proponent of this) to exhaust the capabilities of Business Rules first before considering this. Still, there will be a variety of scenarios where JavaScript will be the only viable option. For example:\nAs part of our logic, we need to perform operations targeting the Microsoft Dataverse Web API. We have a requirement to display different types of form notifications to users when particular conditions are met. Any situation where we have an external integration that needs to be carried out (although I would argue those scenarios are better suited for Power Automate cloud flows instead) We need to check various properties regarding the currently logged-in user and apply the logic that we need based on, for example, the security roles assigned to them. This last one is a fascinating one, which I\u0026rsquo;d like to focus on further as part of today\u0026rsquo;s blog post. 
Based on my work with the platform, there are typically two types of scenarios where this comes up. This first is when we need to potentially show or hide ribbon buttons to the user, based on their current role. The second is when we need to perform some adjustment to the form, such as locking/unlocking or showing/hiding columns. For both of these scenarios, we can turn to the Xrm.Utility.getGlobalContext() object to assist further as, within there, we can interrogate further to grab a list of all the users currently assigned security role(s). Pretty neat, I\u0026rsquo;m sure you\u0026rsquo;ll agree. 🤓 We can see an example of how to do this below:\nif (typeof (JJG) === \u0026#34;undefined\u0026#34;) {var JJG = {__namespace: true};} JJG.SampleFunctions = { getUserRolesExample: function (executionContext) { \u0026#39;use strict\u0026#39;; var formContext = executionContext.getFormContext(); //Get current users assigned security role(s) var usersRoles = Xrm.Utility.getGlobalContext().userSettings.roles; var hasRole = false; //Iterate through and determine whether the user has the roles we are looking for - which, in this example, are \u0026#39;Salesperson\u0026#39; or \u0026#39;System Administrator\u0026#39; usersRoles.forEach(function hasRoleName(item, index) { //Check passed in value for role[].name match if (item.name === \u0026#39;Salesperson\u0026#39; || item.name === \u0026#39;System Administrator\u0026#39;) { //match found set return value to true hasRole = true; }; }); //If the user has the correct role, then we can process our desired logic if(hasRole === true) { //TODO: Add your logic here. //If we were using this as part of a ribbon enable / display rule, we could add the following snippet here: //return true; } else { //TODO: Add your logic here... //If we were using this as part of a ribbon enable / display rule, we could add the following snippet here: //return false; } }, __namespace: true }; All you need to do is add this function to a Web Resource, using the instructions from step 2 and onwards in this article, and then apply to the most appropriate event handler on your form.\nWith JavaScript form functions, we unlock a range of additional capabilities that can extend our user experiences in all sorts of directions. Typically, we\u0026rsquo;ll want to avoid going too trigger-happy and writing mountains of code that runs on our forms. But, for specific scenarios such as this and whenever we\u0026rsquo;ve exhausted Business Rules as an option, you have the necessary permission (from me, at least) to start writing some code. 😀\n","date":"2021-10-31T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/evaluating-users-current-security-roles-via-javascript-model-driven-power-app/","title":"Evaluating Users Current Security Role via JavaScript in a Model-Driven Power App"},{"content":"The introduction of role-based exams by Microsoft a few years ago marked a significant change of direction for the certification landscape and also the promise of less disruption in the years ahead. A key challenge of the previous system was that we had to go through the effort and cost of re-certifying to new exams every few years; now, we need to complete a free online assessment to extend our certification before it expires. 
This new system is not only healthier on our wallets, but also ensures that professionals continually demonstrate their knowledge within their area of specialisation.\nDespite the advantages role-based certification provides, it does mean that all exams need to go through regular changes/updates to reflect how products get introduced, changed, or removed entirely. We can see a great example of this in the PL-200: Microsoft Power Platform Functional Consultant exam, which very recently saw some critical updates. If you\u0026rsquo;re contemplating sitting or have had the exam booked for a while, I\u0026rsquo;d recommend looking at what\u0026rsquo;s changed and to prepare yourself accordingly. With this in mind, let\u0026rsquo;s dive in and take a look at what\u0026rsquo;s new\u0026hellip;\nDesktop Flows Robotic Process Automation (RPA) is a massive area of investment by Microsoft at the moment, and one which Gartner has recognised the organisation for, so it\u0026rsquo;s unsurprising that this now features more heavily in this exam. Specifically, candidates must demonstrate knowledge of:\nidentify use cases for desktop flows including differentiating between attended and unattended desktop flows build web and user interface automations by using Power Automate Desktop implement variables, loops, and conditionals in Power Automate Desktop flows trigger desktop flows from cloud flows monitor automation runs analyze processes by using Process Advisor Previously, I felt you could get away with some basic, theoretical knowledge on this topic, but these changes make it much more essential for you to get \u0026ldquo;hands-on\u0026rdquo; with the product. With that in mind, it\u0026rsquo;s a good thing Microsoft now includes Power Automate Desktop as part of Windows. For additional help in getting to grips with this technology, check out the following Microsoft Learn learning paths below:\nGet started with Power Automate for desktop Automate processes with Robotic Process Automation and Power Automate Desktop Work with Power Automate Desktop Work with different technologies in Power Automate for desktop Chatbots in Power Virtual Agent Out of all the subjects on the exam, and the cause of me failing it in beta, was this topic. You need to have an excellent appreciation of all theoretical concepts relating to Power Virtual Agents, as well as the general steps you need to follow when setting up a chatbot for the first time. Power Virtual Agents, as part of October 2021\u0026rsquo;s changes, remains an important topic, with some notable tweaks as highlighted below:\ncreate a standalone chatbot add standalone chatbots to Teams and other channels create a chatbot within a Microsoft Teams channel authenticate end users for a chatbot With these changes, we take into account the fact that Power Virtual Agent chatbots are embeddable within multiple locations, with candidates expected to demonstrate an appreciation and knowledge for this, including for more complex authentication scenarios that Power Virtual Agents now supports. To summarise, I think the bar has been raised on this topic. So, if you\u0026rsquo;ve previously sat the exam and struggled with this area, things may have just now got a lot worse. 
😫 Check out the following learning paths to help you learn more about Power Virtual Agents:\nCreate bots with Power Virtual Agents Create apps, chatbots, flows, and more with Microsoft Dataverse and Teams Microsoft Teams Integration We\u0026rsquo;ve talked about additions to this exam so far, but there is also a topic that has been noticeably culled. The following skills measured, all on integrating model-driven apps within Teams, have been removed:\nAdd apps to Microsoft Teams Create a Teams app from a Power Apps app Create an app directly in Teams Configure app policies Create a Teams channel by using Power Automate If you\u0026rsquo;ve been closely following the Power Platform for a while, the reason for this should be apparent. This capability is now covered as part of Microsoft Dataverse for Teams, which should typically be our first port of call when needing to address scenarios involving Microsoft Teams. As this topic is perhaps more \u0026ldquo;citizen developer\u0026rdquo; focused than being in the domain of a functional consultant, it\u0026rsquo;s understandable why this change has occurred. The only frustrating aspect of this is that none of the other exams, in particular the PL-100 App Maker exam, appear to have been updated to include this as a topic. 👎 Here\u0026rsquo;s hoping this is rectified in the future.\nSolutions This is perhaps the most welcome and overdue change for this exam. 😁 Application Lifecycle Management (ALM) is a vitally important topic, and the cornerstone of any successful solution deployed out to the Power Platform. So naturally, it\u0026rsquo;s essential that any Functional Consultant working with the Power Platform knows how to work with solutions, connection references, environment variables and more. Up to 20% of the exam now covers these topics, with the precise list of skills measured outlined below:\nCreate a solution in a development environment\ncreate solutions to contain solution assets create a publisher add assets to a solution build solution-aware components manage solution component dependencies Transport solutions between environments\nresolve connection references set environment variables export solutions import solutions update solutions configure managed properties run Solution Checker and interpret results Localize solutions\nconfigure currencies enable language packs export and import translations So if you\u0026rsquo;ve never even considered using solutions before in the Power Platform, you really will need to get adequately prepared to address any knowledge gaps. Take a look through the Manage solutions in Power Apps and Power Automate module on the Microsoft Learn website to learn more on this topic.\nAnd the rest\u0026hellip; There are other minor changes that are too small to fit into their own heading. These include:\nThe addition of Microsoft Dataverse Views, which we now need to understand how to work with. Now needing to have familiarity with the data import wizard, exporting data via Excel and how to work with bulk deletion jobs in Microsoft Dataverse. The removal of Data Flows and how to create charts / dashboards in model-driven apps. Needing to understand how Azure AD group teams work in the context of the Power Platform. This list summarises the most notable ones worth considering, from my perspective anyway. 
😅 Be sure to check out the whole outline for the exam yourself, to familiarise yourself with the changes too.\nStudying the list of skills measured for any Microsoft exam is important, particularly when changes occur on the exam itself. I hope this post has been useful in preparing you for what\u0026rsquo;s changed in PL-200. Leave a comment below if you have any questions!\n","date":"2021-10-24T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/reviewing-october-2021-pl-200-power-platform-functional-consultant-exam-changes/","title":"Reviewing October 2021's PL-200 Power Platform Functional Consultant Exam Changes"},{"content":"For a long time, Paginated Reports have remained one of those mystical and hard-to-obtain features as part of Power BI Online, designed to address complex scenarios or migrations involving SQL Server Reporting Services (SSRS). So really, from a Power Platform / Dynamics 365 Customer Engagement standpoint, not something we needed to worry about. However, there have been two important milestones that have changed the landscape considerably:\nFirst, we had the introduction of the Power BI Premium Per User license. This introduction significantly lowers the pricing entry point for this capability, thereby making it available \u0026ldquo;to the masses.\u0026rdquo; Secondly, with the introduction of the Structured Query Language (SQL) endpoint for Microsoft Dataverse, it now becomes possible to execute more complex query types and unlock support for things like Direct Query. Paginated Reports support this endpoint natively, meaning it\u0026rsquo;s incredibly straightforward for us to start authoring reports using SQL. With these crucial changes, it now becomes vital for us to consider leveraging Paginated Reports as part of the solutions we build in the Power Platform. But there is one major challenge - how can we easily let users execute Paginated Reports from our model-driven / Dynamics 365 Customer Engagement apps? And how can we straightforwardly pass through data to filter our reports accordingly? Well, as the title of this post would suggest, let me show you how you can achieve this on the Account table using a mixture of customisation and technical wizardry involving JavaScript:\nFirst, we need to prepare our premium capacity workspace. This is as straightforward as enabling one of the following options below, either on workspace creation or for one you have already: Note that these options will be blurred out unless you have the appropriate license(s). Once you have your workspace ready, take a note of the workspace ID, which is present in the URL - we will need this later on.\nEnsure that you have pushed out a Paginated Report with the appropriate parameter values defined into the workspace above. 
In this example, I\u0026rsquo;m just using a simple report with a single parameter that is then written out onto the report body: Push out the report to your premium capacity workspace and navigate into it as, once again, we\u0026rsquo;ll need to take a note of the report ID:\nTo allow us to open the reports from a model-driven app, we must use a JavaScript function that calls the Xrm.Navigation.openUrl function, alongside some Retrieve/RetrieveMultiple requests that regular followers of the blog may be familiar with: if (typeof (JJG) === \u0026#34;undefined\u0026#34;) {var JJG = {__namespace: true};} JJG.Ribbon = { paginatedReportEnabledRule: function(primaryControl) { var formContext = primaryControl.getFormContext(); //Only display the ribbon button if the parameter value is present on the form. var accountName = formContext.getAttribute(\u0026#39;name\u0026#39;).getValue(); if (accountName === null) { return false; } else { return true; } }, openPaginatedReport: async function(formContext, evName) { \u0026#39;use strict\u0026#39;; //Only proceed if we have the parameter value present var accountName = formContext.getAttribute(\u0026#39;name\u0026#39;).getValue(); if (accountName !== null) { var workspaceID = null; var reportID = null; //Get the Workspace ID await Xrm.WebApi.retrieveMultipleRecords(\u0026#39;environmentvariablevalue\u0026#39;, \u0026#34;?$select=value\u0026amp;$expand=EnvironmentVariableDefinitionId\u0026amp;$filter=(EnvironmentVariableDefinitionId/schemaname eq \u0026#39;jjg_powerbi_workspaceid\u0026#39;)\u0026#34;).then( function success(result) { workspaceID = result.entities[0].value; }, function (error) { Xrm.Navigation.openErrorDialog({ details: error.message, message: \u0026#39;A problem occurred while retrieving an Environment Variable value. Please contact support.\u0026#39;}); } ); //Get the Report ID await Xrm.WebApi.retrieveMultipleRecords(\u0026#39;environmentvariablevalue\u0026#39;, \u0026#34;?$select=value\u0026amp;$expand=EnvironmentVariableDefinitionId\u0026amp;$filter=(EnvironmentVariableDefinitionId/schemaname eq \u0026#39;\u0026#34; + evName + \u0026#34;\u0026#39;)\u0026#34;).then( function success(result) { reportID = result.entities[0].value; }, function (error) { Xrm.Navigation.openErrorDialog({ details: error.message, message: \u0026#39;A problem occurred while retrieving an Environment Variable value. Please contact support.\u0026#39;}); } ); //Provided Workspace and Report ID are present, open the report in a new browser tab. if (reportID !== null \u0026amp;\u0026amp; workspaceID !== null) { var url = \u0026#34;https://app.powerbi.com/groups/\u0026#34; + workspaceID + \u0026#34;/rdlreports/\u0026#34; + reportID + \u0026#34;?rp:CRMAccountName=\u0026#34; + accountName; Xrm.Navigation.openUrl(url); } else { Xrm.Navigation.openErrorDialog({ details: \u0026#39;Unable to open report as the workspace/report ID cannot be determined. Please contact support\u0026#39;}); } } else { Xrm.Navigation.openErrorDialog({ details: \u0026#39;Unable to open report as the Account has no account number. 
Please provide a value and try again.\u0026#39;}); } }, openPaginatedReportFromView: async function(selectedRows, evName) { \u0026#39;use strict\u0026#39;; //Get the Account ID from the currently selected row var accountUID = selectedRows[0]; var accountName = null; //Retrieve the Account Name using the above ID await Xrm.WebApi.retrieveRecord(\u0026#34;account\u0026#34;, accountUID, \u0026#34;?$select=name\u0026#34;).then( function success(result) { accountName = result.name; }, function (error) { Xrm.Navigation.openErrorDialog({ details: \u0026#39;A problem occurred while retrieving the Account row. Please contact support.\u0026#39;}); } ); //Provided we have a value, we can continue if (accountName !== null) { var workspaceID = null; var reportID = null; //Get the Workspace ID await Xrm.WebApi.retrieveMultipleRecords(\u0026#39;environmentvariablevalue\u0026#39;, \u0026#34;?$select=value\u0026amp;$expand=EnvironmentVariableDefinitionId\u0026amp;$filter=(EnvironmentVariableDefinitionId/schemaname eq \u0026#39;jjg_powerbi_workspaceid\u0026#39;)\u0026#34;).then( function success(result) { workspaceID = result.entities[0].value; }, function (error) { Xrm.Navigation.openErrorDialog({ details: error.message, message: \u0026#39;A problem occurred while retrieving an Environment Variable value. Please contact support.\u0026#39;}); } ); //Get the Report ID await Xrm.WebApi.retrieveMultipleRecords(\u0026#39;environmentvariablevalue\u0026#39;, \u0026#34;?$select=value\u0026amp;$expand=EnvironmentVariableDefinitionId\u0026amp;$filter=(EnvironmentVariableDefinitionId/schemaname eq \u0026#39;\u0026#34; + evName + \u0026#34;\u0026#39;)\u0026#34;).then( function success(result) { reportID = result.entities[0].value; }, function (error) { Xrm.Navigation.openErrorDialog({ details: error.message, message: \u0026#39;A problem occurred while retrieving an Environment Variable value. Please contact support.\u0026#39;}); } ); //Provided Workspace and Report ID are present, open the report in a new browser tab. if (reportID !== null \u0026amp;\u0026amp; workspaceID !== null) { var url = \u0026#34;https://app.powerbi.com/groups/\u0026#34; + workspaceID + \u0026#34;/rdlreports/\u0026#34; + reportID + \u0026#34;?rp:CRMAccountName=\u0026#34; + accountName; Xrm.Navigation.openUrl(url); } else { Xrm.Navigation.openErrorDialog({ details: \u0026#39;Unable to open report as the workspace/report ID cannot be determined. Please contact support\u0026#39;}); } } else { Xrm.Navigation.openErrorDialog({ details: \u0026#39;Unable to open report as the Account has no account number. Please provide a value and try again.\u0026#39;}); } }, __namespace: true } The result is three functions:\npaginatedReportEnabledRule: This will be used to enable the button if the required parameter value (the Account Name) is present on the Account form. openPaginatedReport: This will allow the user to open the Paginated Report from an Account form. openPaginatedReportFromView: This will enable the user to open the Paginated Report when selecting an Account row from a view. Create these functions as part of a new or existing JavaScript Web Resource within your solution. Note as well with the above script how we are constructing the URL to open our report. 
We should end up with something like this as a result:\nhttps://app.powerbi.com/groups/6a4f40a8-ec74-4b79-90f0-51056ffc1b9/rdlreports/1ce95376-26b0-4ced-bcd4-abac2c25db82?rp:CRMAccountName=MyAccountName\nYou can consult this handy Microsoft Docs article for additional guidance on this subject.\nThe above script leverages two Environment Variables. Therefore, we also need to set these up within our environment, as indicated below: With all prerequisites set up, we can now create the buttons on the Account table. As always, we turn to the good ol\u0026rsquo; Ribbon Workbench to assist us with this. The first thing we need to do is ensure we\u0026rsquo;ve got a temporary solution set up containing just our Account table (just the skeleton; no sub-components required): From there, open this in the Workbench and add on two Buttons - one for the form itself and another for the Account views:\nNext, add on two new commands like so - each one should then be tagged back to the buttons set up previously and the Library / Function Name values updated accordingly for your environment:\nNote as well the Enable Rules that need to be added for both commands:\nPublish your changes once you\u0026rsquo;ve configured everything.\nNow it\u0026rsquo;s time to test. 😉 If we navigate onto the Account form, we should be able to see and test our new button accordingly:\nLikewise, for our Account view:\nBefore wrapping things up, there are a couple of other things to keep in mind with all this:\nGiven the differing access models for Power Apps / the Dynamics 365 CE applications, you will need to make sure users are given access to the premium-capacity workspace for all of this to work (see the short PowerShell sketch at the end of this post for one way of handling this). Users working within Power Apps will need Read privileges granted for the Environment Variable Definition table to ensure the JavaScript works when retrieving the Environment Variable values. This solution should support passing through multiple parameter values if required. So as an example, to provide a second parameter value called MySecondParameter with a static value, you would do something like this instead: var url = \u0026#34;https://app.powerbi.com/groups/\u0026#34; + workspaceID + \u0026#34;/rdlreports/\u0026#34; + reportID + \u0026#34;?rp:CRMAccountName=\u0026#34; + accountName + \u0026#34;\u0026amp;rp:MySecondParameter=MyValue\u0026#34;; Xrm.Navigation.openUrl(url); Thanks to the new Premium Per User SKU, it\u0026rsquo;s fantastic that paginated reports are more accessible and affordable than ever before. And, with equal, if not better, performance compared to your standard CRM-based reports, they are worth considering as part of your Power Apps / Dynamics 365 Customer Engagement solutions. I would emphasise this even further if you find yourself needing to write a particularly complex report that you can only author using a complex query or which targets an external system. Hopefully, with the steps outlined in this post, you can very quickly start to get them included as part of your existing model-driven apps. Let me know in the comments below if you have any questions or get stuck setting this up yourself.
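On the workspace access point above, if you need to grant several users access to the premium-capacity workspace, the MicrosoftPowerBIMgmt PowerShell module can save a fair bit of clicking around. The snippet below is a minimal sketch only - the workspace ID and email address are placeholders, and I\u0026rsquo;m assuming the Viewer role is sufficient for simply running the report (adjust the role to suit, and double-check the cmdlet parameters against the module documentation for your version):
# Requires the MicrosoftPowerBIMgmt module (Install-Module MicrosoftPowerBIMgmt)
Connect-PowerBIServiceAccount
# Grant a user Viewer rights on the premium-capacity workspace hosting the report (placeholder values)
Add-PowerBIWorkspaceUser -Id "<My Workspace ID>" -UserEmailAddress "some.user@contoso.com" -AccessRight Viewer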
😀\n","date":"2021-10-17T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/opening-paginated-reports-from-model-driven-power-apps-dynamics-365-customer-engagement/","title":"Opening Paginated Reports from Model-Driven Power Apps / Dynamics 365 Customer Engagement"},{"content":"When working with Windows Virtual Machines (VM\u0026rsquo;s) in Microsoft Azure, we can be assured that some essential security aspects of our machines configuration are handled for us automatically. An excellent example of this is disk encryption, which Microsoft automatically enables for us at rest on our machines using a platform-managed key. For most scenarios, this default option will serve us best and - most critically - will avail us of any problems in the future, should we decide to migrate our VM\u0026rsquo;s across into different subscriptions. For more comprehensive scenarios, we can instead turn to the Azure Disk Encryption solution, installable via an extension. We can use the following Azure RM template snippet to deploy this out into our resource group:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;parameters\u0026#34;: { \u0026#34;vmName\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;volumeType\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;location\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;encryptionOperation\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;keyVaultURL\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;keyVaultResourceID\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;keyEncryptionKeyURL\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; } }, \u0026#34;resources\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Compute/virtualMachines/extensions\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2018-10-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;[concat( parameters(\u0026#39;vmName\u0026#39;), \u0026#39;/AzureDiskEncryption\u0026#39;)]\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[parameters(\u0026#39;location\u0026#39;)]\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;publisher\u0026#34;: \u0026#34;Microsoft.Azure.Security\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;AzureDiskEncryption\u0026#34;, \u0026#34;typeHandlerVersion\u0026#34;: \u0026#34;2.2\u0026#34;, \u0026#34;autoUpgradeMinorVersion\u0026#34;: true, \u0026#34;forceUpdateTag\u0026#34;: \u0026#34;1.0\u0026#34;, \u0026#34;settings\u0026#34;: { \u0026#34;EncryptionOperation\u0026#34;: \u0026#34;[parameters(\u0026#39;encryptionOperation\u0026#39;)]\u0026#34;, \u0026#34;KeyVaultURL\u0026#34;: \u0026#34;[parameters(\u0026#39;keyVaultURL\u0026#39;)]\u0026#34;, \u0026#34;KeyVaultResourceId\u0026#34;: \u0026#34;[parameters(\u0026#39;keyVaultResourceID\u0026#39;)]\u0026#34;, \u0026#34;KeyEncryptionAlgorithm\u0026#34;: \u0026#34;RSA-OAEP\u0026#34;, \u0026#34;VolumeType\u0026#34;: \u0026#34;[parameters(\u0026#39;volumeType\u0026#39;)]\u0026#34;, \u0026#34;KeyEncryptionKeyURL\u0026#34;: \u0026#34;[parameters(\u0026#39;keyEncryptionKeyURL\u0026#39;)]\u0026#34;, \u0026#34;KekVaultResourceId\u0026#34;: \u0026#34;[parameters(\u0026#39;keyVaultResourceID\u0026#39;)]\u0026#34; } } } ] } In this scenario, as well as installing the above 
extension onto our VM\u0026rsquo;s, we must also provide an encryption key. As such, we have an additional dependency in this eventuality: a Key Vault resource with the appropriate secret value set up on it. While this does afford us some greater control and security over how our VM\u0026rsquo;s data is encrypted, this route does add a degree of complexity to our solution and can cause us some problems further down the line. Specifically, remember when I said how easy it was to move VM\u0026rsquo;s across subscriptions? In this case, if we\u0026rsquo;re using the above extension, we\u0026rsquo;ll get errors similar to these when we attempt this action:\nThankfully, all is not lost - all we need to do is carry out the following steps to proceed with moving our VM to another subscription:\nUsing the Az PowerShell module, log in to the Azure subscription where your VM resides: Connect-AzAccount #Login using modern authentication Set-AzContext -SubscriptionId \u0026lt;My Subscription ID\u0026gt; Run the following cmdlet to disable encryption on the machine. Make a note of the following warning as well and ensure that you have shut down any critical applications on the operating system first: Disable-AzVMDiskEncryption -ResourceGroupName \u0026#34;\u0026lt;My Resource Group\u0026gt;\u0026#34; -VMName \u0026#34;\u0026lt;My VM Name\u0026gt;\u0026#34; Proceed with migrating the resources, which should complete without incident. As part of the migration, ensure that you\u0026rsquo;ve included the Key Vault resource containing your encryption key. Once the migration has been completed, run the following set of cmdlets to change your target subscription and then re-enable disk encryption. Similar to disabling encryption, this may reboot the VM and take around 15 minutes to complete: Set-AzContext -SubscriptionId \u0026lt;My Subscription ID\u0026gt; #Should be the Subscription ID where the resources have moved to $RGName = \u0026#34;\u0026lt;My New Resource Group\u0026gt;\u0026#34; $VaultName = \u0026#34;\u0026lt;My Key Vault Name\u0026gt;\u0026#34; $KeyVault = Get-AzKeyVault -VaultName $VaultName -ResourceGroupName $RGName $DiskEncryptionKeyVaultUrl = $KeyVault.VaultUri $KeyVaultResourceId = $KeyVault.ResourceId $VolumeType = \u0026#34;All\u0026#34; Set-AzVMDiskEncryptionExtension -ResourceGroupName $RGName -VMName \u0026#34;\u0026lt;My VM Name\u0026gt;\u0026#34; -DiskEncryptionKeyVaultUrl $DiskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId -VolumeType $VolumeType With that done, you can sit back, relax, and be satisfied that you\u0026rsquo;ve completed your migration successfully. 🙂\nAs regular followers of the blog will know, I\u0026rsquo;ve been through a few tricky migrations in the past involving Microsoft Azure, and VM\u0026rsquo;s always seem to be the \u0026ldquo;problem child\u0026rdquo; as part of this. It\u0026rsquo;s good to see that some aspects of managing VM\u0026rsquo;s have improved significantly over time, such as with the introduction of managed disks. However, there will still be problems like this that we occasionally encounter, especially given the myriad of configuration options that could be different across VM\u0026rsquo;s.
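As a quick sanity check, it\u0026rsquo;s also worth confirming the encryption state of the machine before you start and again once you\u0026rsquo;ve re-enabled encryption at the other end. A minimal sketch using the Az module (the resource group and VM name are placeholders):
# Reports the OS and data disk encryption state (OsVolumeEncrypted / DataVolumesEncrypted) for the VM
Get-AzVMDiskEncryptionStatus -ResourceGroupName "<My Resource Group>" -VMName "<My VM Name>"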
This \u0026ldquo;known unknown\u0026rdquo; emphasises the importance of having multiple environments for your critical Azure resources so that you can safely test and verify any tricky migration steps, such as the ones outlined in this post, before performing them against any business-critical infrastructure.\n","date":"2021-10-10T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/moving-disk-encrypted-virtual-machines-across-subscriptions-microsoft-azure/","title":"Moving Disk Encrypted Virtual Machines Across Subscriptions (Microsoft Azure)"},{"content":"Typically, whenever you plan to do any automation involving Microsoft Azure, your trusty tool to turn to is the Az PowerShell module and the various cmdlets it has available. Somewhere in this treasure trove, you will almost always find one or two cmdlets available that will allow you to achieve your particular requirement. As a general rule of thumb, any task that we can comfortably carry out within the portal will also be available to do via PowerShell. Well, at least that\u0026rsquo;s what I thought until recently\u0026hellip;\nTo elaborate a bit further - we had a requirement, as part of an Azure DevOps release pipeline, to ensure that all Logic Apps (which, incidentally, were linked to an Integration Service Environment (ISE)) within a resource group could be disabled and then enabled again. The Logic Apps continuously inserted data into Microsoft Dataverse and, during a solution deployment, the environment was, naturally, inaccessible; therefore, to ensure no failed Logic App runs, switching them off while the deployment ran became the most obvious solution for us to consider. Sounds straightforward, and surely there must be a PowerShell cmdlet like Enable-AzLogicApps or Disable-AzLogicApps, right?\nWell, I wouldn\u0026rsquo;t be blogging about it today if it was as straightforward as that. 😁 But yes, unfortunately, there is no simple way of doing this using an Az PowerShell module cmdlet, so we must instead turn to another option - the Azure REST API.
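Before we get to the full script, it helps to know what the REST API route actually involves: enabling or disabling a Logic App is simply a POST request against the workflow resource. A rough sketch of the two calls is below - $token, $subscriptionID, $resourceGroup and $logicAppName are all placeholders, and the full script that follows wraps these same calls with authentication and iterates across the whole resource group:
# Illustrative only - the complete script below handles token generation and iteration for us
$baseUri = "https://management.azure.com/subscriptions/$subscriptionID/resourceGroups/$resourceGroup/providers/Microsoft.Logic/workflows"
# Disable a single Logic App...
Invoke-RestMethod -Method Post -Uri "$baseUri/$logicAppName/disable?api-version=2016-06-01" -Headers @{ authorization = "Bearer $token" }
# ...and enable it again
Invoke-RestMethod -Method Post -Uri "$baseUri/$logicAppName/enable?api-version=2016-06-01" -Headers @{ authorization = "Bearer $token" }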
Fortunately, via this route, we do have the option of both enabling and disabling our Logic Apps and, provided you are comfortable working with PowerShell, we can look to put together a script similar to the one below to achieve this requirement:\nparam( #subscriptionID: GUID representing the Azure subscription to connect to [Parameter(Mandatory=$true)] [String]$subscriptionID, #tenantID: GUID of the Azure Active Directory (AAD) tenant being connected to [Parameter(Mandatory=$true)] [String]$tenantID, #clientID: AAD Client ID for the subscription service principle [Parameter(Mandatory=$true)] [String]$clientID, #clientSecret: AAD Client Secret for the subscription service principle [Parameter(Mandatory=$true)] [String]$clientSecret, #resourceGroup: Indicate the resource group to target [Parameter(Mandatory=$true)] [String]$resourceGroup, #action: Indicate the desired action to perform against the Logic Apps [ValidateSet(\u0026#34;enable\u0026#34;,\u0026#34;disable\u0026#34;)] [Parameter(Mandatory=$true)] [String]$action ) #Build and send the request to obtain a valid Access Token $authParam = @{ Uri = \u0026#34;https://login.microsoftonline.com/$tenantId/oauth2/token\u0026#34;; Method = \u0026#39;Post\u0026#39;; Body = @{ grant_type = \u0026#39;client_credentials\u0026#39;; resource = \u0026#39;https://management.core.windows.net/\u0026#39;; client_id = $clientID; client_secret = $clientSecret } } $result = Invoke-RestMethod @authParam $token = $result.access_token Write-Host \u0026#34;Access Token generated successfully\u0026#34; #Retrieve all Logic Apps within the current environment Write-Host \u0026#34;Attempting to retrieve all Logic Apps in resource group $resourceGroup...\u0026#34; $getLAsParam = @{ Uri = \u0026#34;https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Logic/workflows?api-version=2016-06-01\u0026#34; ContentType = \u0026#39;application/json\u0026#39; Method = \u0026#39;GET\u0026#39; headers = @{ authorization = \u0026#34;Bearer $token\u0026#34; host = \u0026#39;management.azure.com\u0026#39; } } $laList = Invoke-RestMethod @getLAsParam #Iterate through and perform the desired action against each logic app Foreach ($la in $laList.value) { $laName = $la | Select name $laNameVal = $laName.name $actionLabel1 = If ($action -eq \u0026#34;enable\u0026#34;) {\u0026#34;Enabling\u0026#34;} Else {\u0026#34;Disabling\u0026#34;} $actionLabel2 = If ($action -eq \u0026#34;enable\u0026#34;) {\u0026#34;enabled\u0026#34;} Else {\u0026#34;disabled\u0026#34;} Write-Host \u0026#34;$actionLabel1 Logic App with name $laNameVal...\u0026#34; $laActionParam = @{ Uri = \u0026#34;https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Logic/workflows/$laNameVal/\u0026#34; + $action + \u0026#34;?api-version=2016-06-01\u0026#34; ContentType = \u0026#39;application/json\u0026#39; Method = \u0026#39;POST\u0026#39; headers = @{ authorization = \u0026#34;Bearer $token\u0026#34; host = \u0026#39;management.azure.com\u0026#39; } } $actionResult = Invoke-RestMethod @laActionParam Write-Host \u0026#34;Logic App $laNameVal $actionLabel2 successfully!\u0026#34; } Now, a couple of things to note with this script\nYou will need to set up an Application Registration with permission to access the subscription(s) in question, and from here, you can derive the clientID and clientSecret parameter values for the script to work. 
The action parameter is the bit that controls what the script does - either enable or disable the Logic Apps. This provides the flexibility to run the PowerShell cmdlets as many times as you need within your pipeline. The script will target every single Logic App within the resource group you specify. So watch out. 😉 I\u0026rsquo;m thankful for this great article that talks through how to work with the Azure REST API using PowerShell, and I\u0026rsquo;ve based portions of the above script on the examples shared here. It is a little annoying that we have to author scripts like this to achieve what seems to be a relatively common type of action that we\u0026rsquo;d want to perform against our Logic Apps using PowerShell. Fortunately, the REST API proves to be our saviour and allows us to achieve the requirement while still leveraging the capabilities of PowerShell at the same time.\n","date":"2021-10-03T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/enabling-disabling-azure-logic-apps-using-powershell/","title":"Enabling/Disabling Azure Logic Apps Using PowerShell"},{"content":"It\u0026rsquo;s been great to see the number of people (including myself 😏) who have joined the Microsoft Certified Trainer (MCT) programme over the past two years. Part of the reason for this influx is due to the generous waiving of the entry fee requirements, meaning that anyone with the relevant experience and certifications can now become an MCT at no charge until the end of this calendar year. Given the shortage and number of available opportunities for MCT\u0026rsquo;s across the globe, I would encourage anyone with the relevant experience and qualifications to take this opportunity to expand their C.V. and access some valuable benefits in the process.\nIf you\u0026rsquo;re a newly minted MCT, you need to get your head around the different lab providers for all Microsoft Official Courses (MOC). Lab providers serve an essential function by providing a dedicated environment for our students to complete exercises, familiarise themselves with the technology in question, and, most crucially, work within a system that will not cause any damage if things go wrong. Familiarity with each of the lab providers available is essential as an MCT, as we typically find ourselves working on behalf of multiple clients or learning providers. At the time of writing this post and to the best of my knowledge, three providers are delivering MOC labs on behalf of Microsoft:\nLearn on Demand (AKA Labs on Demand) Xtreme Labs go deploy Each provider has its particular quirks, advantages, and disadvantages, which can take you some time to get to grips with as a new MCT. As part of today\u0026rsquo;s blog post, I wanted to provide an overview of each provider and point towards some of the things to watch out for. I\u0026rsquo;ll also outline how MCT\u0026rsquo;s can go about requesting complimentary access to each respective platform.\nLearn on Demand (AKA Labs on Demand) I first used Labs on Demand as a student back in 2017, meaning that they have been around for a lifetime (in cloud terms at least). And, based on my limited experience so far, they still appear to be a favourite of many learning providers. It\u0026rsquo;s easy to see why, perhaps. 
As well as providing standard features, such as the ability to monitor and control students labs, Labs on Demand benefit from a series of different utility tools that each lab has available, including:\nVirtual Machine snapshots, allowing students to restore the lab to 2 previous states. Handy if things go catastrophically wrong. Quick keyboard shortcut options for things like ALT + TAB, Windows Key functions, and other shortcuts that may prove challenging to enter without triggering unwanted effects on your local machine. Quick and easy access to a virtual keyboard to enter more complex inputs, if needed. Out of all the providers as well, their support experience is generally fast and responsive. Requests for access into their platform and responding to other issues have always been prompt, which is great to see.\nOne issue (and I can\u0026rsquo;t complain too much, as I was always told never to look a gift horse in the mouth 😅) is that the process for getting into the labs is a little too \u0026ldquo;clicky\u0026rdquo; for my liking. By this, I mean that you have to navigate through many pages to access the complete course list and start your labs. The other providers don\u0026rsquo;t have this issue, thereby allowing you to begin your labs quickly if you\u0026rsquo;re in a hurry. To summarise, Labs on Demand feels very much like the reliable option to turn to, but I would question for how long that remains. To get access to the platform as an MCT, complete the following form. Note that you have to request access each year, as and when you renew.\nXtreme Labs Last year, I first came across Xtreme Labs, as I hadn\u0026rsquo;t previously attended a MOC that used them. From a feature standpoint, I\u0026rsquo;d say that they are just under par with Labs on Demand. In comparison, one significant benefit is that it\u0026rsquo;s super easy to jump into a Lab. All available MOC courses are shown to you as you log in, meaning you can quickly get into the lab you need without any hassle. They also automatically check and extend your access to the platform as you renew your MCT status, which is nice.\nWith regret, that\u0026rsquo;s as good as it gets with Xtreme Labs, and sadly, the platform does not hold up to scrutiny in practice. As part of recently delivering a course, I\u0026rsquo;ve found that the lab machines did not have the latest browser versions installed, such as Edge Chromium or Google Chrome. In addition, each particular lab was its own entirely separate environment, meaning that information saved on the machine was lost as we moved further ahead into the course. Not great, especially for my students. 😔 Here\u0026rsquo;s hoping that things improve over time.\nIn short, I would say Xtreme Labs is my least favourite provider to work with to date, despite them getting some elements of their platform right. See what you think yourself by requesting access into their platform. A word of caution on this - I had to contact them by email in the end to get my access sorted, as the form submissions didn\u0026rsquo;t appear to route to the correct location.\nGo Deploy go deploy are, from what I can gather, one of the newer providers and certainly one to watch with interest. All of the features you\u0026rsquo;d expect are available to MCT\u0026rsquo;s, including the ability to monitor student\u0026rsquo;s lab sessions remotely. 
The go deploy lab experience benefits from a straightforward interface and, in particular, all labs start within the current browser tab instead of forcing the user to navigate away elsewhere. In addition, we can easily take screenshots of the lab environment, and login credentials for the machine are also presented front and center, which is typically a common issue that students encounter when trying to access their lab for the first time. A minor complaint is that guidance screenshots have to be specifically turned on by the user to become visible; however, given the propensity for these to make the instructions less clear to follow, it\u0026rsquo;s perhaps understandable why go deploy disable these by default. Overall, go deploy provides a pleasant and fluid experience, and I\u0026rsquo;m looking forward to seeing how their platform develops over time. As with all other providers, MCT\u0026rsquo;s can request complimentary access into go deploy\u0026rsquo;s system by completing this form.\nCopy \u0026amp; Paste + Lab Sessions = 😡 Another thing to keep in mind, which is more a criticism of Microsoft\u0026rsquo;s tooling as opposed to the lab providers themselves, is that any course involving writing custom code invariably introduces issues for students as they complete each lab. This is because we can paste in many of the code snippets from the lab instructions. Then, tools such as Visual Studio mess up the code by adding redundant brackets, curly braces, and other \u0026ldquo;helpful\u0026rdquo; auto-complete characters as students innocently paste in these snippets. For any course that involves this (and this is one of the reasons you should work through all labs yourself before training begins 😉), I typically recommend that students paste the code into Notepad first and then copy that into the IDE tool in question. This workaround seems to be the most reliable way to avoid students having problems with their labs and, as I say, is not something that the lab providers can address as it\u0026rsquo;s Microsoft\u0026rsquo;s problem, not theirs.\nI hope this post has given you a good overview of the different lab providers and what to watch out for as a new MCT. Let me know if you have any questions in the chat below!\n","date":"2021-09-26T00:00:00Z","image":"/images/Microsoft-FI.png","permalink":"/lab-provider-overview-for-microsoft-certified-trainers/","title":"Lab Provider Overview for Microsoft Certified Trainers"},{"content":"Deploying a solution in Microsoft Dataverse can often be a scary experience, as you find yourself grappling with all manner of different error messages that could occur at the drop of a hat. As an example, here\u0026rsquo;s a great one that crept up for me recently:\nMessage: \u0026lsquo;jjg_myentity\u0026rsquo; entity doesn\u0026rsquo;t contain attribute with Name = \u0026rsquo;transactioncurrencyid\u0026rsquo; and NameMapping = \u0026lsquo;Platform\u0026rsquo;. MetadataCacheDetails: ProviderType=Dynamic, StandardCache=False, IsLoadedInStagedContext = True, Timestamp=15925078, MinActiveRowVersion=15925078, MetadataInstanceId=66519129, LastUpdated=2021-09-01 12:29:08.553Detail:\nA bit of context may be helpful at this stage - we had created a table within our environment and added a currency column. When you do this for the first time in Dataverse, the platform will automatically create two additional columns onto the target table:\nCurrency (transactioncurrencyid): Used to record the currency to apply for each row. 
Typically, this will default to the primary currency specified when creating the Dataverse environment. Exchange Rate (exchangerate): Used to show the conversion rate of the row\u0026rsquo;s currency, derived from the Currency row selected when creating the table row. Modifying these fields is generally inadvisable, and, most crucially, it isn\u0026rsquo;t possible to delete these fields once you\u0026rsquo;ve added a Currency column to your table for the first time. Therefore, I would recommend only creating a Currency column if you\u0026rsquo;re 100% certain the business needs it. The reason for this becomes all too apparent when we return to the above error message; in our scenario, we had added a Currency column to our table and then removed it. When we deployed our table for the first time, everything worked just fine. When we then tried to deploy again, we started hitting the above error. The presumption is that, because the field exists within our source table, it now expects it to be present within our destination table. However, because no Currency attribute exists on the table, Dataverse doesn\u0026rsquo;t appear to realise this or create the missing columns for us, hence the above error.\nUnderstanding the circumstances behind the error is all well and good, but how do we resolve it? Attempting to delete the two fields listed above within our source environment is impossible - Microsoft greys out the option in the Maker portal, and we get an error message when attempting to remove them in the \u0026ldquo;classic\u0026rdquo; interface:\nThe only viable course of action is to proceed with adding on a \u0026ldquo;dummy\u0026rdquo; Currency column so that Dataverse recognises that these columns need to be created as part of the subsequent import you perform:\nNot ideal at all, I agree, but provided you clearly label the field and obscure it from any views, forms, etc., the fix is somewhat palatable. Once you\u0026rsquo;ve added this \u0026ldquo;dummy\u0026rdquo; column to your table, you can attempt the solution deployment again, and everything should go green.\nData modelling is an important activity that you should always perform before spinning up the maker portal and creating your columns for the first time. As this example clearly shows, ensuring that you\u0026rsquo;re 100% confident in your decision to leverage Currency columns will prevent nasty deployment errors like this. Whether this strange behaviour quirk is a bug or not, I\u0026rsquo;ll leave it up to you to decide. 😉\n","date":"2021-09-19T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/resolving-entity-doesnt-contain-attribute-with-name-transactioncurrencyid-errors-microsoft-dataverse/","title":"Resolving \"Entity doesn't contain attribute with Name = 'transactioncurrencyid'\" Errors (Microsoft Dataverse)"},{"content":"I was chatting with the legendary George \u0026ldquo;The Enabler\u0026rdquo; Doubinski a while back, and he very rightly pointed out that a post I did a few years back on connector options involving Power BI Desktop, Dynamics 365 Customer Engagement and the Common Data Service (now Microsoft Dataverse) was\u0026hellip;a little out of date. 😅 Two years, as it happens, is a lifetime in the Microsoft Business Applications space, and a lot has happened since the post went live. So let\u0026rsquo;s do a review of the landscape as it stands today (September 2021), see what\u0026rsquo;s available and, most crucially, what\u0026rsquo;s changed.
And be aware, if you are reading this in 2023, things could have changed yet again. 😉\nAt the time of writing this post, we have five options available to us:\nThe Microsoft Dataverse Connector The Common Data Service (Legacy) Connector The Dynamics 365 (online) Connector The Web Connector, i.e., directly querying the application\u0026rsquo;s Web API endpoint Using the Power Query (M) Builder community tool Each option has its own distinct set of advantages and disadvantages. In this post, we\u0026rsquo;ll evaluate each in a little more detail to hopefully assist you in determining the most suitable one for your particular situation.\nMicrosoft Dataverse Connector Last year, Microsoft made two key announcements relating to how we can query data from Dataverse:\nA SQL endpoint was released into public preview. This allows us not only to use SQL Server Management Studio (SSMS) to connect to our Dataverse environments but also now supports capabilities such as DirectQuery and Query Folding when connecting via this endpoint. In December last year, the SQL endpoint entered general availability, for Power BI scenarios only. In support of this, a brand new connector was released, known as the Microsoft Dataverse Connector. As a consequence of this, we now have the appropriate new connector option available to us within the desktop application:\nIf we select this connector, we then need to specify two options:\nServer URL: The server URL for your Microsoft Dataverse environment. Note that this should exclude the URL scheme (i.e. the https:// bit). For tenants within the United Kingdom, this will typically be in the format \u0026lt;My CRM Instance\u0026gt;.crm11.dynamics.com. Data Connectivity Mode: Similar to when working with any other SQL Server endpoint, we can choose to either fetch and store data locally from Dataverse (Import) or have a persisted connection that will always bring us the latest data (Direct Query). Choose the option that makes sense for your particular scenario. Press OK, sign in if prompted, and then you should see a Navigator window similar to the one below:\nFrom there, you will then be able to retrieve data from all of the tables available within the instance. Any selected table data will then load into the Power Query model after pressing the Load button. You can also click the Edit button to automatically load the Power Query Editor, thereby allowing you to carry out additional transformations to your data.\nUse the Microsoft Dataverse Connector when:\nYou want to align towards where Microsoft is making future investments. You need to leverage Direct Query capability as part of your report. You don\u0026rsquo;t have a requirement to interact with or execute custom Functions/APIs from Power BI. For Import mode reports less than 80MB in size, which don\u0026rsquo;t use pagination or leverage tables using the image data type; see below for further details. Common Data Service (Legacy) Connector The Common Data Service (Legacy) Connector is, as the name implies, the \u0026ldquo;old\u0026rdquo; connector, which only came out of public preview in 2019. Only the good die young\u0026hellip; 😭 The Legacy connector does support the latest version of the application\u0026rsquo;s Web API and has several useful options to help model returned data. However, it will not see any further investments from Microsoft, and its use should be discouraged wherever possible.
The connector can be accessed via the Get Data option on Power BI Desktop, as illustrated below:\nOnce selected, and as indicated below, you must then specify the following options:\nServer URL: The server URL for your Microsoft Dataverse environment. For tenants within the United Kingdom, this will typically be in the format https://\u0026lt;My CRM Instance\u0026gt;.crm11.dynamics.com. Reorder columns: If set to true, the connector will return all table data columns in alphabetical order. Add display column: If set to true, Power BI will include an additional column for specific data types to assist with readability. For example, Choice fields will return the Display Label. I recommend that the Reorder columns and Add display column options are always set to true. After pressing OK and logging in with an Organizational Account with sufficient privileges to access the instance, you will see a list of tables (entities) that you can select:\nThe Entities folder will display a list of all distinct tables from Microsoft Dataverse, formatted using the table logical name. Any selected table data will then load into the Power Query model after pressing the Load button. You can also click the Edit button to automatically load the Power Query Editor, thereby allowing you to carry out additional transformations to your data.\nUse the Common Data Service (Legacy) Connector when:\nMicrosoft specifically recommend using this connector if your dataset is over 80MB in size, if you need paging support or if you are using image data within your tables. Beyond this, I would not recommend you use this. Despite having some handy options to sort columns in alphabetical order and to return choice labels, more than likely, using the legacy connector will store up problems, as you would need to migrate it across to the Microsoft Dataverse connector in the future. So best to avoid this by being proactive up-front.\nDynamics 365 (online) Connector Before Microsoft released the previous connector into general availability, the Dynamics 365 (online) Connector was the only Dataverse connector for Power BI Desktop. It\u0026rsquo;s accessed in much the same way, from within the Online Sources tab on the Get Data dialog window:\nOnce selected, the options available for selection are spread across the Basic and Advanced radio buttons:\nWithin the Basic tab, you must specify a single option – the full Web API URL from the application. This value can be obtained by navigating to the Developer Resources area within the classic interface and locating the Instance Web API URL. For instances located in the United Kingdom, this will typically be in the format of https://\u0026lt;My CRM Instance\u0026gt;.crm11.dynamics.com/api/data/v9.1/. You can also specify different versions of the API to use, choosing from the following options: v8.0 v8.1 v8.2 v9.0 The Advanced tab allows you to specify additional URL query parts as part of the given Web API URL. For example, the screenshot below shows an example of how to use the URL parts to return data from the Accounts entity only: If the Basic options are utilised, then data is returned in the same manner as the Common Data Service Connector. This then allows you to select and preview data from multiple tables before loading it into your model.\nUse the Dynamics 365 (online) Connector when:\nJust like the Common Data Service (Legacy) connector, I would again discourage you from using this connector.
I suspect that, over time, this connector will eventually go the way of the Dodo and that the Microsoft Dataverse option will be the only option for us to use instead.\nWeb The Web connector utilises the Web API OData Feed, similar to the Dynamics 365 (online) Connector, but with the scope to fully leverage custom Web API OData queries. For example, let\u0026rsquo;s assume you have the following Web API URL to return filtered Contact entity data:\nhttps://\u0026lt;My CRM Instance\u0026gt;.crm11.dynamics.com/api/data/v9.1/contacts?$select=emailaddress1,fullname\u0026amp;$filter=contains(firstname, \u0026lsquo;Joe\u0026rsquo;)\u0026amp;$orderby=fullname asc\nWe can enter this URL into the From Web connector dialog box to return the results of that specific query:\nData will then load into the Power Query Editor as a JSON object. You must then click the List hyperlink to expand out and return a list of all records within a tabular format:\nTo understand the types of things that you can do with the Web API, I recommend that you take a look through the Microsoft Docs article that is dedicated to this subject.\nUse the Web Connector when:\nYou have previously authored OData or FetchXML queries and wish to re-use them within Power BI Desktop. You are comfortable working with web API\u0026rsquo;s and the Power Query Editor. Similar to the Dynamics 365 (online) Connector, you should anticipate some work to ensure that, for example, option set values are displayed correctly. You require granular control over the various D365CE Web API options. You need to call custom functions, actions, or Custom API\u0026rsquo;s within your Dataverse environment. Power Query (M) Builder The Power Query (M) Builder application is a community plugin, available as part of the freely distributed XrmToolBox. The XrmToolBox brings together several handy tools for Dynamics 365 Customer Engagement administrators, developers, and customisers. Once you have downloaded the XrmToolBox, the Power Query (M) Builder application can be installed by navigating to the Plugins Store within the application:\nOnce installed, you can then use the tool to:\nSelect the entity data that you wish to query from Power BI. This is achieved by defining the fields that you want to return based on an existing table view or by specifying the list of fields to return within the tool. Bring in any existing FetchXML queries and convert them into Power Query M code. Generate M queries for returning entity data and any related Option Set information. The plugin has an intuitive interface that allows you to build your queries by selecting the entity/fields you would like to work with. In the example below, the Accounts I Follow view has been chosen to generate an M query code snippet:\nCode generated within the tool via the GenerateOData and GenerateOptionSets buttons can then be straightforwardly copied across into Power BI by using the Blank Query data source and using the Advanced Editor option to paste in any relevant code:\nUse the Power Query (M) Builder when:\nYou are already using the XrmToolBox daily and are familiar with its core components. You have existing custom views or FetchXML queries that you want to re-use within Power BI Desktop, and you are looking for an easy way to migrate these across. You have a basic awareness of using the Power Query Editor and are looking for a solution that involves you writing a minimal amount of code. 
I hope this post has been useful in demonstrating the options available to work with your Microsoft Dataverse / Dynamics 365 Customer Engagement data from within Power BI Desktop. Please leave a comment below if you have any questions.\n","date":"2021-09-12T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/evaluating-power-bi-desktop-connector-options-microsoft-dataverse-dynamics-365-customer-engagement/","title":"Evaluating Power BI Desktop Connector Options for Microsoft Dataverse / Dynamics 365 Customer Engagement"},{"content":"Welcome to the third post in my series focused on providing revision notes for the PL-600: Microsoft Power Platform Solution Architect exam. Previously, we looked at the things we need to evaluate about the organisation adopting our Power Platform solution. In today\u0026rsquo;s post, we\u0026rsquo;ll round off our discussion of the first exam area (Perform solution envisioning and requirement analyses) by looking at the final two topics:\nCapture requirements\nrefine high-level requirements identify functional requirements identify non-functional requirements confirm that requirements meet an organization\u0026rsquo;s goals Perform fit/gap analyses\ndetermine the feasibility of meeting specific requirements evaluate Dynamics 365 apps and AppSource options to solve requirements address functional gaps through alternate solutions determine the scope for a solution Requirements form the cornerstone of any IT project and can make or break our implementation, depending on how accurately and well-articulated they are. In today\u0026rsquo;s post, we\u0026rsquo;ll look at the different types of requirements we will typically be faced with. Alongside this, we\u0026rsquo;ll dive a bit deeper on the subject of the Dynamics 365 Customer Engagement applications, AppSource, and how these provide our best mechanism to address gaps between what is available in the platform and what we may need to build to deliver the solution we need.\nAs a reminder, the aim of this post, and the entire series, is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well. And, given the nature of this exam, it\u0026rsquo;s expected that you already have the necessary skills as a Power Platform Developer or Functional Consultant, with the certification to match.\nWhat is a Requirement? A requirement is a basic ask or request that the organisation expresses out of any IT solution we build. As solution architects, we won\u0026rsquo;t typically have much involvement as part of generating requirements; however, we will have an input (and stake) in reviewing and ensuring that requirements meet the four litmus tests outlined below:\nClear: Anything we ask for in life needs to be expressed clearly. If I enter a restaurant, ask for a sandwich and then get annoyed when a Ham sandwich arrives, then it\u0026rsquo;s my fault. Requirements are much the same, and we should push back whenever the organisation comes to us with a poorly expressed or potentially confusing ask. Actionable: A well-defined requirement will clearly outline the steps needed to achieve a result. Feasible: A requirement needs to be achievable, in both practical and technical sense. A requirement to build a rocket in 2 weeks to fly to the moon is very clearly defined but hardly possible or achievable. 
For these situations, we can leverage our experience of previous projects/implementation to highlight requirements that will be challenging or near impossible to implement. Testable: We can only verify that we can meet a requirement by performing the action involved as part of a test - whether this is done manually or automated. By ensuring our requirements meet the four objectives above and, as solution architects, by highlighting this early on, we can avoid potential problems down the road and less anguish for our development team when they are reviewing them for the first time. Challenge the organisation or requirement author as much as possible until you\u0026rsquo;re satisfied that what is being asked for is well understood and accurately captured.\nFunctional vs Technical Requirements As solution architects, we will typically be faced with two kinds of requirements - functional and technical requirements. Functional requirements, as the name implies, will always express a desired function or capability that the solution needs to provide. Here\u0026rsquo;s an example of a functional requirement:\nAs a sales manager, I want to approve all quotations before they are sent to customers, so that I can ensure all pricing details, discounts, and products being sold are correct.\nFor a functional requirement, we will typically inhabit a persona within the organisation. By this, we don\u0026rsquo;t mean that you will become a new initiate of the Phantom Thieves; instead, a persona will always align to a particular role within a team or division. From there, we can then express the desired behaviour that the system needs to support before then describing the value of the solution. Notice that throughout this, we\u0026rsquo;re not making reference to a technology or a Power Platform feature. This is a common pitfall we must avoid when structuring our functional requirements by ensuring we leave them as agnostic as possible. A meaningful functional requirement will always be structured in the manner indicated above before being broken down further once development work begins.\nTechnical, or non-functional requirements, in comparison, will typically relate to a metric or specific feature that the solution must align to as part of its day-to-day usage. For example, a typical technical requirement for any Software as a Service (SaaS) solution is to ensure high uptime (availability) during core business hours. Another could relate to where data is stored, as our organisation may mandate that all data is held within a specific region. In most cases, these types of requirements may already be covered as part of what is built out already in the Power Platform, but there may be situations where we need to configure aspects of our solution (for example, by ensuring we align our Microsoft Dataverse environment to the correct geographic location) or build out a technical requirement entirely from scratch (e.g. based on compliance or regulatory non-functional requirements, we need to create and then secure specific columns in the application using column security profiles). We should always evaluate non-functional requirements closely so that we can adequately gauge any effort we need to direct towards \u0026ldquo;filling the gap\u0026rdquo;\nAligning Requirements to a Measurable Output To help make your requirement more robust and, crucially, testable, we should take steps to attach a measurable output to a requirement. 
Let\u0026rsquo;s return to the example requirement we looked at earlier:\nAs a sales manager, I want to approve all quotations before they are sent out to customers, so that I can ensure all pricing details, discounts, and products being sold are correct.\nOne way we could improve this further is by adding a measurable output to the final portion:\nAs a sales manager, I want to approve all quotations before they are sent out to customers so that I can ensure at least 90% of quotations raised have the correct pricing details, discounts, and products applied when first generated.\nWith the additional, measurable element in our requirement, we can then further refine the steps we need to follow to achieve the requirement. For example, we could start to determine the common factors that cause a quotation to be rejected and build these validation steps into the system before the quotation even appears in front of the sales manager. In addition, we can look to test this particular measure as part of a User Acceptance Testing (UAT) round by ensuring that this threshold is always exceeded. Consider the applicability of a measurable output when evaluating a requirement for the first time and, if prudent, suggest adding one in where there\u0026rsquo;s a good case to do so.\nWhat if we can\u0026rsquo;t meet a requirement? I get it - when the organisation is pressing for a solution to be built and when we\u0026rsquo;re \u0026ldquo;in the trenches\u0026rdquo; with a particular project, trying to meet all the requirements is a noble ambition. But it can often be the downfall of many projects and means we ignore the critical rule we discussed earlier - the feasibility of a requirement. A non-feasible requirement could arise for various reasons, often due to circumstances outside of our control. The solution architect needs to be adept at spotting problematic requirements early on and suggesting mechanisms through which a requirement can be adjusted or dropped entirely. The earlier, the better for this conversation, as we want to avoid sudden blockers emerging in our project due to a lack of foresight. The conversation may be a difficult one for the organisation to accept, so be prepared for pushback and, if at all possible, have a Plan B ready. For situations where a requirement is impossible to achieve, due to the time, effort, and resources involved, there are two avenues a solution architect can stroll down to try and find this alternate approach - one of the Dynamics 365 Customer Engagement applications or AppSource.\nDynamics 365 Customer Engagement Apps Overview A major benefit of using the Power Platform and Microsoft Dataverse, in particular, is that we can install core Dynamics 365 applications on top of these environments and leverage pre-built functionality addressing several common scenarios. This set of applications, commonly referred to as the customer engagement apps, includes the following:\nDynamics 365 Sales: Designed for organisations selling to businesses (B2B) and consumers (B2C), the application provides a range of sales force automation capabilities. There are two versions of Dynamics 365 Sales available - Professional, aimed at smaller deployments, and Enterprise, which includes access to more advanced components, such as Competitors. Dynamics 365 Customer Service: For scenarios where you need to manage cases/incidents for your customer base, this application will meet and exceed your needs.
With a range of features allowing organisations to implement service level agreements, entitlements and basic/complex routing scenarios, there is plenty available here that we can implement quickly and easily. Dynamics 365 Field Service: The Customer Service application is well designed for when you are providing remote support to your customers. However, for situations where you need to deliver on-site services, the capabilities available as part of the Field Service application may be worth consideration. With advanced scheduling capabilities, the ability to process on-site jobs via work orders and a dedicated mobile application for field staff, the solution is ideal if your organisation delivers on-site services frequently. Dynamics 365 Marketing: The Sales application includes some basic marketing features \u0026ldquo;out of the box\u0026rdquo;. But, for situations where you need to manage events, execute complex customer journeys or perform advanced segmentation of your customer base, the Dynamics 365 Marketing application is the best 1st party application to turn to. Dynamics 365 Project Operations: As the successor product to Project Service Automation, Project Operations provides the capabilities for us to manage internal project delivery, from initial enquiry through to execution and final billing. Customers can choose from a range of deployment options for the product, such as the \u0026ldquo;lite\u0026rdquo;, Dataverse-only deployment or the stocked/production-based deployment that integrates alongside Dynamics 365 Finance. A solution architect should have a good grasp of the core capabilities within each of the above applications. In addition, if there is an opportunity to leverage 50% or more of the functionality available natively within these apps, we are expected to then guide the organisation towards adopting these solutions as the first preference over any bespoke development work. The monetary cost may appear to be higher on paper, but the development and ongoing maintenance cost may, in some cases, grossly exceed this. Note that there are other Dynamics 365 applications available, such as Dynamics 365 Finance; these applications don\u0026rsquo;t reside within Microsoft Dataverse and, for the exam, are not a major concern.\nAppSource Overview Microsoft AppSource is the official \u0026ldquo;marketplace\u0026rdquo; for finding 1st and 3rd party add-ons or solutions, targeting not only the Power Platform but also other services such as Microsoft 365. Using AppSource, administrators can browse the online catalog to find applications and consulting services from different Microsoft partners. Solutions on offer will typically have a free trial of some description, and a review/feedback system is available, therefore providing the assurance that you can trial and verify that a solution will meet your requirements before committing financially. Where possible, organisations should try and align towards a preferred solution, if one is available, as this provides additional assurance that Microsoft has vetted both the solution and the partner\u0026rsquo;s expertise in addressing a particular requirement.
AppSource is accessible via any web browser and requires that you sign in using the Microsoft 365 / Azure Active Directory Account where your Power Platform environment(s) exist.\nDemo: Working with Dynamics 365 Applications and AppSource To better understand how to work with AppSource and the Dynamics 365 Applications, take a look at the video below:\nBespoke Development: When and Why? The key concern of a solution architect during a Power Platform project is to reduce the need for bespoke development and, as much as possible, ensure we align a solution towards \u0026ldquo;out of the box\u0026rdquo; functionality. There are a few important reasons why we do this:\nComponents that have been bespoke developed, particularly involving programming languages such as C# / .NET, can become challenging and expensive to maintain in the long term. Typically, the individuals who build out bespoke components will have gone off into the sunset to another company or project, meaning we don\u0026rsquo;t always have the luxury of picking up the phone and parachuting them in when we encounter a problem. Given that Microsoft mandates we take two major releases every year, our testing cycles for each release become more difficult for us to complete, as we have to perform more in-depth testing each time. Our problems are then compounded further if our bespoke development relies on application features that have been altered or removed entirely. I could go on with more reasons, so hopefully, you get the picture. However, there will be occasions where bespoke development is needed. And this is perfectly fine to consider for your project, provided you have exhausted all other avenues; namely, the following list that we talked about in a previous post:\nDynamics 365 Customer Engagement Apps: As we should already know, all of these applications (also called the \u0026ldquo;customer data platform\u0026rdquo; apps) are built upon and leverage the Power Platform extensively. As part of this, we have a range of applications that are designed to meet a variety of standard business requirements, ranging from customer service to managing complex projects. All of these applications rely on the Microsoft Dataverse as the backend data source, meaning it\u0026rsquo;s incredibly straightforward to factor them in as part of your overall solution. We will look at all of these apps in detail later on in the series. AppSource: Within the Microsoft and partner eco-system, there are various additional solutions for organisations to explore and leverage alongside their Power Platform solution. AppSource will be our primary destination to find and install these into our environment(s). The added benefit is that all of these solutions will typically contain a free trial of some description, thereby allowing us to validate their suitability for our project fully. We\u0026rsquo;ll dive into AppSource in further detail later on in the series. Third-Party Solutions: The definition of this would appear to be somewhat unclear, but I would understand this to be any solution available that may sit external to the Power Platform or one that is not tied to any financial cost to obtain. For example, it could be that you need to include a non-Microsoft application or leverage a Power Apps Component Framework (PCF) control available as part of the PCF Gallery to meet your overall requirements. The exact solutions used will be particular to each project. 
ISV Solutions: Typically, most committed Independent Software Vendors (ISVs) will have their appropriate offerings available on AppSource. But there may be occasions where this is not practical due to their solution\u0026rsquo;s complexity, size, or deployment nature. It\u0026rsquo;s impossible to cover or, indeed, identify (unless any ISV wants to pay me 😁) where to find and locate these. Still, there will often be an occasion to engage one or several ISVs to leverage a solution they have built. The typical evaluation trajectory as Solution Architect will follow the list above (i.e. start with Dynamics 365 first and ISV solutions last). And, provided that we have gone through this and eliminated all options, we can then advise building a bespoke solution to meet our requirements.\nA solution architect\u0026rsquo;s role will be to identify when a requirement needs bespoke development and, as part of this, to articulate why we can\u0026rsquo;t meet the requirement via one of the other avenues listed above. From there, we should drive efforts towards considering \u0026ldquo;low-code\u0026rdquo; solutions, involving components such as Power Automate flows, rather than \u0026ldquo;pro dev\u0026rdquo; solutions, such as ones involving C# plug-ins.\nEvaluating the Scope of our Solution Often, our Power Platform solution will have a major impact on an organisation and affect many departments, divisions, or business areas. Alongside this, there will then be the need to consider other, existing IT components to bring into the equation as well - either as components that we will replace or that will happily co-exist alongside our Power Platform solution once it goes live. This \u0026ldquo;scope\u0026rdquo; can also have a significant influence on the overall architecture of our solution. Let\u0026rsquo;s consider the following scenarios:\nThe organisation wishes to surface data from their on-premises SQL Server database into a canvas Power App. Moving the database to the cloud is not possible. In this situation, involvement of the on-premises data gateway becomes necessary to address the requirement. We then need to consider factors, such as the deployment location, resiliency, and potential network configurations for the gateway to be deployed successfully. Our organisation, which operates globally, has specific residency requirements for all data used in our solution. Data must reside strictly within the distinct geographies that the organisation and the users operating our solution reside in. Based on this requirement, we would need to carefully review the list of available regions for our Power Platform environments and set up the required number, based on the above (the short PowerShell sketch after these scenarios shows one way of checking which regions are on offer). This can have far-reaching impacts, ranging from our data migration to the Application Lifecycle Management (ALM) approaches we need to implement. Our solution needs to integrate alongside several bespoke APIs built by the organisation, using SOAP and other \u0026ldquo;legacy\u0026rdquo; technologies. We need to ensure that these endpoints can be leveraged straightforwardly from all core Power Platform components. From a technical standpoint, several custom connectors will be necessary to meet this requirement. When evaluating this from the architecture side of things, we then need to consider documenting the various endpoints, their purpose, the data that flows between them and the potential risks involved in sharing data between the different endpoints. 
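To make the data residency scenario above a bit more tangible, here\u0026rsquo;s a rough sketch of how you could check which regions are available and see where your existing environments currently live. I\u0026rsquo;m assuming the use of the Microsoft.PowerApps.Administration.PowerShell module here purely for illustration - the scenarios above don\u0026rsquo;t prescribe any particular tooling, so treat this as a starting point rather than gospel:\n#Install the admin module and sign in with an account that holds an appropriate admin role\nInstall-Module Microsoft.PowerApps.Administration.PowerShell -Scope CurrentUser\nAdd-PowerAppsAccount\n#List the regions (locations) that new Power Platform environments can be created in\nGet-AdminPowerAppEnvironmentLocations\n#Cross-check where your existing environments actually reside - property names can vary slightly between module versions, so pipe to Format-List if in doubt\nGet-AdminPowerAppEnvironment | Select-Object DisplayName, Location\nFrom there, you can compare the output against the geographies the organisation needs to operate in and plan out the required number of environments (and the knock-on ALM impacts) accordingly.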
Our base requirements, if well organised, will allow the solution architect to get a good grasp of the overall scope of our solution. A smaller \u0026ldquo;playing field\u0026rdquo; for our project will, obviously, make our lives easier\u0026hellip;with the opposite being as we\u0026rsquo;d expect. 😉 As solution architects, having a keen understanding of this and, most crucially, of the potential challenges that may emerge will be vital.\nHopefully, after reading today\u0026rsquo;s and the two previous posts so far in the series, you have a good grasp of the vital early preparation activities a solution architect will typically need to be involved in. Tune in next time when we\u0026rsquo;ll move on to see how we can begin designing a Power Platform solution and dive into some detailed topics such as solution topology approaches and how to build an ALM process using Azure DevOps.\n","date":"2021-09-05T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-600-revision-notes-capturing-requirements-and-performing-fit-gap-analysis-for-power-platform/","title":"Exam PL-600 Revision Notes: Capturing Requirements and Performing a Fit/Gap Analysis for the Power Platform"},{"content":"If you\u0026rsquo;re a regular follower of the blog, you may be wondering why things are looking a little different. That\u0026rsquo;s because I\u0026rsquo;ve completely replaced my WordPress site with a brand new version, using the Hugo framework. Read on to find out more.\nMy Issues with WordPress I\u0026rsquo;ve been seriously struggling with WordPress over the past two years. I did a bit of a brand overhaul on the site, using a modified version of the Divi Theme. Nothing improved (in fact, I think it worsened the problem). In addition, I chose to move my hosting provider across to Microsoft Azure. The logic was pretty simple - as a Microsoft technology enthusiast, I should be \u0026ldquo;eating my dog food\u0026rdquo; and using the best cloud provider out there, right? Well, as it turns out (either due to my ineptness or issues with the platform itself), Azure App Service with WordPress and MySQL Managed Databases doesn\u0026rsquo;t perform well. And, to add insult to injury, they can be quite a money pit, too\u0026hellip; 😲 Other issues I\u0026rsquo;ve had include:\nA plug-in that I used for embedding code snippets in my blog posts is no longer supported/maintained. Consequently, issues have started to occur with it from website users. It\u0026rsquo;s beyond my capability to try and resolve it. The old site was just too damn slow. Typically, I had to wait around 20-30 seconds or more for pages to load in the backend CMS system. In some cases, posts were also taking a similar amount of time to load. Bad experience for all concerned and probably a significant issue from an SEO standpoint. Hackers and unscrupulous individuals attempting to poke holes into the website, spamming it and (presumably) attempting to crash it in the process. Hardening the security on the site via plug-ins such as WordFence seemed to help, but it\u0026rsquo;s still going on behind the scenes and negatively impacting the site. Frequent timeouts and periods where the website will stop working. The only way to resolve this is to restart the App Service on the Azure Portal. Rather annoying and, again, beyond my technical capability to investigate and resolve. Comments and discussions regarding my posts seemed, in most cases, to be virtually non-existent. 
The requirement to set up an account and having moderated comments seemed to be the cause of this. So, in short, time for a change. And a drastic one at that.\nOptions I gave it some thought, and it came down to one of four options:\nKeep as is, try to fix the issues and look at ways to reduce the monthly hosting cost. Migrate the entire WordPress site to a different hosting provider. Migrate across to a different type of blogging site while still retaining hosting on Azure. Start again with a brand new website - preferably hosted on Azure. Option 1 I deemed impossible due to time, knowledge and from what I gathered from online research. Option 2 was compelling, but I\u0026rsquo;m too idealistically attached to Azure, I think. And, finally, option 4 would have been most straightforward, but having to lose all of the content that I\u0026rsquo;ve built up over the years seemed like such a massive waste.\nWhat I Did So as you\u0026rsquo;d expect, I went with option 3 and migrated the website across to Hugo. The benefits of this were compelling:\nI could continue to use Azure to host my website. In this case, I can now use Azure Static Sites, which have a free hosting tier. 😁 No need for a MySQL database or any other kind of backend infrastructure to maintain. I could start to use GitHub and GitHub Actions to store my site content and deploy out the changes as they get applied to the repo. This was super easy to set up. I can author all pages and blog posts using Markdown. Therefore, writing rich text content and including code snippets becomes a cakewalk. The new site is ridiculously fast. Seriously. Page loads take less than a second on average to complete, so no more long waits! 🙌 The template I\u0026rsquo;m using is clean and well optimised, regardless of the device you are using. I can build a local copy of my website in a matter of seconds, preview how changes look and get these pushed out in minutes. Very cool. I could go on. But I\u0026rsquo;m seriously impressed, excited and invigorated by what I\u0026rsquo;ve worked with so far, and I think I\u0026rsquo;ve now got a solid base to start developing the site further in future.\nExcuse My Mess Now, the new site is not perfect. And I have rushed to migrate it across as quickly as possible. Here\u0026rsquo;s a couple of things to be aware of:\nThe new site has an RSS feed, but note that the URL is slightly different to WordPress. You may need to update your feeds accordingly if you\u0026rsquo;ve been using them. There is no option to subscribe to new blog posts via emails currently. I\u0026rsquo;m looking at ways in which I can build this out. I\u0026rsquo;m still in the process of working through over 290+ previous blog posts and fixing issues such as incorrect URL paths, code snippets not rendering correctly, and wrong categories/tags/images showing on posts. I\u0026rsquo;ll get these fixed over the next week or so but, until then, some posts older from 2020 backwards may have some issues. Comments made on posts from the WordPress site have not been ported across, and I don\u0026rsquo;t think it\u0026rsquo;ll be possible to do this easily. There are around 256 comments from the old site, so I\u0026rsquo;ll work through them and update each post with anything pertinent. 
There are some general branding/feature tweaks I\u0026rsquo;d like to perform on the site, which will surface over time unless I get distracted\u0026hellip; 😅 If you spot anything else or have any suggestions, please let me know using the fancy new Disqus commenting feature below. But whether you\u0026rsquo;re a longstanding CRM Chapper (if that\u0026rsquo;s a word?) or just checking the site out for the first time, I hope you find the experience quick and easy to use!\nAcknowledgements I\u0026rsquo;m hugely grateful to Kendra Little\u0026rsquo;s excellent blog post, where she talks through doing the same thing as I\u0026rsquo;ve done for her own website. Her steps allowed me to replicate the process and work through issues I faced during my migration. If you\u0026rsquo;re having similar WordPress pains at the moment, take a read and see if you want to migrate as well. 😉 Some further shoutouts to specific tools mentioned in her post:\nlonekorean\u0026rsquo;s wordpress-export-to-markdown Tool: This tool worked as Kendra described and helped to streamline the migration. CaiJimmy\u0026rsquo;s hugo-theme-stack Theme: This simple Theme works well and is optimised for the type of content I want to produce. Thanks, Jimmy! ","date":"2021-09-02T00:00:00Z","image":"/images/Stock_Goodbye.jpeg","permalink":"/ive-thrown-away-my-wordpress-site-heres-why/","title":"I've Thrown Away My WordPress Site. Here's Why."},{"content":"Access Team Templates is a feature that has been available within the Microsoft Dataverse / the Dynamics 365 Customer Engagement applications for many years now, but one which I don\u0026rsquo;t think is actively used (at least, based on the projects I\u0026rsquo;ve been involved in). For situations where you have more unusual access requirements for a particular table type, they provide a flexible mechanism for allowing a specific group of users to have a set of access permissions targeting one row but a completely different set for another. This is achieved by enrolling each user into a unique team created for each row, which grants the permission set defined as part of the template. Access Team Templates are a powerful feature but do come with some baggage and gotchas. Depending on the number of rows in your system, you could end up with many thousands of different Access Teams set up, all of which require a degree of administration. In addition, each table type is limited to a maximum of two access team templates. It\u0026rsquo;s also impossible to include templates as part of a solution or migrate them easily using methods such as the Configuration Migration tool. When you\u0026rsquo;re considering using them, keep these in mind to avoid any potential annoyance further down the road. 😉\nAs part of a recent project, we were actively using Access Team Templates. One of the above gotchas hit us quickly: the inability to migrate Access Team Templates across environments. Most crucially, we wanted to retain the Globally Unique Identifier (GUID) value to support some integrations that we had built out. Due to the limitations outlined above, we needed an alternate solution to straightforwardly import a single Access Team Template within an Azure Pipeline deployment step. The solution we opted for was a PowerShell script, leveraging the very excellent Microsoft.Xrm.Data.PowerShell module. Here\u0026rsquo;s the script in full:\nparam(\n#objectTypeCode: Unique Code that identifies the table in the environment for the Access Team Template. Always potentially different. 
[Parameter(Mandatory=$true)]\n[int]$objectTypeCode,\n#atName: Name of the Access Team Template\n[Parameter(Mandatory=$true)]\n[String]$atName,\n#accessRights: Number which represents the access rights defined for the template. Refer to this article for details on how to construct: https://docs.microsoft.com/en-us/dynamics365/customer-engagement/web-api/accessrights?view=dynamics-ce-odata-9\n[Parameter(Mandatory=$true)]\n[int]$accessRights,\n#d365URL: URL of the environment to connect to\n[Parameter(Mandatory=$true)]\n[String]$d365URL,\n#clientID: AAD Client ID for the Application User linked to this environment\n[Parameter(Mandatory=$true)]\n[String]$clientID,\n#clientSecret: AAD Client Secret for the Application User linked to this environment\n[Parameter(Mandatory=$true)]\n[String]$clientSecret\n)\n#Install dependencies\nInstall-Module Microsoft.Xrm.Data.PowerShell -Scope CurrentUser -Force\n#Connect to the D365 environment\n$conn = Connect-CrmOnline -ServerUrl $d365URL -ClientSecret $clientSecret -OAuthClientId $clientID -OAuthRedirectUri \u0026#34;http://localhost\u0026#34;\n#We first attempt to retrieve the row if it already exists, and update it accordingly; if this errors, then the row does not exist, so we need to create it instead\nWrite-Host \u0026#34;Processing Access Team Template for $atName...\u0026#34;\ntry {\n$atTemplate = Get-CrmRecord -conn $conn -EntityLogicalName teamtemplate -Id \u0026#34;44396647-CEDF-EB11-BACB-000D3A5810F2\u0026#34; -Fields teamtemplateid,teamtemplatename,objecttypecode,defaultaccessrightsmask,issystem\n$atTemplateId = $atTemplate.teamtemplateid\nWrite-Host \u0026#34;Got existing Access Team Template row with ID $atTemplateId!\u0026#34;\n$atTemplate.teamtemplatename = $atName\n$atTemplate.objecttypecode = $objectTypeCode\n$atTemplate.defaultaccessrightsmask = $accessRights\n$atTemplate.issystem = 0\nSet-CrmRecord -conn $conn -CrmRecord $atTemplate\nWrite-Host \u0026#34;Successfully updated Access Team Template row with ID $atTemplateId!\u0026#34;\n}\ncatch [System.Management.Automation.RuntimeException] {\nWrite-Host \u0026#34;Access Team Template row does not exist, creating...\u0026#34;\n$atTemplateId = New-CrmRecord -conn $conn -EntityLogicalName teamtemplate `\n-Fields @{\u0026#34;teamtemplateid\u0026#34;=[guid]\u0026#34;{44396647-CEDF-EB11-BACB-000D3A5810F2}\u0026#34;;\u0026#34;teamtemplatename\u0026#34;=$atName;\u0026#34;objecttypecode\u0026#34;=$objectTypeCode;\u0026#34;defaultaccessrightsmask\u0026#34;=$accessRights;\u0026#34;issystem\u0026#34;=0}\nWrite-Host \u0026#34;Successfully created new Access Team Template row with ID $atTemplateId\u0026#34;\n}\nWrite-Host \u0026#34;Script execution finished!\u0026#34;\nNow, a couple of things to note with this:\nTo run this script, you will need to have an Application User set up with the appropriate Client ID / Secret. I recommend that the secret value be saved as a secure parameter when using it from your pipeline. The accessRights parameter value needs to be a sum of all the potential access rights you wish to grant, based on the numbers shown in this article. For example, if you wanted to assign Read, Append and Append To permissions, the number you would supply would be 21 (1 + 4 + 16 = 21) - the short sketch after these notes shows a quick way of working this number out. You may need to replace the ID value (in this scenario, 44396647-CEDF-EB11-BACB-000D3A5810F2) to match the ID of any existing template you\u0026rsquo;ve created in your Dev system. The objectTypeCode is the unique integer value that Dataverse assigns to each table in the environment. For out of the box tables, such as Account, Contact etc., the value will always be the same, regardless of the environment. However, custom tables are not subject to this rule, so take care and use tools such as the Metadata Browser to verify before deploying. 
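On that accessRights point, a quick way to avoid the mental arithmetic is to let PowerShell add the values up for you. The numbers below are the ones from the AccessRights article linked in the notes above, but do double-check them against that page before relying on this - it\u0026rsquo;s a rough helper rather than anything official:\n#Access right values, as per the AccessRights enumeration documented in the article referenced above - verify before use\n$accessRightsValues = @{\nRead = 1\nWrite = 2\nAppend = 4\nAppendTo = 16\nCreate = 32\nDelete = 65536\nShare = 262144\nAssign = 524288\n}\n#For example, Read + Append + Append To - this returns 21, the value to pass to the -accessRights parameter\n$mask = $accessRightsValues.Read + $accessRightsValues.Append + $accessRightsValues.AppendTo\nWrite-Host \u0026#34;Access rights mask: $mask\u0026#34;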
Other than that, the script works pretty reliably and, most importantly, can be called as part of any PowerShell task during your build or deployment steps.\nMany features available as part of Microsoft Dataverse today have been around since the very early days of Microsoft Dynamics CRM. Access Team Templates is one of these features and one that we shouldn\u0026rsquo;t be too hasty to discard entirely as being unsuitable or \u0026ldquo;too legacy\u0026rdquo; to consider using today. It is very much the right solution to consider for specific situations and one that Microsoft will actively encourage you to consider as well. It\u0026rsquo;s just a shame that some of the tooling available today doesn\u0026rsquo;t easily support the ability to move Access Team Templates between environments. If you get a spare second, I\u0026rsquo;d encourage you to upvote this Idea on the Microsoft Dynamics 365 site to increase the likelihood that the product team will address this gap for us.\n","date":"2021-08-29T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/migrating-microsoft-dataverse-access-team-templates-via-azure-devops/","title":"Migrating Microsoft Dataverse Access Team Templates via Azure DevOps"},{"content":"There will be occasions when, as you build out client-side scripts involving a model-driven Power App, you will need to execute a specific bit of logic against every control on a particular form. For example, you might want to enable, disable or toggle the visibility of one or many different controls. Microsoft provides us with a getControl method, which performs exactly as its name implies and, based on the argument value we provide, will allow us to return a specific control for us to work with. I\u0026rsquo;ve used this method a lot over the years, and they say you can\u0026rsquo;t teach an old dog new tricks\u0026hellip;but I was surprised nonetheless to discover that we can also use this method to return every control present on a form instead. Let\u0026rsquo;s look at an example to see how this behaviour works and address a specific requirement in the process. To begin with, assume we have the following JavaScript function:\nif (typeof (JJG) === \u0026#39;undefined\u0026#39;) {var JJG = {__namespace: true};}\nJJG.BlogSample = {\ndisableFormControlsIfNotOwner: function (executionContext) {\n\u0026#39;use strict\u0026#39;;\nvar formContext = executionContext.getFormContext(); //Get form context\nvar owner = formContext.getAttribute(\u0026#39;ownerid\u0026#39;).getValue();\nvar currentUser = Xrm.Utility.getGlobalContext().userSettings.userId;\nvar formControls = formContext.getControl(); //No argument returns all controls on the form\nif (owner[0].id !== currentUser) {\nformControls.forEach(control =\u0026gt; {\ncontrol.setDisabled(true);\n});\n}\nelse {\nformControls.forEach(control =\u0026gt; {\ncontrol.setDisabled(false);\n});\n}\n}\n}\nThe code is straightforward, but to clearly explain what it\u0026rsquo;s doing: we want to disable all controls on the form if the user viewing it is not the owner of the underlying Dataverse row. 
Rather than having to call the same setDisabled method manually, with the appropriate control names specified, we can instead supply no argument to the getControl method to return an array object that looks something like this:\nFrom there, and as the function demonstrates, we can iterate through the list of controls returned and execute the specific action we want; in this case, disable each one and make read-only. We can then verify that things work as expected by calling this function as part of the OnLoad event on an Account form:\nThe fact that we can use the getControl method in this manner is handy, but I would caution against using it in the way described in your post too readily. For example, suppose you know that your logic only needs to target a handful of specific controls on the form. In that case, it will be far better (and faster, from an execution standpoint) to grab these specifically instead. Evaluate the needs of your scenario and, where possible, satisfy your requirement by using the getControl method with an argument provided. Notwithstanding this qualification, there will be scenarios where leveraging the method without an argument will be necessary to ensure you can most effectively apply your required logic.\n","date":"2021-08-22T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/returning-all-form-controls-via-javascript-model-driven-power-app/","title":"Returning All Form Controls via JavaScript (Model-Driven Power App)"},{"content":"Welcome to my second post in my series focused on providing a set of revision notes for the PL-600: Microsoft Power Platform Solution Architect exam. Last time, we kicked off the series by looking at some of the concepts and considerations around planning for our Power Platform solution. We are now going to continue our focus on the first area of the exam, Perform solution envisioning and requirement analyses, by looking at the following two topics:\nIdentify organization information and metrics\nidentify desired high-level organizational business processes identify business process automation opportunities assess an organization\u0026rsquo;s risk factors review key success criteria Identify existing solutions and systems\nevaluate an organization\u0026rsquo;s enterprise architecture identify data sources needed for a solution define use cases and quality standards for existing data identify and document an organization\u0026rsquo;s business processes Before we start a Power Platform or any IT project, it will be necessary to understand many factors relating to the organisation, which will have a significant impact on the approach, timescales, and shape of our implementation. A sufficient investment into this will pay dividends in the long term, ensure we can deploy our solution successfully and deliver value in the years ahead. Let\u0026rsquo;s dive into these areas to guide you on how we can best succeed in this endeavour and the types of things we need to understand to do well in the exam itself.\nThe aim of this post, and the entire series, is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well in this exam. 
And, given the nature of this exam, it\u0026rsquo;s expected that you already have the necessary skills as a Power Platform Developer or Functional Consultant, with the certification to match.\nIdentifying and Capturing Business Processes Organisations run on repeatable processes. It\u0026rsquo;s how we ensure things run properly and that our customers are provided with a consistent experience each time. A solution architect will be expected to understand the business processes we want to map across into the Power Platform. For example, we could have a manual data entry process involving a paper-based system that we wish to convert into a model-driven Power App / Dataverse solution. But, to do this, we need to have a good grasp of how this process works. So what\u0026rsquo;s the best way of addressing this knowledge gap? One of the first things we should do is see the process in action. We need to identify individuals in the organisation who know this process like the back of their hands and, consequently, carry it out very frequently. These are the most valuable people in our project; they bring an incredible depth of knowledge and experience relating to their chosen area. Typically, as well, they will have a substantial personal investment in ensuring it is done correctly. Work collaboratively with these people, ask as many questions as possible (remember, there\u0026rsquo;s no such thing as a stupid question) and start to demonstrate the benefits that the new Power Platform solution will bring into the equation. The level of their interest and engagement towards your efforts will make or break your project, so you must get them on your side early on.\nOnce we have a good grasp of how the process works, we need to document it somehow. Arguably (and as you might expect, given we\u0026rsquo;re all about Microsoft here on the blog 😉), our most effective tool to capture down our business processes is Microsoft Visio. Within this, we can typically use a Basic Flowchart or a Cross-Functional Flowchart to map out each stage of a process, end-to-end. The latter is beneficial if our process covers multiple areas of the organisation, as we can very easily incorporate new swimlanes to indicate other teams, departments or divisions within our organisation. Depending on the complexity of the overall process, it may be prudent to create several different types of documents with an overarching process diagram that provides clear links to the lower-level processes. The (very) basic example below gives you a flavour of how this may look:\nThe Lead Generation Process and Sales Approval Process steps are sub-processes, which we can tell by their appearance. Therefore, we would want to have a separate process document that provides the lower-level detail on what occurs at these stages. What\u0026rsquo;s also valuable about our flowcharts is that we can indicate steps that result in a specific output occurring. For example, in this diagram, we can demonstrate when a physical document is generated via the shape for the Generate Sales Order step. As solution architects, we should understand the organisation we\u0026rsquo;re working with in depth. On occasion, we\u0026rsquo;ll have some input when compiling these types of diagrams - either for an existing process we plan to map out into the Power Platform or for the new envisioned process after we deploy our solution. 
Therefore, familiarity with all of this and the various shapes available as part of our flow charts will be essential.\nThe Importance of Business Automation As stated already, the majority of processes within an organisation will be repeatable. As such, these will be ripe areas of attention from an automation standpoint. Focusing our attention here can drive numerous benefits for the organisation, including:\nImproved Efficiency: An automated process will typically become faster to execute, thereby increasing our agility as an organisation. Time or Cost Saving: Manual process may require hours of actual human time to complete each week. Through automation, we can remove burdens on our time and (hopefully) free up our colleagues to do something more useful instead. We can also reduce costs due to the decreased risk of errors automation brings into the equation. With the best will in the world, we are all human and make mistaeks on occasion. 😇 These can sometimes cause a high financial or reputational cost if they occur. Greater Assurance: An automated process can assure our stakeholders in the organisation, as they then know it will always occur and will not be subject to unexpected factors impacting it, such as a colleague\u0026rsquo;s illness. Remember, though, that it will be impossible to automate an entire process fully; indeed, aiming for this and putting blind faith in the machine to do everything could be somewhat foolish. Because you never know when something like this may happen\u0026hellip;\nTherefore, we should ensure appropriate human touchpoints are included, where sensible and relevant, and accept that our automated processes may require a degree of ongoing observation once implemented.\nCommon Risk Factors for Power Platform / IT Projects All projects, and especially those in the IT world, involve a degree of risk. And, contrary to one organisation I once dealt with who stated that all their IT projects must have \u0026ldquo;zero risks\u0026rdquo;, we must accept and account for the potential effect of a risk occurring during any stage of our IT project. Risks can take many different forms and can often manifest during the most unlikely timeframes within a project. Some examples include:\nCh-ch-ch-ch-changes: We change our minds all the time. Instead of being good and having a salad for tea, we ring up the takeaway and order a huge pizza instead 😁. Organisations are much the same and can often be borderline maddening to work alongside when unexpected changes land in our project, which may scupper our plans to go live with our solution. The project team should have documented and communicated change control procedures to help mitigate against this. We can also lessen the impact of a potential change if you work to Agile principles and reduce our horizon for delivering our product accordingly. User Adoption: Our solution will only be effective if it becomes woven into the organisation\u0026rsquo;s fabric, and this is often the most overlooked and challenging element of any project. Collaboration, involvement and communication are essential with our stakeholders to get this right. Communication: The most high-performing project teams are the ones where communication flows freely. If contact is lacking, restrained or hampered due to conflicts on the team, our project will invariably suffer. Budget: The party can keep rolling until the check lands. Projects are much the same, and we will often have a fixed budget for our project that becomes impossible for us to exceed. 
Compromise and, where achievable and sensible, shortcuts may need to result to ensure we can still deliver what we hoped from the start. Based on my own experience, these give you a flavour of the most common type of risks that can surface through the project. Your mileage may vary, and, therefore, you must look to document these early on. If your project is working towards PRINCE2 principles, you could put together a risk register to provide a clear history and status of each identified risk. Keep in mind as well (and this is particularly relevant for PRINCE2 - maybe I should blog on this too 🤪) that risks could be positive or negative in terms of their effect. Some risks we may want to allow to continue if the perceived benefit outweighs the cost. A solution architect\u0026rsquo;s role is to leverage their domain expertise in articulating or identifying risks as they emerge.\nHow to Measure Success? We can measure success in multiple ways, and typically, this will differ based on the organisation we are working with. It may be that just going live with our solution is considered our single and only success criteria. I would, respectfully, suggest that this takes a very short-sighted view. Ideally, we should attempt to identify and implement several more success criteria that are ideally measurable to some degree. For example, we could have a success criterion based on the volume of feedback we receive or the number of high-priority \u0026ldquo;snagging\u0026rdquo; items received during our go-live period (where less is considered better). Ideally, the entire project team should sit down and commit to at least half a dozen methods to determine our project\u0026rsquo;s success following the go-live. The solution architect would play a role here in encouraging this to occur and suggesting relevant criteria based on what\u0026rsquo;s happened during previous projects.\nEvaluating the Landscape: How Does Your Power Platform Solution Fit In? Depending on the size, age and industry type of the organisation we are working with, our Power Platform project could be classed as either a \u0026ldquo;greenfield\u0026rdquo; or a \u0026ldquo;brownfield\u0026rdquo; project. These terms are borrowed from construction and indicate the state of the environment we plan to work within. Greenfield projects will have little (or no) existing constraints or environments to contend with. In contrast, our brownfield projects will need to factor in and, very often, co-mingle alongside an existing solution built out. As solution architects, we\u0026rsquo;ll need to do the appropriate analysis to determine what type of environment we are moving into and provide a definitive answer to the question above at the earliest possible opportunity. As part of this, having a good general awareness of Microsoft 365 or Microsoft Azure will be helpful and, even on occasion, older legacy environments involving Windows Server and all manner of tech you secretly prayed you\u0026rsquo;d never see the sight of again.\nData Sources, the Power Platform and Opportunity Leveraging Given that, within the Power Platform, we will be building a business application solution of some description, understanding and, where possible, leveraging some of the various data connectivity options available to us will be essential. 
Broadly speaking, solution architects will need to consider and account for the following data sources in projects:\nMicrosoft Dataverse: For situations where the organisation has no formal database or system in place already, Dataverse becomes our first preference option to consider. This will also be true if we anticipate consuming one or several different Dynamics 365 Customer Engagement applications, which may already exist. Flat-File: This is not a \u0026ldquo;source\u0026rdquo; in the sense that we would connect a Power App up to an Excel Spreadsheet (try it if you dare\u0026hellip; 😨), but we will need to consider the amount of raw data that we potentially need to bring into our Power Platform solution and, as part of this, how we can use them alongside some of the other solutions listed here or how we can incorporate them as part of tools such as dataflows. Relational Data Sources: These could be anything from SQL Server to more complex sources such as Azure Synapse Analytics (AKA Azure SQL Data Warehouse). Typically, data stored within these environments will be well-formed and, therefore, require little additional intervention on our part from a modelling standpoint. Third-Party Systems: Often, we will not be in a perfect, 100% Microsoft world during our Power Platform projects, meaning we have to move out of our comfort zone a little bit. 😀 The precise mechanisms for incorporating these data sources will vary from project to project. Still, our typical trajectory from a consideration standpoint would be to leverage an existing connector first. If this is lacking, consider instead building a custom connector of some kind that will allow us to interface with our chosen third-party system. While analysing the organisation\u0026rsquo;s current environment, the solution architect should carefully observe and identify opportunities to leverage a suitable connector, which could satisfy an integration challenge or speed the progress of our project by allowing us to leverage investments made into existing technology. Alongside this, tools such as the on-premises gateway will be invaluable tools in our arsenal, and we should exploit opportunities to actively put these kinds of solutions forward.\nDetail is essential, particularly at the early stages of a Power Platform project. Therefore, prompt and due attention to everything we\u0026rsquo;ve discussed in today\u0026rsquo;s post will be essential if you plan to get to the finish line successfully. Next time, we\u0026rsquo;ll evaluate how we can capture requirements and perform a proper fit/gap analysis for our project.\n","date":"2021-08-15T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-600-revision-notes-evaluating-an-organisation-metrics-and-existing-systems/","title":"Exam PL-600 Revision Notes: Evaluating an Organisation, Metrics and Existing Systems"},{"content":"Welcome to the first post in my new PL-600 exam blog and video series! I had some great feedback from the recent series I did targeting the PL-400 Power Platform Developer exam, with lots of people asking me which exam I would do next. So I thought I\u0026rsquo;d canvas the community to see which one they were most keen for me to look at next:\nThanks to everyone who reached out to me about my recent blog/video series on the PL-400 #PowerPlatform exam. It was such a blast that I\u0026#39;m already planning the next series! 
Let me know what YOU think should be covered by completing the form below:https://t.co/lgOpc2khFI pic.twitter.com/mVGnuWivxJ\n\u0026mdash; Joe Griffin | #ProCodeNoCodeUnite (@joejgriffin) July 6, 2021 The results of this were overwhelmingly decisive\u0026hellip;\nSo let\u0026rsquo;s kick off then by looking at Exam PL-600: Microsoft Power Platform Solution Architect. Passing this exam is a prerequisite to obtaining the Microsoft Certified: Power Platform Solution Architect Expert certification. PL-600 is the highest-level exam and certification that we can earn today within the Business Applications space. Microsoft expects candidates to demonstrate an excellent and broad knowledge of implementing an effective Microsoft Business Applications solution. In addition, candidates are expected to know and have experience leading adoption in the capacity of a solutions architect. We\u0026rsquo;ll dive into what this means as we get into the series proper, but, in short, I cannot understate the level of expertise and leadership required to adopt this role successfully. This series will aim to cover the essential topics you need to learn, and, where appropriate, I\u0026rsquo;ll provide some video-based content to help explain subjects and offer an additional revision tool you can refer back to.\nAs with all other exams, we need to evaluate a list of Skills Measured and demonstrate sufficient competency within these areas during the exam itself. The first area is titled Perform solution envisioning and requirement analyses and has a 35-40% weighting; therefore, we should expect to see plenty of questions come up that assess us in this area. The focus for today\u0026rsquo;s post will be to look at the first section within this group, Initiate solution planning, which covers the following topics:\nInitiate solution planning\nevaluate business requirements identify Microsoft Power Platform solution components identify other components including existing apps, AppSource apps, third-party components, and components from independent software vendors (ISV) identify and estimate migration effort Solution architects will play a leading role in planning out any adoption of the Power Platform within an organisation, and all of these are important considerations that we have to make. As part of this post, we\u0026rsquo;ll provide details on the concepts you\u0026rsquo;ll need to grasp to ensure you are successful in this area and prep you accordingly for the exam.\nOne final thing before we dive in\u0026hellip;the aim with this post, and the entire series, is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well in this exam. And, given the nature of this exam, it\u0026rsquo;s expected that you already have the necessary skills as a Power Platform Developer or Functional Consultant, with the certification to match.\nEvaluating Power Platform Components If you\u0026rsquo;re reading this and planning to sit the exam, there is a general expectation that you already are very familiar with the Power Platform and what it can do. But it\u0026rsquo;s always good to get a quick refresher and validate our understanding. 
😉\nThe Power Platform is Microsoft\u0026rsquo;s low-code, rapid business application development platform, which can help inspire organisations to do more with less, and often forego the need to develop a new software solution from scratch. Within the Power Platform, we have several different independent yet closely related products that we can leverage:\nPower Apps: These come in two flavours. Model-driven apps are designed for desktop-based scenarios, where your application needs to sit within the confines of a strict data model. In this respect, you may often hear these types of apps referred to as data-driven applications. If you\u0026rsquo;ve come from a Dynamics CRM / Dynamics 365 background, then you may recognise a lot of the functionality available within model-driven apps. In comparison, Canvas apps are geared towards mobile-first scenarios, providing app developers with a high degree of freedom in designing their apps and deploying them to a wide variety of different devices or alongside other applications within the Power Platform. Canvas apps also have the benefit of being interoperable with a wide variety of data sources. Whether you wish to connect to an on-premise SQL Server instance, other Microsoft solutions such as SharePoint or third-party apps, such as Twitter, connectors are available to perform common Create, Read, Update and Delete (CRUD) operations and more. Power BI: A next-generation Business Intelligence (BI) tool, Power BI provides impressive data modelling, visualisation, and deployment capabilities that enable organisations to understand data from their various business systems better. Despite having its own set of tools and languages, traditional Excel power users should have little difficulty getting to grips with Power BI, thereby allowing them to migrate existing Excel-based reports across with ease. Power Automate: As a tool designed to automate various business processes, Power Automate flows can trigger specific activities based on events from almost any application system. It is a modern and flexible tool that you can use to address various integration requirements. Power Virtual Agents: Many of us will be familiar with the various live chat solutions we see across different websites that are often operated by one or multiple individuals and help answer commonly asked queries. Power Virtual Agents takes this a step further by allowing for an automated, always-on bot to reside within your website or Microsoft Teams site that individuals can then engage with. Developers construct a chatbot using an interactive editor and can straightforwardly incorporate external integrations without writing any code. Microsoft Dataverse (formerly known as the Common Data Service): The Dataverse provides a \u0026ldquo;no-code\u0026rdquo; environment to create tables, relationships and business logic, to name but a few of its capabilities. Within the Dataverse, Microsoft has standardised the various tables to align with The Common Data Model. This open-source initiative seeks to provide a standard definition of commonly used business data constructs, such as Account or Contact. The diagram below lazily stolen lovingly recycled from Microsoft illustrates all of these various applications and how they work together with other Microsoft services you may be familiar with:\nUnderstanding how each separate Power Platform application can work in tandem is critical when building an all-encompassing business application. 
The examples below provide a flavour of how these applications can work together, but the complete list would likely cover several pages:\nIncluding a Power Automate flow as part of a Dataverse solution, allowing you then deploy this out to multiple environments with ease. Being able to embed a Power BI tile or Dashboard with a personal dashboard setup in a model-driven Power App. Embedding a canvas-driven Power App into Power BI, allowing users to update data in real-time. Call a Power Automate flow from a Power Virtual Agent to return information from an on-premise Oracle database. As a solution architect, we are expected to be the resident Subject Matter Expert (SME) in the Power Platform and advise on the best mix of capabilities to leverage to meet a business requirement. Therefore, we must have a solid grasp of the above for our day-to-day role and the exam itself.\nAnd the Rest\u0026hellip;Dynamics 365, AppSource and more One of the benefits of working with the Power Platform is that we can generally turn to various \u0026ldquo;off the shelf\u0026rdquo; capabilities to help meet our particular requirements. As a solution architect, we need to have a good awareness of what\u0026rsquo;s out there and available and, as much as possible, reduce our reliance on any custom development work and effort to help meet a particular requirement. The benefits of this are clear; we can reduce the time it takes to build our solution, avoid situations where we rely on complex/unwieldy functionality and reduce \u0026ldquo;cost\u0026rdquo;. And by \u0026ldquo;cost\u0026rdquo;, I\u0026rsquo;m not just referring to a monetary value. There will always be a hidden cost, in terms of time/effort, that a bespoke solution will bring to the equation. Our role is to evaluate this, compare it against the monetary cost of bringing in a pre-built solution instead and ensure we have achieved an effective balance. In short, we should try and take a leaf out of Thanos\u0026rsquo; playbook here\u0026hellip;😏\nSo what are the things we need to be aware of here? We can broadly fit this into four categories:\nDynamics 365 Customer Engagement Apps: As we should already know, all of these applications (also called the \u0026ldquo;customer data platform\u0026rdquo; apps) are built upon and leverage the Power Platform extensively. As part of this, we have a range of applications that are designed to meet a variety of standard business requirements, ranging from customer service to managing complex projects. All of these applications rely on the Microsoft Dataverse as the backend data source, meaning it\u0026rsquo;s incredibly straightforward to factor them in as part of your overall solution. We will look at all of these apps in detail later on in the series. AppSource: Within the Microsoft and partner eco-system, there are various additional solutions for organisations to explore and leverage alongside their Power Platform solution. AppSource will be our primary destination to find and install these into our environment(s). The added benefit is that all of these solutions will typically contain a free trial of some description, thereby allowing us to validate their suitability for our project fully. We\u0026rsquo;ll dive into AppSource in further detail later on in the series. Third-Party Solutions: The definition of this would appear to be somewhat unclear, but I would understand this to be any solution available that may sit external to the Power Platform or one that is not tied to any financial cost to obtain. 
For example, it could be that you need to include a non-Microsoft application or leverage a Power Apps Component Framework (PCF) control available as part of the PCF Gallery to meet your overall requirements. The exact solutions used will be particular to each project. ISV Solutions: Typically, most committed Independent Software Vendors (ISVs) will have their appropriate offerings available on AppSource. But there may be occasions where this is not practical due to their solution\u0026rsquo;s complexity, size, or deployment nature. It\u0026rsquo;s impossible to cover or, indeed, identify (unless any ISV wants to pay me 😁) where to find and locate these. Still, there will often be an occasion to engage one or several ISVs to leverage a solution they have built. The typical evaluation trajectory as Solution Architect will follow the list above (i.e. start with Dynamics 365 first and ISV solutions last). And, provided that we have gone through this and eliminated all options, we can then advise building a bespoke solution to meet our requirements.\nMapping Business Requirements into the Power Platform Although we won\u0026rsquo;t be responsible for capturing these, the solution architect will be relied upon to evaluate a set of requirements and be expected to suggest a solution using their SME domain knowledge. The challenge emerges in sometimes choosing the \u0026ldquo;right\u0026rdquo; solution. We can typically solve the same kind of problem using multiple mechanisms in the Power Platform, but we often need to be guided towards a functional and \u0026ldquo;out of the box\u0026rdquo; solution as our first preference. An excellent example of this could be using a Power Automate flow to contact an external API instead of a Microsoft Dataverse plug-in. Both of these options are viable from an implementation and workability standpoint, but it\u0026rsquo;d be best to choose a Power Automate flow in this situation. The primary reason? It reduces our reliance on custom code and will make our solution easier to maintain once it goes live. This represents one of the core elements we, as solution architects, must always be conscious of when recommending a solution to the business. To provide a more comprehensive flavour, the table below includes some example business requirements and an indicated approach on how we could map these across effectively into the Power Platform:\nBusiness Requirement Solution As a member of the support team, I need to be able to sell a support agreement to customers who didn\u0026rsquo;t buy one with their original purchase so I can get their support request assigned to be resolved. Support agreements and integration alongside existing customer bases are core functionality as part of the Dynamics 365 Customer Service application. Therefore, this represents the optimal route to meeting the requirement. As a C-level executive, I need to see real-time metrics on the support team\u0026rsquo;s performance, so I know what\u0026rsquo;s going on. This requirement is interesting, as there are two potential approaches here - either Create a dashboard within a model-driven app or Build a Power BI Dashboard and leverage TDS/DirectQuery capability in the Microsoft Dataverse connector. Your ultimate decision here will likely be based on what licenses the organisation has in place and whether or not the support team plans to use, or is using, a model-driven app. 
Requirements like this are typical in terms of lacking core detail, so our job at this stage may well be to interrogate further.\nAs a member of the support staff, I should be able to see a list of open help requests so I can pick one to work on for which I am qualified. Setting up a custom view within Microsoft Dataverse will then allow us to present this information as part of a model-driven Power App. As a customer, I don\u0026rsquo;t want to wait on hold, so I should be able to open a new request from Contoso\u0026rsquo;s website using an interactive chat. The need for interactive chat functionality indicates a solution leveraging Power Virtual Agents, with Dynamics 365 Omnichannel for Customer Service included. As a field technician, I should be able to view my next scheduled customer visit and receive details of the problem and directions if needed so that I can resolve the customer\u0026rsquo;s problem. The functionality described here covers most, if not all, of the core capabilities within the Dynamics 365 Field Service application, which represents our most optimal method for meeting the requirement. As we can see from the above, requirements are mapped as much as possible to an existing, available application within Dynamics 365. As discussed earlier, and for exam purposes, we need to be aware of and leverage this functionality where feasible as part of our projects.\nIndividual requirements for a Power Platform project will typically vary, and, as a solution architect, we should expect the unexpected. However, there will be situations where a degree of commonality emerges. Therefore, you can be confident when suggesting a potential approach; if it worked well last time, chances are it will do again.\nEstimating Migration Effort In pretty much all cases, organisations will be looking to migrate into the Power Platform from some other kind of system. Whether this is a manual-based process, an existing, on-premise system, a solution from another vendor\u0026hellip;the list is endless. Each new project that a solution architect will be involved in will throw up its own set of unique challenges and circumstances that can impact our migration and the assumed effort. To help us make the appropriate determinations, here are some suggested questions that we can ask the organisation:\nHow normalised are the datasets that need to be migrated across? Data that already maps well across into Dataverse tables will present little challenge from an import standpoint. In contrast, other datasets or an incomplete process will require additional consideration and time to fit within the Power Platform. What is the size of the organisation in question? As a good rule of thumb, smaller organisations will be a lot easier to work with and, therefore, any estimates can reflect this accordingly. Is the organisation already using the Power Platform or equivalent cloud technologies, such as Microsoft 365 or Microsoft Azure? The organisation\u0026rsquo;s maturity in working in a cloud-first world can impact your project\u0026rsquo;s ease or difficulty. A relatively mature organisation should be prepared to introduce technologies such as the Power Platform with less overall fuss. What other system(s) need to integrate alongside the Power Platform solution? This will always present the most challenging aspect of any project and can significantly increase the effort involved as part of your migration. What dependencies does our project have on other teams, departments or external organisations? 
Projects split across a wide net can often suffer from frequent communication or collaboration issues. If the right person/team is not available at the right time, it could even severely impact your plans to go live. The solution architect should have a good grasp of all core individuals/teams required and support efforts to align each one at the right time. Answering these and similar questions will allow a solution architect to understand how costly the migration will be in terms of resource and effort. From an estimation standpoint, you should also plan to add any appropriate buffers into anything you present to the business; things can, and will regularly, go wrong, so you want to account for that and avoid any awkward conversations down the road.\nIf you fail to plan, you plan to fail. This sentiment applies so well to the Power Platform, so understanding the topics covered here will be essential as you start to design your solution. Next time in the series, we\u0026rsquo;ll look at how we can evaluate an organisation\u0026rsquo;s existing systems, information, and core metrics to ensure we plan for success as a solution architect.\n","date":"2021-08-08T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-600-revision-notes-initiating-planning-for-your-power-platform-solution/","title":"Exam PL-600 Revision Notes: Initiating Planning for your Power Platform Solution"},{"content":"Before we begin, I should highlight that most of the work in bringing this solution together came via my colleague Andrew Bennett, based on a high-level steer from yours truly. Thanks a lot, Andrew and well done for figuring all this out!\nSeveral times for me recently, a requirement has arisen involving Dynamics 365 Project Service Automation / Project Operations and, specifically, of the need to report on the individual calendar bookings for a Bookable Resource. It seems like such a straightforward ask, right? Well, no, in fact - the table that stores this information behind the scenes, calendarrule, is completely locked down from an SDK standpoint, meaning it\u0026rsquo;s impossible to perform RetrieveMultiple requests against it to extract the information we want. Instead, Microsoft has made available a dedicated Web API function, called ExpandCalendar, that does precisely what its name implies. All we need to do is pass through the details of the Bookable Resources calendar, the date ranges we wish to interrogate further, and the relevant information from within the calendarrule table will generate. Working with this function for a single Bookable Resource doesn\u0026rsquo;t present too much of a challenge. But suppose we want to use Power BI instead and apply this function against many hundreds or thousands of rows. In that case, complications can arise because the Dataverse connector(s) available to us today do not allow us to work with these types of functions straightforwardly. The purpose of this post is to demonstrate an approach that will enable us to circumnavigate this limitation and work with the ExpandCalendar function in the context of multiple Bookable Resource rows. Get your belts tightened, as this could be a bumpy ride\u0026hellip; 😅\nTo begin with, we need to jump into the Power Query Editor within our Power BI Desktop file to add a new parameter to our model. 
This will define the Web API URL of the Dataverse environment we\u0026rsquo;re connecting to:\nNext, because for our scenario, we want to expand out the details of each Bookable Resources calendar as part of an existing table, we must then define a custom function that accepts the same inputs as the ExpandCalendar request. Create a new blank query and then copy \u0026amp; paste in the following snippet for the function:\nlet Source = (calendarid as text, start as date, end as date) =\u0026gt; let relativePath = \u0026#34;calendars(\u0026#34; \u0026amp; calendarid \u0026amp;\u0026#34;)/Microsoft.Dynamics.CRM.ExpandCalendar(Start=@start,End=@end)\u0026#34;, queryStart = Record.AddField([], \u0026#34;@start\u0026#34;, Date.ToText(start, \u0026#34;yyyy-MM-dd\u0026#34;)), query = Record.AddField(queryStart, \u0026#34;@end\u0026#34;, Date.ToText(end, \u0026#34;yyyy-MM-dd\u0026#34;)), raw = Web.Contents(DataverseAPIAddress, [RelativePath=relativePath,Query=query]), json = Json.Document(raw), result = json[result] in result in Source You\u0026rsquo;ll know you\u0026rsquo;ve added it correctly if you see something like this after hitting the Done option:\nNow, bring the bookableresource table into your model using the standard Microsoft Dataverse connector. In this example, we\u0026rsquo;ll restrict the view down so that only the following columns are visible; but feel free to keep any additional columns you need:\nbookableresourceid calendarid name resourcetype You can use the below snippet to speed things along here:\nlet Source = Cds.Entities(DataverseAPIAddress, null), entities = Source{[Group=\u0026#34;entities\u0026#34;]}[Data], bookableresources = entities{[EntitySetName=\u0026#34;bookableresources\u0026#34;]}[Data], #\u0026#34;Removed Other Columns\u0026#34; = Table.SelectColumns(bookableresources,{\u0026#34;bookableresourceid\u0026#34;, \u0026#34;calendarid\u0026#34;, \u0026#34;name\u0026#34;, \u0026#34;resourcetype\u0026#34;}) in #\u0026#34;Removed Other Columns\u0026#34; Finally, we now want to expand the bookableresource query and add a few manual steps to execute the ExpandCalendar function call. In this scenario, we need to return calendar bookings within a 12 month period (i.e. from the start to the end of the current year). So we can add on a few steps to our query to figure this out and then put together the correct dates that we want to pass through. 
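In isolation, those date steps could look something like the sketch below (a minimal, illustrative snippet using the same Power Query functions that the full query further down folds in; note that, as written, the window actually spans from the start of last year through to the end of next year - narrow it with Date.StartOfYear and Date.EndOfYear of the current date if you strictly only want the current year):

let
    // Work out today's date, then derive the reporting window from it
    currentDate = DateTime.Date(DateTime.LocalNow()),
    startOfCurrentYear = Date.StartOfYear(currentDate),
    startOfLastYear = Date.AddYears(startOfCurrentYear, -1),
    endOfNextYear = Date.AddYears(startOfCurrentYear, 2)
in
    [Start = startOfLastYear, End = endOfNextYear]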
Here\u0026rsquo;s the updated bookableresource query snippet to use:\nlet Source = Cds.Entities(DataverseAPIAddress, null), entities = Source{[Group=\u0026#34;entities\u0026#34;]}[Data], bookableresources = entities{[EntitySetName=\u0026#34;bookableresources\u0026#34;]}[Data], #\u0026#34;Removed Other Columns\u0026#34; = Table.SelectColumns(bookableresources,{\u0026#34;bookableresourceid\u0026#34;, \u0026#34;calendarid\u0026#34;, \u0026#34;name\u0026#34;, \u0026#34;resourcetype\u0026#34;}), currentDateTime = DateTime.LocalNow(), currentDate = DateTime.Date(currentDateTime), startOfCurrentYear = Date.StartOfYear(currentDate), startOfLastYear = Date.AddYears(startOfCurrentYear, -1), endOfNextYear = Date.AddYears(startOfCurrentYear, 2), result = Table.AddColumn(#\u0026#34;Removed Other Columns\u0026#34;, \u0026#34;Calendar\u0026#34;, each Table.FromRecords(ExpandCalendarRequest([calendarid],startOfLastYear, endOfNextYear))) in result Note you may, at this stage, get a privacy level warning - in the dialog that appears, set this to the same as your other data sources, and then the data should load fine:\nAs a table object is returned for each row, we then need to define which columns we are interested in viewing. In this example, we want to see the start / end dates and determine the type of appointment we are looking at:\nFrom there, you can then proceed to build out your report and then deploy it out to the online service, if needed.\nI\u0026rsquo;m still a little baffled why the calendarrule table is locked down the way it is. Still, with the ExpandCalendar function, I\u0026rsquo;m thankful that we have a route to interrogating the platform further for what is a pretty vital information point that most organisations are keenly interested in. I hope this post helps you work with this function using Power BI and avoid a situation where you have to consider alternate, bespoke routes to meet a similar requirement.\n","date":"2021-08-01T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/working-with-the-expandcalendar-function-in-power-bi-desktop-microsoft-dataverse-dynamics-365/","title":"Working with the ExpandCalendar Function in Power BI Desktop (Microsoft Dataverse / Dynamics 365)"},{"content":"Importing solutions into a Microsoft Dataverse environment can often cause many frustrations when working with the Power Platform, especially when they error for reasons that are not immediately discernable. A great example of this came up for us recently, as part of a managed solution deployment that involved customisations to the Dynamics 365 Customer Service application:\nThis solution cannot be uninstalled because the \u0026lsquo;LocalizedLabel\u0026rsquo; with id \u0026lsquo;35ec5cd1-4492-41c7-8e9f-f679206db6a8 \u0026amp; 8d98231d-bae3-eb11-bacb-000d3a2b12fc(Workflow)\u0026rsquo; is required by the \u0026lsquo;SolutionA_Upgrade\u0026rsquo; solution. Uninstall the SolutionA_Upgrade solution and try again.\nIn effect, the install of the solution had been completed successfully (indicated by the fact that we had two solutions present in the environment - SolutionA and SolutionA_Upgrade). The actual upgrade step was where things were going wrong. As part of the update, we looked to migrate across legacy record creation rules to the new unified interface to ensure that we were no longer reliant on the classic interface. To summarise, we created the new rules, added them to our solution, and removed the old rules from the solution completely. 
It turns out that, upon closer inspection, this step was not sufficient; hence the errors we were receiving above. After raising a support ticket, Microsoft advised us to complete the following steps to deploy things out successfully:\n1. Navigate to the Default Solution within the target environment.\n2. Locate the legacy Record Creation rule(s) to be removed and delete them manually.\n3. Re-attempt the solution upgrade.\nAfter doing this, our solution upgrade completed successfully 😀\nSolution import errors can often eat up a lot of our time. In most cases, the error messages can generally point us in the right direction of where to look. Still, there will be occasions - like this one - where it will be simply quicker to escalate it through to Microsoft for a resolution. They can typically access more detailed logs behind the scenes and, as we were (pleasantly) surprised in this instance, get to the root of the issue and resolve it quickly. Thanks, Microsoft support!\n","date":"2021-07-25T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/resolving-localizedlabel-solution-install-error-microsoft-dataverse-dynamics-365-customer-service/","title":"Resolving LocalizedLabel Solution Install Error (Microsoft Dataverse / Dynamics 365 Customer Service)"},{"content":"I\u0026rsquo;ve blogged previously regarding environment variables, mainly in the context of Power Automate flows and canvas Power Apps. Suffice to say, they are a handy tool to use whenever you introduce an application lifecycle management (ALM) process into your Power Platform project. This is because they provide a straightforward mechanism to alter the behaviour of your applications as you push them out into downstream environments by parameterising aspects of your configuration. For example, you could adjust the destination email address for all Power Automate flows so that notifications will always land where they need to. It\u0026rsquo;s worth spending some time studying up on them and considering how you can start to leverage them within your projects.\nAs great as I think they are, I came across a recent issue with them that I thought I\u0026rsquo;d touch upon in today\u0026rsquo;s blog post. Let\u0026rsquo;s assume a scenario where we want to leverage an environment variable value as part of a model-driven app client-side script using JavaScript. There is no dedicated function that we can call currently to return an environment variable value to us. So instead, we could look to make a straightforward Retrieve request via the Dataverse Web API to get this, similar to the example we can see below:\nXrm.WebApi.retrieveRecord(\u0026#39;environmentvariablevalue\u0026#39;, \u0026#39;87cd5b2c-b3a2-eb11-b1ac-000d3a4cd2e8\u0026#39;, \u0026#39;?$select=value\u0026#39;).then( function success(result) { var value = result.value; //TODO: Add code here to process the Environment Variable value }, function (error) { Xrm.Navigation.openErrorDialog({ details: error.message, message: \u0026#39;A problem occurred while retrieving an Environment Variable value. Please contact support.\u0026#39;}); } ) This action is possible because all values are stored within the Environment Variable Value table, which we can freely query via multiple mechanisms within Microsoft Dataverse. And, as this example indicates, an assumption is made that the underlying Globally Unique Identifier (GUID) of this row always remains the same, regardless of which environment we are in. A pretty safe assumption, right? Right\u0026hellip;?
😅 Well, as I found out when we did a recent deployment involving a similar code snippet above, this GUID will not remain the same, even if you choose to include the current environment variable value as part of your solution. So effectively, this leaves our code snippet above wholly unusable and impractical as a solution. Instead, we need to look at an alternative approach that leverages a RetrieveMultiple request, with an adjusted OData query parameter, to return the environment variable value we want, based on the schema name of the Environment Variable. So assuming this is called jjg_myawesomeev, our code snippet would need to be updated as follows:\nXrm.WebApi.retrieveMultipleRecords(\u0026#39;environmentvariablevalue\u0026#39;, \u0026#34;?$select=value\u0026amp;$expand=EnvironmentVariableDefinitionId\u0026amp;$filter=(EnvironmentVariableDefinitionId/schemaname eq \u0026#39;jjg_myawesomeev\u0026#39;)\u0026#34;).then( function success(result) { var value = result.entities[0].value; //TODO: Add code here to process the Environment Variable value }, function (error) { Xrm.Navigation.openErrorDialog({ details: error.message, message: \u0026#39;A problem occurred while retrieving an Environment Variable value. Please contact support.\u0026#39;}); } ) As the schema name of our Environment Variables must always be unique, this is the next best thing available to use to ensure we can retrieve a single value. It is a little bit frustrating, perhaps, but when you understand how Microsoft has set up Environment Variables behind the scenes, it makes sense. Here\u0026rsquo;s hoping that, in the future, we get a more reliable method made available to us that will handle all of this for us.\n","date":"2021-07-18T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/retrieving-environment-variable-values-via-javascript-microsoft-dataverse-power-apps/","title":"Retrieving Environment Variable Values via JavaScript (Microsoft Dataverse / Power Apps)"},{"content":"Azure Key Vault and Azure App Service / Azure Function Apps are like cherries and cream - delicious individually, but so much better when you work with them together. The first of these services provides a secure mechanism for managing our passwords, secrets, encryption keys, and certificates within the cloud as part of a fully managed service. In contrast, the latter is our go-to tool for any software application or code that we want to run in the cloud where, again, we don\u0026rsquo;t want to concern ourselves with spinning up and managing any physical infrastructure. A common mechanism through which developers may use the services together is by having, for example, an application setting that links to the URL of a Key Vault secret. The Azure Resource Manager (RM) template snippet below demonstrates how we could do this. 
Because we\u0026rsquo;re leveraging Key Vault access policies and a system-assigned managed identity on our Function App, we can also ensure that only our Function App can see and work with the secret that we\u0026rsquo;ve defined:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;Parameters\u0026#34;: { \u0026#34;kv_MySecretValue\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;securestring\u0026#34;, \u0026#34;metadata\u0026#34;: { \u0026#34;description\u0026#34;: \u0026#34;Value for the MySecret Key Vault Secret\u0026#34; } } }, \u0026#34;Variables\u0026#34;: {}, \u0026#34;resources\u0026#34;: [ { \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-08-01\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/sites\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyFunctionApp\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;kind\u0026#34;: \u0026#34;functionapp\u0026#34;, \u0026#34;Identity\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;SystemAssigned\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;siteConfig\u0026#34;: { \u0026#34;appSettings\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;AzureWebJobsStorage\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;[concat(\u0026#39;DefaultEndpointsProtocol=https;AccountName=mystorageaccount;EndpointSuffix=\u0026#39;, environment().suffixes.storage, \u0026#39;;AccountKey=\u0026#39;,listKeys(resourceId(\u0026#39;Microsoft.Storage/storageAccounts\u0026#39;, \u0026#39;mystorageaccount\u0026#39;), \u0026#39;2019-06-01\u0026#39;).keys[0].value)]\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;AzureWebJobsDashboard\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;[concat(\u0026#39;DefaultEndpointsProtocol=https;AccountName=mystorageaccount;EndpointSuffix=\u0026#39;, environment().suffixes.storage, \u0026#39;;AccountKey=\u0026#39;,listKeys(resourceId(\u0026#39;Microsoft.Storage/storageAccounts\u0026#39;, \u0026#39;mystorageaccount\u0026#39;), \u0026#39;2019-06-01\u0026#39;).keys[0].value)]\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;FUNCTIONS_EXTENSION_VERSION\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;~1\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;WEBSITE_NODE_DEFAULT_VERSION\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;~10\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;FUNCTIONS_WORKER_RUNTIME\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;dotnet\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;MyKeyVaultSecretURL\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;https://mykeyvault.vault.azure.net/secrets/MySecret?api-version=2016-10-01\u0026#34; } ], \u0026#34;httpsOnly\u0026#34;: true } }, \u0026#34;resources\u0026#34;: [], \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Storage/storageAccounts\u0026#39;, \u0026#39;mystorageaccount\u0026#39;)]\u0026#34; ] }, { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Storage/storageAccounts\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;mystorageaccount\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-06-01\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;kind\u0026#34;: \u0026#34;StorageV2\u0026#34;, \u0026#34;sku\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Standard_LRS\u0026#34; } }, { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.KeyVault/vaults\u0026#34;, 
\u0026#34;apiVersion\u0026#34;: \u0026#34;2020-04-01-preview\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyKeyVault\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;sku\u0026#34;: { \u0026#34;family\u0026#34;: \u0026#34;A\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;standard\u0026#34; }, \u0026#34;tenantId\u0026#34;: \u0026#34;[subscription().tenantId]\u0026#34;, \u0026#34;accessPolicies\u0026#34;: [ { \u0026#34;tenantId\u0026#34;: \u0026#34;[subscription().tenantId]\u0026#34;, \u0026#34;objectId\u0026#34;: \u0026#34;[reference(resourceId(\u0026#39;Microsoft.Web/sites\u0026#39;, \u0026#39;MyFunctionApp\u0026#39;), \u0026#39;2019-08-01\u0026#39;, \u0026#39;full\u0026#39;).identity.principalId]\u0026#34;, \u0026#34;permissions\u0026#34;: { \u0026#34;keys\u0026#34;: [], \u0026#34;secrets\u0026#34;: [ \u0026#34;Get\u0026#34; ], \u0026#34;certificates\u0026#34;: [], \u0026#34;storage\u0026#34;: [] } } ], \u0026#34;enabledForDeployment\u0026#34;: false, \u0026#34;enabledForDiskEncryption\u0026#34;: false, \u0026#34;enabledForTemplateDeployment\u0026#34;: false, \u0026#34;enableSoftDelete\u0026#34;: true, \u0026#34;softDeleteRetentionInDays\u0026#34;: 90, \u0026#34;enableRbacAuthorization\u0026#34;: false, \u0026#34;vaultUri\u0026#34;: \u0026#34;https://mykeyvault.vault.azure.net/\u0026#34; }, \u0026#34;resources\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.KeyVault/vaults/secrets\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2020-04-01-preview\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyKeyVault/MySecret\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.KeyVault/vaults\u0026#39;, \u0026#39;MyKeyVault\u0026#39;)]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;[parameters(\u0026#39;kv_MySecretValue\u0026#39;)]\u0026#34;, \u0026#34;contentType\u0026#34;: \u0026#34;MySecret Value\u0026#34;, \u0026#34;Attributes\u0026#34;: { \u0026#34;enabled\u0026#34;: true, \u0026#34;exp\u0026#34;: 1681858800 } } } ], \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Web/sites\u0026#39;, \u0026#39;MyFunctionApp\u0026#39;)]\u0026#34; ] } ], \u0026#34;outputs\u0026#34;: {} } From there, we can then reference the URL from within our code to retrieve our secret value. Pretty handy, right? But there\u0026rsquo;s a better way available to us to handle the retrieval action for us automatically. Enter stage left Key Vault References, a handy pseudo-function that we can leverage as part of any application setting to return the value of a secret we may be interested in, without any additional retrieval operation required. Using it is straightforward, and all we need to do is modify our appSetting value as indicated below:\n{ \u0026#34;name\u0026#34;: \u0026#34;MyKeyVaultSecretURL\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;@Microsoft.KeyVault(VaultName=MyKeyVault;SecretName=MySecret)\u0026#34; } Once deployed, we can verify that the reference is working and resolving back our secret value by checking it in the portal:\nLovely Jubbly! One small thing I\u0026rsquo;ve seen with them \u0026ldquo;in the field\u0026rdquo; is that if you go in and update the value of your secret (i.e. create a new version), this sometimes does not immediately reflect itself. 
I believe some process behind the scenes does a rotation on the App Service itself, but this will not occur straightaway after a deployment. Answers on a postcard if you know how to force this but, as a workaround, you can add in the secret version, which would look something like this:\n@Microsoft.KeyVault(VaultName=MyKeyVault;SecretName=MySecret;SecretVersion=ec96f02080254f109c51a1f14cdb1931)\nYou can easily determine the current, valid secret version to use by checking the Key Vault in question:\nAs I said to a developer colleague not long ago, Key Vault References are perhaps the most useful thing I\u0026rsquo;ve ever learned from studying for an Azure exam. 🤣🤣 Hopefully, you\u0026rsquo;ll agree that they are a nifty feature that can save us some aggro as we work with these services in tandem.\n","date":"2021-07-11T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/working-with-azure-key-vault-references-in-azure-app-service-azure-functions/","title":"Working with Azure Key Vault References in Azure App Service / Azure Functions"},{"content":"Sometimes you can have moments of anguish fun trying to figure out a particular issue involving Microsoft Dataverse solutions and why specific components may not be transporting across into other environments as expected. A great example of this came up for us recently on a project, which I thought I\u0026rsquo;d share as part of today\u0026rsquo;s blog post. Let\u0026rsquo;s assume we have a scenario that looks a little like this:\nWithin our Dataverse environment, we have two tables - in this case, One Table and Many Table: Both of these tables have an identical set of columns defined for them, a relationship between them both (1:N between the One Table and Many Table - hence the names 😉) and the appropriate column mappings defined: For this example, we\u0026rsquo;ve included the full definition of the Many Table table, but nothing for the One Table**.** For the purposes of this scenario, let\u0026rsquo;s assume that the table/columns our relationship references are within another solution, or we\u0026rsquo;re working with some unmodified, out of the box components that we don\u0026rsquo;t want to transport. With the scenario defined, we can now jump into the problem - namely, that none of the column mappings we\u0026rsquo;ve defined with our relationship will export correctly. We can verify this by checking the raw customizations.xml file, which confirms that our relationship definition (EntityRelationship) has exported correctly but has wholly excluded the mappings (EntityMaps) we\u0026rsquo;ve defined:\nThe solution is remarkably straightforward and, perhaps, obvious - add in the table and all columns defined as part of the mapping. After doing this and exporting the solution again, we can then verify that the EntityMap node now contains the mappings we expect:\nFrom there, you should then see your defined column mappings getting deployed out successfully into other environments. 😀\nProblems of this nature tend to have a straightforward answer but can sometimes be quite tricky to diagnose as they occur in the moment. 
Hopefully, if you\u0026rsquo;ve stumbled upon this post after coming across the same issue, you\u0026rsquo;ll know what you need to do now to get things working as they should be.\n","date":"2021-07-04T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/relationship-mappings-not-exporting-correctly-in-solution-microsoft-dataverse/","title":"Relationship Mappings Not Exporting Correctly in Solution (Microsoft Dataverse)"},{"content":"Welcome to the final post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. In today\u0026rsquo;s post, I wanted to consolidate all of the content from the series into a single, concise post for ease of access. I\u0026rsquo;ll also provide some general advice and tips that I hope will come in useful for when you sit the exam.\nThis series has aimed to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nMicrosoft has split the PL-400 exam into several different areas based on the specification found here. Therefore, I have linked below to the relevant blog/video content from the series for each applicable subject.\nCreate a technical design (10-15%) Validate requirements and design technical architecture design and validate the technical architecture for a solution design authentication and authorization strategy determine whether you can meet requirements with out-of-the-box functionality determine when to use Logic Apps versus Power Automate flows determine when to use serverless computing, plug-ins, or Power Automate determine when to build a virtual entity data source provider and when to use connectors Design solution components design a data model design Power Apps reusable components design custom connectors design server-side components Describe Microsoft Power Platform extensibility points describe Power Virtual Agents extensibility points including Bot Framework skills and Power Automate flows describe Power BI extensibility points including Power BI APIs, custom visuals, and embedding Power BI apps in websites and other applications describe Power Apps portal extensibility points including CRUD APIs and custom styling describe Web Resources and their uses Blog Posts Exam PL-400 Revision Notes: Designing a Technical Architecture for the Power Platform\nExam PL-400 Revision Notes: Designing Solution Components within the Power Platform\nExam PL-400 Revision Notes: Reviewing Power Platform Extensibility Points\nConfigure Microsoft Dataverse (15-20%) Configure security to support development troubleshoot operational security issues create or update security roles and field-level security profiles configure business units and teams Implement tables and columns configure tables and table options configure columns configure relationships and types of behaviours Implement application lifecycle management (ALM) create solutions and manage solution components import and export solutions manage solution dependencies create a package for deployment automate deployments implement source control for projects including solutions and code assets Blog Posts Exam PL-400 Revision Notes: Using Solutions to implement Application Lifecycle Management (ALM) Capabilities\nExam PL-400 Revision Notes: Modelling Data using Tables, Columns \u0026amp; Relationships in 
Microsoft Dataverse\nExam PL-400 Revision Notes: Implementing Security within Microsoft Dataverse\nVideos PL-400 Exam Prep: Creating and Deploying a Managed Solution PL-400 Exam Prep: Working with the Solution Packager Tool PL-400 Exam Prep: Extracting and Deploying Solutions using Azure DevOps PL-400 Exam Prep: Working with Tables, Columns \u0026amp; Relationships in Microsoft Dataverse PL-400 Exam Prep: Understanding Security in Microsoft Dataverse Create and configure Power Apps (15-20%) Create model-driven apps configure a model-driven app configure forms configure columns configure visualizations configure commands and buttons Create canvas apps create and configure a canvas app implement complex formulas to manage control events and properties analyze app usage by using App Insights build reusable component libraries Manage and troubleshoot apps troubleshoot app issues by using Monitor and other browser-based debugging tools interpret results from App Checker and Solution Checker identify and resolve connector and API errors optimize app performance including pre-loading data and query delegation Blog Posts Exam PL-400 Revision Notes: Working with Model-Driven Power Apps\nExam PL-400 Revision Notes: Working with Canvas Apps\nExam PL-400 Revision Notes: Managing and Troubleshooting a Power App\nVideos PL-400 Exam Prep: Creating a Model-Driven Power App Form, Form, View, Chart \u0026amp; Dashboard PL-400 Exam Prep: Working with Canvas Power Apps Configure business process automation (5-10%) Configure Power Automate create and configure a flow configure steps to use Dataverse connector actions and triggers implement complex expressions in flow steps implement error handling troubleshoot flows by analyzing JSON responses from connectors Implement processes create and configure business process flows create and configure business rules create, manage, and interact with business process flows by using server-side and client-side code troubleshoot processes Blog Posts Exam PL-400 Revision Notes: Working with Power Automate Flows\nExam PL-400 Revision Notes: Implementing Business Process Flows and Business Rules\nVideos PL-400 Exam Prep: Business Process Automation Using Power Automate Cloud Flows PL-400 Exam Prep: Designing and Interacting with a Business Process Flow PL-400 Exam Prep: Working with Business Rules Extend the user experience (10-15%) Apply business logic using client scripting create JavaScript or TypeScript code that targets the Client API object model register an event handler create client-side scripts that target the Dataverse Web API Create a Power Apps Component Framework (PCF) component describe the PCF component lifecycle initialize a new PCF component configure a PCF component manifest implement the component interfaces package, deploy, and consume the component configure and use PCF Device, Utility, and WebAPI features test and debug PCF components by using the local test harness Create a command button function create commands design command button rules and actions edit the command bar by using the Ribbon Workbench manage dependencies between JavaScript libraries Blog Posts Exam PL-400 Revision Notes: Implementing Client-Side Scripting on Model Driven Power Apps\nExam PL-400 Revision Notes: Creating a Power Apps Component Framework (PCF) Control\nExam PL-400 Revision Notes: Working with Ribbon Command Buttons\nVideos PL-400 Exam Prep: Creating a Basic JavaScript Form Function for a Model Driven Power App Form PL-400 Exam Prep: Building \u0026amp; Deploying a Power 
Apps Component Framework (PCF) Control PL-400 Exam Prep: Using the Ribbon Workbench to Create a Simple Command Button Extend the platform (15-20%) Create a plug-in describe the plug-in execution pipeline design and develop a plug-in debug and troubleshoot a plug-in implement business logic by using pre-images and post-images perform operations on data by using the Organization service API optimize plug-in performance register custom assemblies by using the Plug-in Registration Tool develop a plug-in that targets a custom action message Create custom connectors create a definition for the API configure API security use policy templates to modify connector behavior at runtime expose Azure Functions as custom connectors create custom connectors for public APIs by using Postman Use platform APIs interact with data and processes by using the Dataverse Web API or the Organization Service implement API limit retry policies optimize for performance, concurrency, transactions, and batching query the Global Discovery service to discover the URL and other information for an organization perform entity metadata operations with the Web API perform authentication by using OAuth Process workloads process long-running operations by using Azure Functions configure scheduled and event-driven function triggers in Azure Functions authenticate to the Microsoft Power Platform by using managed identities Blog Posts Exam PL-400 Revision Notes: Building, Deploying \u0026amp; Debugging Plug-ins using C#\nExam PL-400 Revision Notes: Building a Power Platform Custom Connector\nExam PL-400 Revision Notes: Working with the Microsoft Dataverse Web API and Processing Workloads\nVideos PL-400 Exam Prep: Creating a Microsoft Dataverse Plug-in using Visual Studio 2019 \u0026amp; C# PL-400 Exam Prep: Deploying a Microsoft Dataverse Plug-in using the Plug-in Registration Tool PL-400 Exam Prep: Debugging a Microsoft Dataverse Plug-in using Trace Logging PL-400 Exam Prep: Debugging a Microsoft Dataverse Plug-in Using the Plug-in Registration Tool PL-400 Exam Prep: Creating a Custom Connector for an Azure Function PL-400 Exam Prep: Authenticating into the Microsoft Dataverse Web API PL-400 Exam Prep: Working with the Microsoft Dataverse Global Discovery Endpoint PL-400 Exam Prep: Performing Common Operations with the Microsoft Dataverse API PL-400 Exam Prep: Performing Batch Operations using the Microsoft Dataverse Web API Develop Integrations (5-10%) Publish and consume events publish an event by using the API publish an event by using the Plug-in Registration Tool register service endpoints including webhooks, Azure Service Bus, and Azure Event Hub implement a Dataverse listener for an Azure solution create an Azure Function that interacts with Microsoft Power Platform Implement data synchronization configure entity change tracking read entity change records by using platform APIs create and use alternate keys Blog Posts Exam PL-400 Revision Notes: Publishing and Consuming Events\nExam PL-400 Revision Notes: Implementing Data Synchronisation using Microsoft Dataverse\nVideos PL-400 Exam Prep: Posting Microsoft Dataverse Events to Azure Service Bus PL-400 Exam Prep: Registering \u0026amp; Consuming a Microsoft Dataverse Webhook from an Azure Function PL-400 Exam Prep: Working with Change Tracking in the Microsoft Dataverse Web API PL-400 Exam Prep: Working with Alternate Keys in the Microsoft Dataverse Web API General Exam Preparation Tips Hands-on preparation is essential if you wish to do well in the exam. 
You should ideally set up a Power Apps Developer Plan environment that you can use to experiment with the core functionality within the Power Platform. Keep abreast of the latest Microsoft Docs and Learning Path materials that are related to this exam, as the platform is continually changing all of the time. Familiarise yourself with the general format, length and expected types of questions within a Microsoft exam. If you want to get a feel for the level and type of questions to expect on the exam, you can also purchase an official practice test from Measure Up. Due to the ongoing COVID-19 situation, you will more than likely have to sit your exam using the online proctored experience. Take some time to familiarise yourself with the process involved here, and perform a system test well in advance of your exam date - the last thing you need on the day of the exam is to stress out due to a system or access issue. I hope that you\u0026rsquo;ve found this series useful. Good luck when sitting the exam, and let me know how you got on!\n","date":"2021-06-27T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-series-roundup/","title":"Exam PL-400 Revision Notes: Series Roundup"},{"content":"Welcome to the nineteenth and penultimate post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. Previously, we examined what events are and how they can provide a streamlined mechanism for executing complex business logic outside of Microsoft Dataverse. In today\u0026rsquo;s post, we conclude our review of the Develop Integrations area of the exam by looking at our final exam topic - Implement data synchronisation. For this, Microsoft expects candidates to demonstrate knowledge of the following:\nImplement data synchronization\nconfigure and use entity change tracking read entity change records by using platform APIs create and use alternate keys For a long time now, Microsoft has provided tools that can perform complex or straightforward integrations involving data that resides within Microsoft Dataverse. In some situations, the out of the box capabilities offered to us can negate the need to implement solutions to track data changes or lookup table rows based on alternative identifiers. In discussing the topics listed above today, we will see how these features can achieve these objectives and more.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nChange Tracking Overview If you have any previous experience working with SQL Server, you may be familiar with a built-in capability called Change Tracking. You may infer what this involves quickly from the title. Still, to explain clearly, this is a feature you can enable within your databases/tables to more straightforwardly track when your database records are added, modified or removed. Change Tracking can be beneficial for us in several different scenarios. For example, let\u0026rsquo;s say we have a batch data synchronisation activity. It will typically be more efficient only to process any actual changes that have taken place instead of processing thousands if not millions of records each day. 
Using Change Tracking, we can easily append the appropriate filters to our query to get the precise set of changes we want. Also, by leveraging this \u0026ldquo;out of the box\u0026rdquo; capability, we don\u0026rsquo;t need to spend time building out our own change tracking capability, meaning we can focus our development efforts on more critical activities.\nSo you may be wondering at this stage - why are we talking about something that\u0026rsquo;s nothing to do with the Power Platform? Well, fun fact - Microsoft Dataverse uses SQL Server behind the scenes, and the change tracking capability available to us here is based on the same one we can leverage in SQL Server. We can enable this for any table by simply ticking the appropriate option on the relevant properties page:\nWe can then query change tracking information via the Web API and the RetrieveEntityChangesRequest if we are working with a C# application. Based on the actual changes that have occurred, we will then receive back a list of new rows, updated rows (along with the columns that have changed) and any rows that users may have removed since we last checked. The critical element as part of this is either our delta token (for the Web API) or version number (for the Organisation Service), which we include as part of any subsequent requests we make to our Microsoft Dataverse environment to detect the changes made since the last time. The platform supports multiple token/version numbers existing at any time, but keep in mind that tokens will expire after 90 days. In theory, if your integration processes changes daily or weekly, this should never be a problem; simply ensure you store the relevant portions of the delta link to continue to return only the changes you need.\nDemo: Working with Change Tracking in the Microsoft Dataverse Web API To see how to enable change tracking capability and then query delta changes from Microsoft Dataverse using this, please watch the video below where I showcase the steps involved:\nAlternate Keys: What are they, and why should I use them? Typically, when integrating alongside an external system, that system will contain unique record identifiers that differ from the globally unique identifier (GUID) for each Microsoft Dataverse row. It could be that this identifier is an integer, a string or a mixture of different attribute values (e.g. the address and name of a customer). Due to the myriad of different systems available not just through Microsoft but also through other vendors, it can be challenging to find a straightforward solution to handle this.\nEnter stage left alternate keys, a helpful feature we can enable on one or several columns on a table. By creating them, you are informing Microsoft Dataverse that the contents of this field must always be unique within the system. After an alternate key is defined, the application will create a unique index in the database, covering the specified columns. Therefore, the hidden benefit of this is that queries targeting alternate key values will always run faster. When writing this post, it was possible to use both the Power Apps maker portal and classic interface to create alternate keys; where possible, I would advise you to use the maker portal. An additional benefit with alternate keys is that the SDK supports several capabilities that developers can easily leverage alongside them.
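To give a flavour of what this looks like in practice, once a key is in place a Web API Retrieve can address a row by its alternate key value(s) rather than its GUID. The requests below are purely illustrative sketches - jjg_externalid is a hypothetical key column, the second example assumes a two-column key has been defined over name and address1_city, and the URL and API version should match your own environment:

GET [Organization URI]/api/data/v9.1/contacts(jjg_externalid=\u0026#39;CRM-0001\u0026#39;)
GET [Organization URI]/api/data/v9.1/accounts(name=\u0026#39;Contoso\u0026#39;,address1_city=\u0026#39;Reading\u0026#39;)

With a multi-column key, each value is supplied as a comma-separated name/value pair.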
For example, it\u0026rsquo;s possible to create new records by specifying their alternate key values and, as shown above, execute Web API Retrieve requests using one or multiple alternate key values.\nKeep in mind some of the limitations concerning alternate keys:\nThe platform enforces some rules not only on the size of the alternate key (in bytes) but also on whether certain illegal Unicode characters exist in the field value. If any attribute value breaks these rules, errors will likely occur.\nYou can only have a maximum of 10 per table. Microsoft recently increased this limit.\nThe application relies on a system job to create the appropriate index for each newly defined alternate key. The processing time for this job can take a considerable amount of time to complete, depending on the size of your database. You can navigate into the classic interface at any time to track progress and also evaluate whether any errors have occurred.\nYou are limited to the following data types when defining an attribute as an alternate key: Date and Time, Decimal, Lookup, Option Set, Single Line of Text and Whole Number.\nDemo: Working with Alternate Keys in the Microsoft Dataverse Web API To understand how to define and then query data from the platform using alternate keys, check out the video below:\nAnd that\u0026rsquo;s a wrap\u0026hellip; \u0026hellip;you might expect me to say at this stage. True enough, we\u0026rsquo;ve now covered all of the content within the PL-400 exam. What I want to do in next week\u0026rsquo;s post is perform a round-up that collects together all of the posts/videos published during the series and share some general advice relating to the exam. Catch you again next week! 😉\n","date":"2021-06-20T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-implementing-data-synchronisation-using-microsoft-dataverse/","title":"Exam PL-400 Revision Notes: Implementing Data Synchronisation using Microsoft Dataverse"},{"content":"Welcome to the eighteenth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. Last time around, we explored some of the capabilities offered by the Microsoft Dataverse Web API, which can be particularly useful when you are building integrations targeting this platform. That post finished our discussion on the Extend the platform area of the exam, which has a 15-20% total weighting. We now move into the final exam area, Develop Integrations, and our first subject area concerning events:\nPublish and consume events\npublish an event by using the API publish an event by using the Plug-in Registration Tool register service endpoints including webhooks, Azure Service Bus, and Azure Event Hub implement a Common Data Service listener for an Azure solution create an Azure Function that interacts with Power Platform Although this area of the exam has a somewhat low weighting (5-10%) when compared with the other subjects we\u0026rsquo;ve looked at, there are still some essential things to learn here that are relevant not only to the exam itself but also to your daily travels with the platform. Events are just one way in which you can integrate your Microsoft Dataverse environment alongside Microsoft Azure. Developing modern cloud applications involving core Microsoft products invariably means you must have a general awareness of the capabilities within Azure, so this topic provides an excellent opportunity to increase your knowledge in this area.
Let\u0026rsquo;s dive in now to see what events are all about and how they relate to Azure!\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam. It would help if you also understand working with plug-ins and the Plug-in Registration Tool, which we\u0026rsquo;ve already covered in the series.\nWhat are Events? You may be worried at this stage that events are a whole new concept that will take considerable time to understand. Fortunately, that\u0026rsquo;s not the case at all and, if you have good familiarity working with the IExecutionContext Interface from within a plug-in, you will find yourself right at home. This is because Events encapsulate all data points that we would typically work within our execution context - whether that be shared variables, output parameters, Business Unit ID or more. An example of how an event can look, when passed out as a JSON object, can be seen below:\n{ \u0026#34;BusinessUnitId\u0026#34;: \u0026#34;f64c9d1f-d076-ea11-a812-000d3a86a586\u0026#34;, \u0026#34;CorrelationId\u0026#34;: \u0026#34;dfe79039-5a08-4d0a-b4ee-9534625c8654\u0026#34;, \u0026#34;Depth\u0026#34;: 1, \u0026#34;InitiatingUserAzureActiveDirectoryObjectId\u0026#34;: \u0026#34;00000000-0000-0000-0000-000000000000\u0026#34;, \u0026#34;InitiatingUserId\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;InputParameters\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;Target\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;Entity:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Attributes\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;territorycode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;address2_freighttermscode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;address2_shippingmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;isprivate\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;followemail\u0026#34;, \u0026#34;value\u0026#34;: true }, { \u0026#34;key\u0026#34;: \u0026#34;donotbulkemail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotsendmm\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;emailaddress1\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;john@domain.com\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;jobtitle\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Manager\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;customertypecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;fullname\u0026#34;, 
\u0026#34;value\u0026#34;: \u0026#34;John Doe\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;isautocreate\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;ownerid\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;systemuser\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;isbackofficecustomer\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotbulkpostalmail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotpostalmail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotemail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;statecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 0 } }, { \u0026#34;key\u0026#34;: \u0026#34;address2_addresstypecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;donotphone\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;createdon\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;/Date(1598771723000)/\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;transactioncurrencyid\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;785af3f0-d976-ea11-a812-000d3a86a586\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;transactioncurrency\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;contactid\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;haschildrencode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;modifiedby\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;systemuser\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;leadsourcecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;statuscode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: 
\u0026#34;modifiedonbehalfby\u0026#34;, \u0026#34;value\u0026#34;: null }, { \u0026#34;key\u0026#34;: \u0026#34;preferredcontactmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;lastname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Doe\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;firstname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;John\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;createdby\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;systemuser\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;educationcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;yomifullname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;John Doe\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotfax\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;merged\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;customersizecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;marketingonly\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;owningbusinessunit\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;f64c9d1f-d076-ea11-a812-000d3a86a586\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;businessunit\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;shippingmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;creditonhold\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;modifiedon\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;/Date(1598771723000)/\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;participatesinworkflow\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;preferredappointmenttimecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;exchangerate\u0026#34;, \u0026#34;value\u0026#34;: 1.0 } ], \u0026#34;EntityState\u0026#34;: null, \u0026#34;FormattedValues\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;territorycode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: 
\u0026#34;address2_freighttermscode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;address2_shippingmethodcode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;isprivate\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;followemail\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotbulkemail\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotsendmm\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Send\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;customertypecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;isautocreate\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;isbackofficecustomer\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotbulkpostalmail\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotpostalmail\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotemail\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;statecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Active\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;address2_addresstypecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotphone\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;createdon\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;2020-08-30T07:15:23-00:00\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;haschildrencode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;leadsourcecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;statuscode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Active\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;preferredcontactmethodcode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Any\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;educationcode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotfax\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;merged\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;customersizecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;marketingonly\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;shippingmethodcode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;creditonhold\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;modifiedon\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;2020-08-30T07:15:23-00:00\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;participatesinworkflow\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { 
\u0026#34;key\u0026#34;: \u0026#34;preferredappointmenttimecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Morning\u0026#34; } ], \u0026#34;Id\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;contact\u0026#34;, \u0026#34;RelatedEntities\u0026#34;: [], \u0026#34;RowVersion\u0026#34;: \u0026#34;4198516\u0026#34; } } ], \u0026#34;IsExecutingOffline\u0026#34;: false, \u0026#34;IsInTransaction\u0026#34;: false, \u0026#34;IsOfflinePlayback\u0026#34;: false, \u0026#34;IsolationMode\u0026#34;: 1, \u0026#34;MessageName\u0026#34;: \u0026#34;Create\u0026#34;, \u0026#34;Mode\u0026#34;: 1, \u0026#34;OperationCreatedOn\u0026#34;: \u0026#34;/Date(1598771723000+0000)/\u0026#34;, \u0026#34;OperationId\u0026#34;: \u0026#34;b1b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;OrganizationId\u0026#34;: \u0026#34;9c5d5db0-47c7-4741-aa25-08eb4cdf59a3\u0026#34;, \u0026#34;OrganizationName\u0026#34;: \u0026#34;orgad623a9e\u0026#34;, \u0026#34;OutputParameters\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;id\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34; } ], \u0026#34;OwningExtension\u0026#34;: { \u0026#34;Id\u0026#34;: \u0026#34;aa45df79-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;sdkmessageprocessingstep\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null }, \u0026#34;ParentContext\u0026#34;: { \u0026#34;BusinessUnitId\u0026#34;: \u0026#34;f64c9d1f-d076-ea11-a812-000d3a86a586\u0026#34;, \u0026#34;CorrelationId\u0026#34;: \u0026#34;dfe79039-5a08-4d0a-b4ee-9534625c8654\u0026#34;, \u0026#34;Depth\u0026#34;: 1, \u0026#34;InitiatingUserAzureActiveDirectoryObjectId\u0026#34;: \u0026#34;00000000-0000-0000-0000-000000000000\u0026#34;, \u0026#34;InitiatingUserId\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;InputParameters\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;Target\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;Entity:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Attributes\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;emailaddress1\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;john@domain.com\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;jobtitle\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Manager\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;lastname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Doe\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;firstname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;John\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;creditonhold\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotpostalmail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotfax\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotphone\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotbulkemail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;followemail\u0026#34;, \u0026#34;value\u0026#34;: true }, { \u0026#34;key\u0026#34;: \u0026#34;donotemail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;preferredcontactmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: 
\u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;statuscode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;donotbulkpostalmail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;transactioncurrencyid\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;785af3f0-d976-ea11-a812-000d3a86a586\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;transactioncurrency\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;ownerid\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;systemuser\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;preferredappointmenttimecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;customersizecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;address2_shippingmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;address2_freighttermscode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;educationcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;isautocreate\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;leadsourcecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;shippingmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;participatesinworkflow\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;marketingonly\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;territorycode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: 
\u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;isprivate\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;isbackofficecustomer\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;merged\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotsendmm\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;address2_addresstypecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;customertypecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;haschildrencode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;contactid\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34; } ], \u0026#34;EntityState\u0026#34;: null, \u0026#34;FormattedValues\u0026#34;: [], \u0026#34;Id\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;contact\u0026#34;, \u0026#34;RelatedEntities\u0026#34;: [], \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;SuppressDuplicateDetection\u0026#34;, \u0026#34;value\u0026#34;: false } ], \u0026#34;IsExecutingOffline\u0026#34;: false, \u0026#34;IsInTransaction\u0026#34;: false, \u0026#34;IsOfflinePlayback\u0026#34;: false, \u0026#34;IsolationMode\u0026#34;: 1, \u0026#34;MessageName\u0026#34;: \u0026#34;Create\u0026#34;, \u0026#34;Mode\u0026#34;: 1, \u0026#34;OperationCreatedOn\u0026#34;: \u0026#34;/Date(1598771723000+0000)/\u0026#34;, \u0026#34;OperationId\u0026#34;: \u0026#34;b1b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;OrganizationId\u0026#34;: \u0026#34;9c5d5db0-47c7-4741-aa25-08eb4cdf59a3\u0026#34;, \u0026#34;OrganizationName\u0026#34;: \u0026#34;orgad623a9e\u0026#34;, \u0026#34;OutputParameters\u0026#34;: [], \u0026#34;OwningExtension\u0026#34;: { \u0026#34;Id\u0026#34;: \u0026#34;aa45df79-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;sdkmessageprocessingstep\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null }, \u0026#34;ParentContext\u0026#34;: null, \u0026#34;PostEntityImages\u0026#34;: [], \u0026#34;PreEntityImages\u0026#34;: [], \u0026#34;PrimaryEntityId\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;PrimaryEntityName\u0026#34;: \u0026#34;contact\u0026#34;, \u0026#34;RequestId\u0026#34;: \u0026#34;bda24fe5-1f23-4c96-a4ab-34739a2f5628\u0026#34;, \u0026#34;SecondaryEntityName\u0026#34;: \u0026#34;none\u0026#34;, \u0026#34;SharedVariables\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;IsAutoTransact\u0026#34;, \u0026#34;value\u0026#34;: true }, { \u0026#34;key\u0026#34;: \u0026#34;DefaultsAddedFlag\u0026#34;, \u0026#34;value\u0026#34;: true }, { \u0026#34;key\u0026#34;: \u0026#34;ChangedEntityTypes\u0026#34;, 
\u0026#34;value\u0026#34;: [ { \u0026#34;__type\u0026#34;: \u0026#34;KeyValuePairOfstringstring:#System.Collections.Generic\u0026#34;, \u0026#34;key\u0026#34;: \u0026#34;contact\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Update\u0026#34; } ] } ], \u0026#34;Stage\u0026#34;: 30, \u0026#34;UserAzureActiveDirectoryObjectId\u0026#34;: \u0026#34;00000000-0000-0000-0000-000000000000\u0026#34;, \u0026#34;UserId\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34; }, \u0026#34;PostEntityImages\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;AsynchronousStepPrimaryName\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;Attributes\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;fullname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;John Doe\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;contactid\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34; } ], \u0026#34;EntityState\u0026#34;: null, \u0026#34;FormattedValues\u0026#34;: [], \u0026#34;Id\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;contact\u0026#34;, \u0026#34;RelatedEntities\u0026#34;: [], \u0026#34;RowVersion\u0026#34;: null } } ], \u0026#34;PreEntityImages\u0026#34;: [], \u0026#34;PrimaryEntityId\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;PrimaryEntityName\u0026#34;: \u0026#34;contact\u0026#34;, \u0026#34;RequestId\u0026#34;: \u0026#34;bda24fe5-1f23-4c96-a4ab-34739a2f5628\u0026#34;, \u0026#34;SecondaryEntityName\u0026#34;: \u0026#34;none\u0026#34;, \u0026#34;SharedVariables\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;IsAutoTransact\u0026#34;, \u0026#34;value\u0026#34;: true }, { \u0026#34;key\u0026#34;: \u0026#34;DefaultsAddedFlag\u0026#34;, \u0026#34;value\u0026#34;: true } ], \u0026#34;Stage\u0026#34;: 40, \u0026#34;UserAzureActiveDirectoryObjectId\u0026#34;: \u0026#34;00000000-0000-0000-0000-000000000000\u0026#34;, \u0026#34;UserId\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34; } The main difference with events is how we consume them; that is, outside of the application, as opposed to inside. There are a variety of reasons why it may be desirable to do this:\nAs we saw when discussing plug-ins, there are some particular limitations that sandbox processing can impose on your custom code - such as the restricted use of third party libraries and the 2-minute maximum execution time. However, by processing these operations external from Microsoft Dataverse, developers can circumvent these restrictions while still benefitting from working with the same type of information typically available as part of a standard plug-in. For situations where your environment is processing hundreds or even thousands of record changes each hour, that then need to be processed asynchronously, events provide the most streamlined mechanism for achieving this. Also, it helps to reduce the reliance on the platform and the applications Asynchronous service in performing complex processing of these requests; instead, they can be straightforwardly \u0026ldquo;passed on\u0026rdquo; to a dedicated service responsible for this. It may be necessary to immediately trigger an external endpoint or API as soon as a user creates, updates or deletes a table row. By using events alongside Webhooks (more on this shortly), developers have a streamlined mechanism to meet this requirement. 
With the recent changes announced around API request and service protection limits at the platform level, developers may struggle to use the traditional plug-in assembly route to process complex operations. As such, we can realise benefits by moving this processing outside of the application and, as part of this, avoid hitting any nasty error messages resulting from an overage in the number of processed platform requests.
In most cases, you will typically use one of several different Microsoft Azure services when processing events received from the application. However, there are options available to integrate alongside other cloud platforms or systems. For the exam, focusing on and having an awareness of the Azure options will be essential.
Understanding Service Endpoints & the Azure Service Bus
There are two core concepts to understand alongside events - service endpoints and the Azure Service Bus:
Service Endpoints: This defines the location you wish to write your events out to for further processing. In pretty much all cases, you will want to use the Plug-in Registration Tool to create your service endpoint, but you can also deploy one programmatically via the Web API. You can also use a service endpoint to receive incoming requests back into Microsoft Dataverse for processing. Once defined, you must then register the appropriate steps that will trigger your endpoint request, much in the same way as defining a plug-in step (e.g. Update on Contact, Delete on Lead, etc.). An advantage with this is that we can include pre/post table images as part of each event payload. However, the critical thing to remember is that we must specify these as asynchronous operations; synchronous calls are not allowed. In most cases, your service endpoint will target an Azure Service Bus queue, but you can also configure an Azure Event Hubs endpoint instead.
Azure Service Bus: Depending on the type of integration you are attempting to perform, the synchronous (i.e. immediate, blocking) processing of information may not be needed or desirable. This could be down to the simple fact that the number of potential requests to process will be too high. This is where Azure Service Bus comes into the equation, by offering a decoupled, asynchronous mechanism to receive, analyse and route multiple requests to an intended destination - whether this is a database, another endpoint or a storage location. The service bus will process all events it receives in order, holding onto each request for as long as is necessary before handing it off. As well as allowing for a predictable flow of information, it can also provide assurances from a failure standpoint; if, for example, the backend endpoint goes offline for whatever reason, messages will remain in the queue and resume processing as soon as the endpoint comes back online. Developers have flexibility over the type of Azure Service Bus listener service to implement, which Microsoft outlines in this article, but the most common scenario is to use a queue. We can also define the format of the events that get written out to the Azure Service Bus but, in most cases, outputting these as JSON will be the recommended option.
Demo: Posting Microsoft Dataverse Events to Azure Service Bus
In this video, we'll walk through how to deploy an Azure Service Bus resource and configure a service endpoint to receive requests from Microsoft Dataverse:
Webhooks Overview
A webhook is a lightweight mechanism for integrating multiple web APIs.
It operates on a publish and subscribe model; namely, we publish an event out, and an external endpoint subscribes to each event, processing it as it sees fit. In their simplest form, webhooks are just standard HTTP requests that conform broadly to this pattern, so - even if they are a new concept - you should experience little difficulty in understanding them if you've previously worked with RESTful web APIs. Webhooks are commonplace these days, both in the Microsoft world and across other vendors as well. For example, Azure supports the ability to write out any log alert event as a webhook to an endpoint of your choosing.
From a Microsoft Dataverse perspective, developers can register custom webhooks to any endpoint of their choosing. We can handle authentication into these endpoints in one of four ways:
HttpHeader: Here, we must declare the appropriate header key/value pairs that will allow us to authenticate into the endpoint. HttpHeader is the option you will need to go for if you use Basic or OAuth 2.0 authentication via an authorization bearer token.
WebhookKey: For this option, the platform will append a query string parameter called code onto the URL, whose value then allows you to authenticate into the API. If your endpoint is an Azure Function, then this is a possible candidate option for you to consider using, as all requests targeting your function will expect this by default.
HttpQueryString: Best suited for endpoints that support shared access signature (SAS) authentication. Developers specify the appropriate key/value pairs in the same manner as the HttpHeader option; the main difference here is that the platform will instead add these values onto the endpoint URL as query parameters.
Anonymous: Finally, it is possible to call any endpoint that does not enforce authentication. To do this, ensure you select the HttpHeader authentication type and supply a single key/value pair containing anything you like; otherwise, you may get errors when registering the webhook.
Then, similar to working with service endpoints, we define the appropriate steps and pre/post images that will trigger the webhook call. The resulting operation will then initiate an HTTP POST request to your endpoint, containing the event data illustrated in the example earlier.
An important question arises around the usage of webhooks in comparison to the Azure Service Bus. Certainly, webhooks are the most attractive option to consider if you are integrating alongside non-Azure based services, your expected volume of API calls is low, or you need your request to execute synchronously. However, they will ultimately only be as reliable as the endpoint that you are contacting. Also, for high volume scenarios where we can tolerate a delay in processing, webhooks will likely fall over pretty quickly. When this occurs, the service endpoint and Azure Service Bus option represents your optimal choice instead.
For more information on how to work with webhooks, consult this Microsoft Docs article, which also provides some examples for you to work through.
Demo: Registering & Consuming a Microsoft Dataverse Webhook from an Azure Function
A common mechanism to work with webhooks is via an Azure Function. With this in mind, check out the below video, which will show you how to consume webhook events in this manner (a brief code sketch also follows below):
We're on the home stretch now, with only one more topic to look at before we wrap up the series.
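Before wrapping up, here is a minimal sketch of what such an Azure Function could look like. This is not the code from the video: the function name, the use of AuthorizationLevel.Function (which lines up with the WebhookKey option described above) and the Newtonsoft.Json dependency are all illustrative assumptions.

```csharp
// Minimal sketch: an HTTP-triggered Azure Function that receives a Dataverse webhook POST.
// The request body is the serialised RemoteExecutionContext shown in the sample payload earlier.
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

public static class DataverseWebhookReceiver
{
    [FunctionName("DataverseWebhookReceiver")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        // Read and parse the execution context posted by Dataverse.
        string body = await new StreamReader(req.Body).ReadToEndAsync();
        JObject context = JObject.Parse(body);

        log.LogInformation("Received {Message} on {Entity} ({Id})",
            (string)context["MessageName"],
            (string)context["PrimaryEntityName"],
            (string)context["PrimaryEntityId"]);

        // A 2xx response tells Dataverse the webhook call succeeded; anything else is treated as a failure.
        return new OkResult();
    }
}
```

Because webhook steps can run synchronously, keeping the logic inside the function short - or handing the payload off to a queue for later processing - helps avoid blocking the originating operation.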
Next time around, we\u0026rsquo;ll look at how you can enable some specific capabilities on your Microsoft Dataverse to support data synchronisation scenarios.\n","date":"2021-06-13T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-publishing-and-consuming-events/","title":"Exam PL-400 Revision Notes: Publishing and Consuming Events"},{"content":"Welcome to the seventeenth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. Last week, we took a deep dive look into custom connectors, a powerful feature within Power Automate and canvas Power Apps that allows us to expose our bespoke APIs for consumption within these services. Today, we finish off our discussion on the Extend the platform area of the exam by discussing how we can Use platform APIs and Process workloads. For these subjects, Microsoft expects candidates to demonstrate knowledge of the following:\nUse platform APIs\ninteract with data and processes by using the Dataverse Web API or the Organization Service implement API limit retry policies optimize for performance, concurrency, transactions, and batching query the Discovery service to discover the URL and other information for an organization perform entity metadata operations with the Web API perform authentication by using OAuth Process workloads\nprocess long-running operations by using Azure Functions configure scheduled and event-driven function triggers in Azure Functions authenticate to the Power Platform by using managed identities When we refer to the Web API, we mean the one offered out by Microsoft Dataverse, which allows developers to execute various operations targeting table rows, organisational settings or different customisation settings. As a powerful tool in a developer\u0026rsquo;s arsenal when performing integrations, having a good understanding of the API is beneficial not just for exam PL-400 but also more generally. With this in mind, let\u0026rsquo;s look at what it is and the types of things we can do with it.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nWeb API Overview There will always be situations where you need to perform complex integration activities involving Microsoft Dataverse. Wherever possible, you should be considering how to use tools such as Power Automate flows to achieve these needs, as they provide a much simpler, streamlined mechanism for accommodating what may, at first look, appear to be a problematic integration between two systems. Despite the capabilities in Power Automate flows, they do have their limits. For example, if you\u0026rsquo;re developing a bespoke .NET application that needs to update data into the system, having to pass this through a flow can make your solution unnecessarily verbose. Also, this type of implementation has several security concerns, as you\u0026rsquo;d need to expose an HTTP endpoint with a simplified security layer. There may also be situations where you need to detect various metadata properties from your Microsoft Dataverse tables so that you can replicate these into an external system that\u0026rsquo;s synchronising data. 
For this situation, and for where a Power Automate flow just won't cut it, the Web API represents our most optimal route for interacting with the application.
The main benefit of the Web API is that it utilises an open standard - namely, the Open Data Protocol (OData) version 4.0 - which can be consumed from a wide variety of different programming languages and tools. Developers can use it to perform RESTful API requests, using the standard HTTP methods they will already be familiar with. What's more, working with the Web API does not require a complicated set of integrated development environments (IDEs); indeed, we can run many Web API requests using just a modern internet browser. The Web API supports pretty much every type of CRUD (Create, Read, Update, Delete) operation and, where necessary, requests conform to the established security model that Microsoft Dataverse provides us. For example, if the user calling the API does not have delete permissions on the Contact table, then this operation will be blocked if attempted. In short, the Web API is a powerful tool at your disposal when performing deep integrations between external systems or in meeting requirements that you cannot achieve via other features available within the Power Platform.
Handling Authentication
As with any Software as a Service (SaaS) solution, developers will need to provide a valid set of authentication credentials if they wish to work with the Web API. Much like how we access the application via the user interface, Microsoft controls Web API access via Azure Active Directory (AAD) and, more accurately, through the OAuth 2.0 protocol. As such, developers have flexible means to authenticate into the Web API, which can also satisfy any security concerns. All requests going into the Web API require a valid access token generated from AAD. To create this, developers must do a few things within their AAD tenant, including setting up an App Registration. From there, additional configuration may be required, depending on your scenario:
If you wish to use an interactive login prompt with your Microsoft 365 credentials, ensure that you have enabled implicit flow on your App Registration and set your grant_type to implicit.
To authenticate using an Application User account, you will need to generate a secret for your App Registration and perform the appropriate setup steps within the classic Dynamics 365 interface. Then, use the grant_type value of client_credentials. I've discussed the process and advantages of using an Application User account previously on the blog. They should always be your first port of call over using non-interactive user accounts.
Microsoft provides several different Azure Active Directory Authentication Library (ADAL) client libraries, which cover various platforms and provide a streamlined mechanism for generating the appropriate access tokens for authenticating. Use these to your advantage.
Demo: Authenticating into the Microsoft Dataverse Web API
To better understand the steps involved when authenticating into the Web API using the implicit grant flow, check out the video below, where I take you through each stage:
Using the Discovery URL
For most Microsoft Dataverse deployments, an organisation will typically have multiple environments set up on the same tenant, all of which meet specific purposes - perhaps one environment for testing, another for production and a backup of production for upgrade testing.
Developers, therefore, sometimes need to inspect and determine the correct environment details that they want to connect to. Also, it may be desirable for your bespoke application to automatically provide the list of all available environments to a user for selection, based on the user's credentials alone, rather than requiring a URL that the user may not know.
To address both of these needs, Microsoft provides developers with two specific Discovery URLs that we can use to interrogate details about all instances that the authenticated user can access. The first of these is the global discovery URL. This URL is the recommended one that developers should be using from March 2020 onwards, mainly because, if you have a multi-region deployment of Microsoft Dataverse, details of these instances will also be returned. The URL for this is as follows:
https://globaldisco.crm.dynamics.com/api/discovery/v2.0/Instances
By using a valid access token and accessing this endpoint via a GET request, you can list full details for all instances the user account has access to. And, because it is an OData endpoint, we can execute specific queries to return just the information we need.
The second URL available is known as the regional discovery URL. As its name indicates, each of these is scoped to a specific Microsoft Dataverse geographic location and, as such, will only return details of organisations within that particular region. Other than that, the endpoint is similar to the global one but does return a reduced subset of data. In March 2020, Microsoft announced the deprecation of this URL, and it has since been removed entirely.
Demo: Working with the Microsoft Dataverse Global Discovery Endpoint
In this next video, I'll show you how to work with the Global Discovery URL to return information relating to your Microsoft Dataverse instances:
Working with the Web API
After discovering the URL for your organisation, you can then determine the precise Web API endpoint that will accept your requests. The format of this URL will generally resemble the following:
https://<Org Name>.api.<Region>.dynamics.com/api/data/v9.1/
Microsoft provides some further details that may assist you in constructing this for your specific scenario.
Much like the discovery URLs, the Web API is a fully compliant OData v4 endpoint, providing us with a great degree of flexibility in terms of the types of requests it will accept. For example, the following URL, issued as a GET request, would return the top 25 Account rows in the system, ordered by the Created On attribute:
https://<Org Name>.api.<Region>.dynamics.com/api/data/v9.1/accounts?$top=25&$orderby=createdon
Having a good awareness of all possible types of operations supported by the Web API will be essential if you plan to take the PL-400 exam. So let's explore some of the supported operation types:
Creating Rows: Adding new rows of any table type that the user calling the API has permission to create is supported. We can also create related table rows as part of the same operation, enforce any relevant duplicate detection rules and return full details of any new table row to the caller.
Retrieving Rows: As we have seen already, we can return multiple different table rows, but also specific ones as well.
We can even enable some valuable options, such as returning formatted values for choices/lookups, returning extended properties from related rows (such as a specific attribute) or returning data based on an alternate key value.
Update or Delete Rows: These do pretty much as you'd expect them to, but you have some convenient options available that allow you to update or remove the value of a single attribute. Also, you can perform Upsert operations as well, i.e. insert the row if it does not exist; otherwise, update it.
Perform Association/Disassociation Actions: Rather than performing an Update operation, you can instead call specific methods on the API to create or remove table relationships.
Merge Rows: This action allows us to merge Account, Contact, Case, and Lead rows via the Web API, using the same mechanism available within the user interface. Other table types are not supported.
Call Functions: Microsoft Dataverse exposes a range of different bound and unbound functions, such as the CalculateRollupField function, most of which can be called directly from the Web API.
Call Actions: We touched upon Actions briefly before in the series when looking at plug-ins. Similarly to Functions, we can call bound and unbound actions through the Web API.
These provide a summary of the most common types of operations you might need to perform. It is also possible to carry out user impersonation, detect duplicate data, retrieve data from predefined table views and - as we will see shortly - perform batch operations too. Developers can also use the Web API to detect various metadata properties for tables, columns and relationships, which we'll touch upon in a second. All this detail might seem like a lot to take in for the exam itself but, considering that this portion of the exam will work out to be approximately 5% of what you are ultimately assessed on, don't worry yourself too much studying the minute detail of each operation type.
Demo: Performing Common Operations with the Microsoft Dataverse API
To get a good flavour of how to work with the Web API to perform core operations and also retrieve table metadata, check out the video below (a brief code sketch illustrating a typical authenticated retrieve also follows shortly):
Using the Web API for Batch Operations
For those coming from a SQL database background, you will undoubtedly be familiar with the concept of atomicity when executing a set of SQL statements. For example, let's assume we have 3 Transact-SQL (T-SQL) statements - 2 INSERT statements and 1 UPDATE. All three of these statements must complete successfully when run; otherwise, we could leave our database in an undesirable state. Now, we could wrap some logic around this that deletes the 2 INSERTed records if our UPDATE statement fails, but this is like using a sledgehammer to crack a walnut. Instead, we can wrap all of the queries into a single transaction. If one statement fails, then all potential changes are rolled back, and we return to the same state the database was in before our SQL statements even touched our database. This occurrence is, in effect, an atomic transaction, a set of operations that must all succeed in unison or fail together.
Because Microsoft Dataverse uses SQL Server as its underlying database, transactions apply in many situations. We saw this already when looking at the event pipeline and, in particular, how records return to their previous state if a plug-in hits an error for whatever reason.
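As promised above, here is a minimal end-to-end sketch that pulls together the authentication and retrieval plumbing described so far, before we move on to the batch endpoint itself. The environment URL, tenant, client ID and secret are hypothetical placeholders, and the Microsoft.Identity.Client (MSAL) package is used here as the successor to the ADAL libraries mentioned earlier - an illustrative choice, not something the post or exam mandates.

```csharp
// Minimal sketch: acquire a token via the OAuth 2.0 client credentials flow (Application User)
// and issue a simple OData retrieve against the Web API. Not production code.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class WebApiQuickstart
{
    static async Task Main()
    {
        var environmentUrl = "https://myorg.crm11.dynamics.com";      // hypothetical environment
        var tenantId = "<tenant id>";
        var clientId = "<app registration client id>";
        var clientSecret = "<app registration secret>";

        // Client credentials flow - no interactive prompt required.
        var app = ConfidentialClientApplicationBuilder.Create(clientId)
            .WithClientSecret(clientSecret)
            .WithAuthority($"https://login.microsoftonline.com/{tenantId}")
            .Build();
        var auth = await app.AcquireTokenForClient(new[] { $"{environmentUrl}/.default" })
            .ExecuteAsync();

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", auth.AccessToken);
        client.DefaultRequestHeaders.Add("OData-Version", "4.0");
        client.DefaultRequestHeaders.Add("OData-MaxVersion", "4.0");
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));

        // Top 25 accounts, ordered by Created On, returning only the columns we need.
        var response = await client.GetAsync(
            $"{environmentUrl}/api/data/v9.1/accounts?$select=name,createdon&$top=25&$orderby=createdon");
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```

The same HttpClient setup - a bearer token plus the OData headers - applies to every other operation type listed above, including the batch requests discussed next.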
To help us control how the platform performs multiple actions within a transaction, the Web API exposes the capability to run batch operations, often referred to as a changeset request. These are purely atomic in their nature and execution, allowing developers to execute up to 1000 related requests as part of the same transaction. And, if anything goes wrong as part of this, the database reverts to its previous state. The construction of a batch request must conform to a specific multipart MIME text format and be sent across to the following URL:
https://<Org Name>.api.<Region>.dynamics.com/api/data/v9.1/$batch
Pretty much all of the operations discussed previously are supported as part of a batch request, and they also support some useful features such as parameters. For situations where you need to process a complex set of changes from an external system, they are a useful feature to consider using.
Demo: Performing Batch Operations using the Web API
Given the complexity around constructing batch operations, the last video below goes through this subject in detail, explaining how to build a simple changeset request and execute this against the Web API:
General Performance Tips
When working with any API, it is always essential to model your requests to perform optimally. The Microsoft Dataverse Web API is no different in this regard, so keep in mind the following as you start working with it:
Take advantage of the various OData query parameters to limit the number of rows returned when performing retrievals. Where possible, use a $filter parameter and restrict the number of rows returned via the $top or $skip options to page large result sets. You should also use $select query parameters to bring back only the columns you need.
When attempting to expand related table rows via a $expand request, keep in mind the performance impact this has and the hard limit of 10 imposed by the platform.
API requests are subject to the throttling and request limits imposed by the platform more generally. Be sure to read and understand the relevant Docs articles, explaining both the request limits/allocations and service protection limits that Microsoft imposes at the tenant level. These will typically be dictated based on the number of licenses you have purchased on the Microsoft 365 portal.
There are several different header values that you should always include as part of your requests, which I've outlined below. Doing this helps to prevent any ambiguity, should new versions of the endpoint be released in future:
Accept: application/json
OData-Version: 4.0
OData-MaxVersion: 4.0
As we've seen previously in the blog, when running client-side JavaScript from model-driven app forms, a specific method is exposed out to allow us to execute Web API requests. You should always use this method in this context and never perform the types of operations described in this article from client-side JavaScript/TypeScript form functions.
Using Azure Functions with Microsoft Dataverse
As we saw when we looked at plug-ins earlier in the series, plug-ins give us a helpful capability for executing code targeting the Dataverse platform and, potentially, performing batch operations. However, as noted in this post, the main limitation with plug-ins has always been the fact that they execute in sandbox mode and the 2-minute maximum execution time.
This restriction makes plug-ins impractical to utilise as part of long-running operations that may need to be performed against the platform. As such, we must consider using other tools instead. In these situations, we can leverage some of the capability available to us in Microsoft Azure and, specifically, Azure Functions to help us out.
Azure Functions are designed to help us run small pieces of code ("Functions") that run within serverless environments on Azure. They have numerous benefits, including:
Multi-language support, such as C#, Python, JavaScript and PowerShell.
Various triggering options, such as based on an HTTP call, a timer or a new item entering a service bus queue.
The ability to scale dynamically, based on the workloads being processed by your Function.
Flexible charging options - either pay for dedicated capacity or be billed based on actual compute consumption for your app.
Support for other Azure services, such as Virtual Network integration.
The ability to enable a managed identity for your Function App, which can then be granted privileges to access your Dataverse environment as an Application User.
For long-running batch operations, they are an ideal candidate to consider. Keep in mind that, to work with the various Microsoft Dataverse SDK libraries leveraging C#, we need to ensure that we run our Function using the .NET Framework stack and Functions runtime version 1. Otherwise, you may find it challenging to write code that you would be familiar with from a plug-in standpoint 😉
And with that, we come to the end of the Extend the platform area of the PL-400 exam. Next time, we will start diving into some of the integration options available within the platform as we review how to publish and consume events from Microsoft Dataverse.
","date":"2021-06-06T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-working-with-the-microsoft-dataverse-web-api-and-processing-workloads/","title":"Exam PL-400 Revision Notes: Working with the Microsoft Dataverse Web API and Processing Workloads"},{"content":"Welcome to the sixteenth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. In the last post, we saw how developers could leverage C# code as part of plug-ins to implement complex business logic within Microsoft Dataverse. We can now examine another extensibility component within the Extend the platform section of the exam as we discuss how to Create custom connectors. For this, Microsoft expects candidates to demonstrate knowledge of the following subjects:
Create custom connectors
create a definition for the API configure API security use policy templates to modify connector behavior at runtime expose Azure Functions as custom connectors create custom connectors for public APIs by using Postman
Depending on the type of system you wish to integrate alongside the Power Platform, a custom connector may suit your needs, thereby allowing you to leverage an existing API you have constructed or an entirely new one. What's more, developers can then take these custom connectors and expose them out to other organisations or perhaps even to every other Power Platform user the world over. In short, they are a compelling feature to leverage.
So let's take a look at what they are all about and how you can get started with them.
As with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well in this exam.
Custom Connectors Overview
As we have seen when evaluating both canvas Power Apps and Power Automate flows, developers can use more than 325 standard connectors, covering various Microsoft and other third-party systems, such as Salesforce, SAP and Oracle Database. This should give you everything you need to get disparate systems talking to each other within the Power Platform… in most situations. Custom connectors are there to provide additional headroom for when this is not possible to achieve. Powered by Azure API Management underneath the hood, custom connectors provide a guided, visual mechanism of exposing an entirely custom API, alongside its various operations, as an additional connector for the following services:
Canvas Power Apps
Power Automate flows
Logic Apps
Developers can then utilise these in much the same manner as the other connectors discussed previously. For example, you can define trigger actions from your API, pull in entire datasets or perform core operations, such as updating an existing record. The capabilities within your current API ultimately dictate all of this, so, in some situations, developers may need to author a new API themselves and use services such as Azure Functions or Azure Web Apps to host it. Further discussion on how to do this is beyond the scope of this blog post and, indeed, the exam itself; candidates should instead focus their attention on the various supported options for connectors and - most crucially - the outline steps for creating one, which the below diagram summarises well:
The sections that follow will dive into these steps in further detail…
Supported Scenarios
…but first, we should touch upon some of the situations where you may need to consider using a custom connector:
Integrating with On-Premises APIs: As custom connectors fully support using the on-premises data gateway, they provide the most secure and streamlined mechanism for making any internal APIs accessible for use within your cloud systems.
Incorporating REST / SOAP APIs: It is worth noting that custom connectors only support APIs of these particular types. Specifically, canvas Power Apps and Power Automate flows can only work with HTTP REST APIs; Logic Apps support SOAP APIs too. For situations where your existing API does not meet these requirements, it will be impossible for you to use it with a custom connector.
Sharing an API within an Organisation: For situations where you envision only contacting an API a single time within, let's say, a Power Automate flow, using an HTTP action step may be the most prudent (and fastest) option to consider. Going beyond this, if you think you will need to share out or utilise the same action steps across multiple flows and, also, you wish to provide a simplified means of accessing the same set of actions, a custom connector will fit your needs.
ISV Solution Development: Many of the custom connectors available today have their origin as internally developed ones, which have then been certified by Microsoft and published for general use.
By doing this, ISV developers can help to drive further usage of their existing solutions and even introduce them to an entirely new audience.
Reviewing Custom Connector Creation Options
Developers have a few different options available to them when creating a custom connector. But, almost always, you will start from within the Power Apps Maker Portal. Then, by navigating to Data -> Custom Connectors, you can locate the appropriate option to kick off their creation:
The best thing about this is that we don't necessarily need to reinvent the wheel, provided we have existing definitions of our APIs already built out. At the time of writing this post, developers can utilise the following, if available:
Postman: If you've never used this tool before and find yourself regularly poking into numerous APIs, then this tool could help you out. In short, the app provides you with a local "sandbox" to mock up, execute and inspect the results of various HTTP actions. It has numerous valuable features, such as being able to store requests in collections, handling multiple different types of authentication, and providing online synchronisation capabilities. From a custom connector standpoint, you can use Postman to build out the various action requests that your API exposes and then import these as a collection file into the Power Apps Maker Portal. Note as part of this that only V1 Postman collections are supported. There's a great article on the Postman website that shows you how to get started with collections.
OpenAPI: Similar in some respects to a Postman collection, an OpenAPI definition is instead intended as a complete declaration of all the capabilities stored within an API, using an open standard. Developers would typically generate this for their API to provide other developers with a valuable resource when working with the API in question. Thankfully, if you use Azure API Management as the backend for your API, you can quickly generate the appropriate specification using the Azure Portal. It is also possible to build this directly into your ASP.NET Core app if that is what your API is using (see the short sketch after this list); I'm sure it's possible to do the same for other languages/stacks too, but I'll leave that to you to find out. 😀
GitHub: Best for situations where you have built out a connector already or wish to leverage an existing, certified connector, you can very quickly bring it in from a GitHub repository of your choosing.
For the options listed in the screenshot above, it's worth focusing on each of these in more detail:
Create from blank: Using this option, you must define each of the individual elements of your API, such as its security, actions and triggers. If you are unsure where to start and don't have an appropriate Postman/OpenAPI definition at your disposal, then this will be the most suitable option to choose.
Import an OpenAPI file: With this, you can import the OpenAPI JSON definition file from your local computer.
Import an OpenAPI from URL: If your definition is instead available as part of a publicly accessible URL, use this option to import it instead.
Import a Postman Collection: This choice bears some similarities to the OpenAPI options, as it will let you import a JSON collection file generated from Postman. As mentioned earlier, make sure you export this as a V1 definition from the Postman app.
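On the ASP.NET Core point above, here is a minimal sketch of what generating a definition from your own app could look like. The Swashbuckle.AspNetCore package and the single endpoint shown are illustrative assumptions, not something this post or the exam prescribes.

```csharp
// Minimal ASP.NET Core app (assuming the Swashbuckle.AspNetCore NuGet package) that serves a
// generated API definition at /swagger/v1/swagger.json, which could then be brought in via the
// "Import an OpenAPI from URL" option. The /contacts endpoint is purely illustrative.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddEndpointsApiExplorer(); // discover minimal API endpoints
builder.Services.AddSwaggerGen();           // register the definition generator

var app = builder.Build();

// Custom connectors have historically expected the older 2.0 (Swagger) format, hence SerializeAsV2.
app.UseSwagger(options => options.SerializeAsV2 = true);

app.MapGet("/contacts/{id}", (string id) =>
    Results.Ok(new { id, firstname = "John", lastname = "Doe" }));

app.Run();
```

Pointing the connector wizard at the resulting definition URL saves you from having to define each action by hand.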
Regardless of which option you select, you can then populate some standard settings for your connector, such as its display name, image and description. At this point, you can also override the URL scheme, the host URL and the base URL, if required:
Handling API Security
A custom connector can be secured using multiple mechanisms. These authentication profiles do not store sensitive credentials within the connector. Instead, they define the authentication options that users of the connector must set when using your custom connector for the first time. In this manner, it's possible to connect to multiple instances of your API on the same tenant, using different credentials. A custom connector can support the following four types of authentication methods:
No Authentication: Best for when you are exposing a publicly available API that requires no underlying authentication. This option, quite obviously, provides zero security for your API.
Basic Authentication: Utilising the same authentication method as defined in the HTTP standard, here you outline the user name and password parameter labels that users must provide for your custom connector. I would typically advise against using Basic Authentication wherever possible, as it doesn't afford the best security for your custom connector or API itself.
API Key: Here, you provide the details for an API key that the custom connector includes as part of either the header or as a query parameter on the URL itself. Many of the Azure Cognitive Services APIs use API keys as their authentication mechanism. API keys suffer from the same security failings as Basic Authentication and, as such, should be avoided unless necessary.
OAuth 2.0: The recommended and most secure option for your custom connector, and also one that integrates neatly alongside Azure Active Directory (AAD) and a variety of other services, such as Google and Facebook. To see an example of how to set this up using an Azure App Registration for an Azure Function, you can consult the following Microsoft Docs article. In short, though, using AAD OAuth 2.0 requires you to set up an app registration on the appropriate tenant, which then exposes out the details you can use when defining your profile.
Working with Policy Templates
If you've had some experience working with Azure API Management or have been following my blog religiously, this topic might feel somewhat familiar. To fine-tune how your custom connector ultimately interacts with your API, you can perform various manipulation activities as each action/trigger gets fired. These are brought together within a list of re-usable templates that cover common operations that may be necessary and, from an Azure API Management standpoint, map back to the various policy definitions that this service supports. The list below provides an overview of the types of actions policy templates support:
Set host URL: Using this, you can replace the URL for the request with an entirely new one.
Route request: Lets you route the action to a completely different path on the same URL from the one specified.
Set HTTP header: Allows you to append, skip or replace a header value as part of the API request. For example, you could supply a Cache-Control value to help improve performance when the custom connector uses your API.
Set query string parameter: Allows you to append, skip or replace a query parameter value as part of the API request.
For example, if your API is an OData endpoint, you might want to append a default $top query parameter to limit the number of returned results each time. At the time of writing this post, there are also a couple of additional policy templates available marked as Preview, which allow you to do things such as converting objects into arrays and vice-versa. Microsoft will not assess candidates on preview features within exams, so it is unlikely that you will need to know about these as part of your exam prep.
Demo: Creating a Custom Connector for an Azure Function
To help you better understand the process involved in building out a custom connector, check out the video below, where we will build out one that integrates alongside an Azure Function app:
That pretty much wraps it up for the basics on custom connectors, which should be all you need to know for the exam. In the next post, we'll look at how you can leverage the Microsoft Dataverse Web API to perform various operations and manage more complex workloads.
","date":"2021-05-30T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-building-a-power-platform-custom-connector/","title":"Exam PL-400 Revision Notes: Building a Power Platform Custom Connector"},{"content":"Welcome to the fifteenth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. In the previous post, we finished our discussion of the Extend the user experience exam area by evaluating command buttons and demonstrating how valuable the Ribbon Workbench tool is in helping us fine-tune aspects of a model-driven Power App's interface. This rounded off our discussion of the Extend the user experience area, meaning that we now move into a new section titled Extend the platform. This area has equal weighting (15-20%), and the first topic concerns how we Create a plug-in. Specifically, candidates must demonstrate knowledge of the following:
Create a plug-in
describe the plug-in execution pipeline design and develop a plug-in debug and troubleshoot a plug-in implement business logic by using pre-images and post-images perform operations on data by using the Organization service API optimize plug-in performance register custom assemblies by using the Plug-in Registration Tool develop a plug-in that targets a custom action message
Developers who have previously worked with Dynamics 365 or Dynamics CRM should have very little trouble getting to grips with plug-ins, as they have been a mainstay within these applications for well over a decade now. Notwithstanding this, it's always helpful to get a refresher on even the most familiar of topics 😉. With that in mind, let's dive in!
As with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well in this exam. I would also recommend that you have a good general knowledge of working with the C# programming language before approaching plug-in development for the first time. Microsoft has published a whole range of introductory material to help you learn the language quickly.
What is a Plug-in?
In the series so far, we\u0026rsquo;ve already touched upon the following functional tools that developers may leverage when dealing with complex business requirements:\nBusiness Rules: These provide an excellent mechanism for handling simple logic, targeting both model-driven app forms and platform-level operations. Power Automate: Using flows authored by this tool, you can typically go the extra mile compared with classic workflows, thereby allowing you to integrate multiple systems when processing asynchronous actions. JavaScript: For situations where Business Rules can\u0026rsquo;t meet your particular requirement, and you need specific logic to trigger as a user is working on a model-driven form, this is the best tool in your arsenal. These are great, but there may be situations where they are unsuitable due to the sheer complexity of the business logic you are trying to implement. Also, some of the above tools do not support the ability to carry out synchronous actions (i.e. ones that happen straight away, as the user creates, updates or deletes rows in the application). Finally, it may be that you need to work with some specific elements of the SDK that are not exposed out straightforwardly via alternative routes. In these situations, a custom-authored plug-in is the only solution you can turn to.\nSo what are they then? Plug-ins allow developers to write either C# or VB.NET code, which is then registered within the application as a .NET Framework class library (DLL) and executed based on specific conditions you specify within its configuration. For example, when a user creates a Contact, retrieve the details of the parent Account row and update the Contact to match. Developers will use Visual Studio when authoring a plug-in, typically using a class library project template. The Microsoft Dataverse SDK provides several modules that expose various operations that a plug-in can support. Some of the other things that plug-ins support include:\nBoth synchronous and asynchronous execution. Custom un-secure/secure configuration properties that can modify the behaviour of a plug-in at runtime. For example, by providing credentials for a specific environment to access. Pre and Post Table (previously known as Entity) Images - snapshots of how a record and its related attributes looked before and after a platform-level operation occurs. For specific operations, such as Update, plug-ins can also support filtering attributes - basically, a list of defined columns that, when modified, will cause the plug-in to trigger. Being able to specify the execution order for one or multiple plug-in steps. This capability can be helpful when you need to ensure a set of steps execute in your desired order. These days, thanks to Business Rules and Power Automate, plug-ins have become less important, and, typically, you would want to avoid using them straight out of the gate. However, they still have a place and, when used appropriately, become the only mechanism you can resort to when working with complicated business logic.\nUnderstanding Messages, the Execution Pipeline \u0026amp; Steps Before we start diving into building a plug-in for the first time, it\u0026rsquo;s prudent to provide an overview of the three core concepts that every plug-in developer needs to know:\nMessages: These define a specific, platform-level operation that the SDK exposes out. Some of the more commonly used Messages within the application include Create, Update or Delete. 
Some system tables may have their own set of unique Messages; CalculatePrice is an excellent example of this. From a plug-in perspective, developers essentially \u0026ldquo;piggyback\u0026rdquo; onto these operations as they are performed and inject their custom code. Execution Pipeline: As a user triggers a particular message, the application processes it using a defined set of stages, known as the execution pipeline. These are discussed in detail in this Microsoft Docs article, but the key ones we need to be aware of are: PreValidation: At this stage, the database transaction has not started. Also, the application has not yet performed any appropriate security checks, potentially meaning that the operation may fail if the user does not have the correct security privileges. PreOperation: Here, the database transaction has already started. The platform knows at this stage that no security constraints will prevent the operation from completing, but the transaction may still fail for other reasons. PostOperation: Although the database transaction has still not completed at this stage, the core operation of the message (executed within the MainOperation stage) will almost certainly commit successfully, assuming no errors occur via a custom plug-in. A failure at any of these stages will cause the database transaction to roll back entirely, returning any affected row(s) to their original state. Now, from a plug-in perspective, the execution pipeline is exposed out to developers as stages where your custom code can execute. This can provide developers with a high degree of flexibility and capability when running their custom code. As a general rule of thumb, developers would use each of these stages under the following circumstances: PreValidation: Use this stage to perform checks to cancel the operation. PreOperation: This stage is handy for modifying any values before they hit the database. Doing this at this stage would also prevent triggering another platform Message. PostOperation: Use this stage for when you need to carry out additional logic not related to the current record or potentially provide additional information back to the caller after the platform completes the core operation. Developers will typically need to give some thought towards the execution pipeline and the most appropriate stage to select, based on the requirements being worked with. Steps: Simply writing a plug-in class and deploying out the library is not sufficient to trigger your custom logic. Developers must also provide a set of \u0026ldquo;instructions\u0026rdquo;, commonly known as Steps, that tell the application when and where to execute your custom code. It is here where both the Message and Execution Pipeline come into play, and you would always specify this information when creating a Step. Additional information you can specify here includes the execution order, whether the plug-in will execute synchronously or not and the display name of the Step when viewed within the application. We will see shortly how these topics come into play as part of using the Plug-in Registration Tool.\nBuilding Your First Plug-In: Pre-Requisites Before you start thinking about developing a plug-in, you need to make sure that you have a few things installed onto your development machine:\nVisual Studio: I would recommend using Visual Studio 2019 where possible. If you don\u0026rsquo;t have a Visual Studio/MSDN subscription, you can use the Community Edition instead. 
A Microsoft Dataverse Environment: Because where else are you going to deploy out and test your plug-in? 😀 Knowledge of C# / VB.NET: Attempting to write a plug-in for the first time without at least a basic grasp of one of these languages will impede your progress. Plug-in Registration Tool: To deploy your plug-in out, you will need access to this tool as well. We\u0026rsquo;ll cover this off later in the post. Demo: Creating a Microsoft Dataverse Plug-in using Visual Studio 2019 \u0026amp; C# The best way to learn how to create a plug-in is to see someone build one from scratch. In the YouTube video below, I talk through how to build a straightforward plug-in using Visual Studio 2019 and C#:\nFor those who would prefer to read a set of instructions, this Microsoft Docs article provides a separate tutorial you can follow instead.\nUsing the Plug-in Registration Tool Once you\u0026rsquo;ve written your first plug-in, you then need to consider how to deploy this out. In most cases, you will use the Plug-in Registration Tool to accomplish this. Available on NuGet, this lightweight application supports the following features:\nVaried access options when working with multiple Dataverse environments. The ability to register one or multiple plug-in assemblies. The registering of plug-in steps and images, including the various settings discussed earlier. Via the tool, you can install the Plug-in Profiler, an essential tool for remote debugging your plug-ins; more on this capability later. The Plug-in Registration Tool is also necessary for deploying other extensibility components, including Service Endpoints, WebHooks or Custom Data Providers. We will touch upon some of these later on in the series. For this topic area and the exam, you must have a good general awareness of deploying and updating existing plug-in assemblies.\nDemo: Deploying a Microsoft Dataverse Plug-in using the Plug-in Registration Tool In this next video, I\u0026rsquo;ll show you how to take the plug-in developed as part of the previous video and deploy it out using the Plug-in Registration Tool:\nThere is also a corresponding Microsoft Docs tutorial that covers these steps too.\nDebugging Options for Plug-ins Plug-ins deployed out into Microsoft Dataverse must always run within sandbox execution mode. As a result, this imposes a couple of limitations (some of which I\u0026rsquo;ll highlight in detail later on), the main one being that it greatly hinders the ability to debug deployed plug-ins easily. To get around this, Microsoft has provided two mechanisms that developers can leverage:\nPlug-in Trace Logging: Using this, developers can write out custom log messages into the application at any point in their code. This can be useful in identifying the precise location where a plug-in is not working as expected, as you can output specific values to the log for further inspection. You can also utilise them to provide more accurate error messages for your code that you would not necessarily wish to show users as part of a dialog box. Getting to grips with trace logging is easy - it\u0026rsquo;s just a few lines of code that you need to add to your project - and you can find out more about how to work with it on the Microsoft Docs site. Plug-in Registration Tool \u0026amp; Profiling: While trace logging is undoubtedly useful, there will be situations where you need something more. 
For example, it may become desirable to breakpoint code, inspect values/properties as they are processed, and determine when your plug-in hits specific conditions or error messages. The Profiler comes into play for these situations by allowing developers to \u0026ldquo;playback\u0026rdquo; their code execution using Visual Studio and the Plug-in Registration Tool. We mentioned the Plug-in Profiler tool earlier, which is a mandatory requirement and must be deployed to your instance first. From there, you can generate a profile file that allows you to rewind execution within Visual Studio. Developers will typically use both of these debugging methods in tandem to figure out issues with their code. The first option provides a friendly, discreet mechanism of inspecting how a plug-in has got on. The Profiler acts as the \u0026ldquo;nuclear\u0026rdquo; option when we cannot discern the problem easily via trace logging alone. Understanding the benefits/disadvantages of both and how to use them will be essential as part of your exam preparation. With this in mind, check out the two demo videos below that show you how to work with these features in-depth:\nDemo: Debugging a Microsoft Dataverse Plug-in using Trace Logging Demo: Debugging a Microsoft Dataverse Plug-in Using the Plug-in Registration Tool General Performance / Optimization Tips Anyone can build and deploy a plug-in, but it can take some time before you can do this well. Here are a few tips that you should always follow to ensure your plug-ins perform well when deployed out:\nKeep in mind some of the limitations as part of sandbox execution for your plug-ins, including: The 2-minute limit on execution time. The inability to use specific 3rd party DLLs, such as Newtonsoft.Json. Various restrictions around accessing operating system level information, such as directories, system state, etc. For situations where sandbox limitations will cause issues in executing your business logic, you will need to consider moving away from a plug-in and adopting another solution. Filtering attributes provide a great way of ensuring your code only executes when you need it to. You should always use these wherever possible. Make sure to disable any plug-in profiling and remove the Profiler solution once you are finished. Active profiles can considerably slow down performance, and the Profiler solution can also introduce unintended dependencies on core tables, causing difficulties when moving changes as part of a solution file. The Solution Checker provides an excellent mechanism for quality checking your code and can give some constructive recommendations on where your plug-in can be improved. Running this at least once before moving your plug-in out into other environments is highly recommended. Read through the following Microsoft Docs article and take care to follow the suggestions it outlines. Don\u0026rsquo;t Forget\u0026hellip;Custom Actions \u0026amp; Global Discovery Service Endpoint As Microsoft has included them in the exam specification, it\u0026rsquo;s worth talking about these two topics briefly. However, only a general awareness of them should be sufficient, and I wouldn\u0026rsquo;t devote too much of your revision time to them.\nWe\u0026rsquo;ve already looked at Messages in-depth and seen how they can act as a \u0026ldquo;gateway\u0026rdquo; for developers to bolt on specific logic when an application-level event occurs. Custom Actions allow you to take this a step further by creating SDK/API exposable, custom Messages. 
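From a plug-in standpoint, targeting one of these custom Messages works in much the same way as targeting Create or Update - you register a step against the custom message name and read its parameters from the execution context. Below is a minimal sketch only; the message name new_ApproveRecord, the class name, the assumption that the action is bound to a table (and so receives a Target EntityReference) and the new_approved column are all illustrative, not taken from the demo videos:

using System;
using Microsoft.Xrm.Sdk;

// Minimal sketch of a plug-in registered against a custom action message.
// The message name, Target parameter shape and column name are illustrative assumptions.
public class ApproveRecordPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // Guard against the step being registered on an unexpected message.
        if (!string.Equals(context.MessageName, "new_ApproveRecord", StringComparison.OrdinalIgnoreCase))
        {
            tracing.Trace("Unexpected message: {0}", context.MessageName);
            return;
        }

        // For an action bound to a table, the row it was invoked against arrives as the Target input parameter.
        if (context.InputParameters.Contains("Target") == false)
        {
            tracing.Trace("No Target parameter supplied.");
            return;
        }

        if (context.InputParameters["Target"] is EntityReference target)
        {
            tracing.Trace("new_ApproveRecord invoked for {0} ({1})", target.LogicalName, target.Id);

            // Late-bound update via the Organization service (the column name is made up for illustration).
            var update = new Entity(target.LogicalName, target.Id);
            update["new_approved"] = true;
            service.Update(update);
        }
    }
}

The same overall shape applies to the standard Messages covered earlier - only the registered message name and the contents of InputParameters change.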
For example, you could combine the Create/Update Messages of the Lead/Opportunity tables into a new message called SalesProcess. Plug-ins or operations targeting the Web API could then work with or trigger actions based on this. Custom Actions have been available within the application for many years now, and many developers argue that they are one of the most underrated features in Microsoft Dataverse. They are also fully supported for use as part of Power Automate flows. In short, they can be incredibly useful if you need to group multiple default messages into a single action that can then be called instead of each one.\nFinally, it is worth touching upon how developers use the Global Discovery Endpoint from a plug-in standpoint, but we must first outline the concepts of early binding and late binding. In the video demos above, I wrote all of the code using the late-binding mechanism. This means that instead of declaring an object for the specific table I wanted to work with (such as Contact), I instead used the generic Entity class and told the code which attributes I wanted to work with by specifying their logical names as strings. This is fine and gives me some degree of flexibility, but it ultimately means that I won\u0026rsquo;t detect any issues with my code (such as an incorrect field name) until runtime. Also, I have no easy way to tell how my table looks using Intellisense; instead, I must continuously refer to the application to view these properties. To get around this, we can use a mechanism known as early binding, where all of the appropriate table structures are generated within the project and referenced accordingly. The CrmSvcUtil.exe application provides a streamlined means of creating these early-bound classes, and, as you might have guessed, you need to use the Global Discovery Endpoint to generate these classes successfully. There have been plenty of debates online (wars, almost, with mounds of dead developers) regarding which mechanism is better. While early binding does afford some great benefits, it does add some overhead into your development cycle, as you must continuously run the CrmSvcUtil application each time your tables change within Microsoft Dataverse. All I would recommend is to try both options, identify the one that works best for you and, most importantly, adopt your preferred solution consistently.\nPlug-ins can help when modelling out complicated business logic that is impossible to achieve via other functional tools within the Power Platform. I hope this post has provided a good insight into their capabilities and helps you in your revision. In the next post, I\u0026rsquo;ll show you how you can extend Power Automate and Power Apps via custom API connectors, thereby allowing you to connect to a myriad of different systems.\n","date":"2021-05-23T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-building-deploying-debugging-plug-ins-using-c/","title":"Exam PL-400 Revision Notes: Building, Deploying \u0026 Debugging Plug-ins using C#"},{"content":"Data Engineering within Microsoft Azure is a vast subject. Long gone are the days where we had to concern ourselves with traditional tools such as SQL databases or stick to a single vendor/stack of our choosing. 
Now, to build the modern, cloud-first applications organisations require on Microsoft Azure, which typically involve processing lots of data, having a solid grasp of the following tools becomes necessary:\nSynapse Analytics - For situations where we need a data warehousing solution, Synapse Analytics is the natural candidate to consider. In addition, it very much behaves just like a traditional SQL Server database, meaning we know what to expect \u0026ldquo;out of the box\u0026rdquo;. Cosmos DB - Typically best for semi-structured data or data with no structure at all, Cosmos DB has numerous different APIs available, such as ones for SQL and MongoDB. This means it becomes straightforward to migrate our existing applications across into it and, in the process, scale up to run things on a global scale. Data Factory - A tool that I have a lot of fondness for, and one that is designed to help implement your Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) processes at scale. As the natural successor for SQL Server Integration Services, it also has native support for running packages authored by this tool as well. Databricks - For when Data Factory can\u0026rsquo;t perform the type of processing or transformation you need for your application, we can scale up fully managed compute environments using Databricks and execute simple or complex scripts using Python, Java or R. Storage - Typically, data engineers need to utilise blob, file or Data Lake storage alongside the aforementioned tools. Therefore, having a complete understanding of what\u0026rsquo;s available across the Azure stack is essential. Stream Analytics - Designed for situations where you have consistent application data that you need to process in real-time, Stream Analytics has various options available to help get your data where it needs to be. Previously, if we were looking to validate our skills in the above technologies, we would have to sit two exams to earn our cred and, in the process, a shiny new Azure Data Engineer Associate certification:\nExam DP-200: Implementing an Azure Data Solution Exam DP-201: Designing an Azure Data Solution To help simplify this journey, Microsoft will shortly retire these exams at the end of June 2021 and replace them with a single new exam instead - DP-203: Data Engineering on Microsoft Azure. The exam has just recently come out of public beta, meaning now is an excellent opportunity to go for it. As well as grasping the topics above, candidates also need to demonstrate expertise in:\nCore design concepts, including designing data storage, partitioning your data, and the types of physical data storage structures to leverage. Data ingestion mechanisms, leveraging a wide array of potential data formats, such as JSON, Parquet, CSV files and more. Identifying the best tool to use for the type of data you wish to consume. For example, Stream Analytics is the natural tool to consider if you are processing data from a set of Internet of Things (IoT) devices or similar. Securing the various storage and ingestion tools we may leverage on Azure, using role-based access controls, data encryption and Azure Active Directory (where appropriate). Choosing and utilising the best tool to monitor the various services we deploy out to Azure and manage common issues, such as pipeline failures within Azure Data Factory. To give you a perspective, I found a lot of this a real struggle when I first took the DP-200 and DP-201 exams last year. 
My comfort zone ends just as we start to move away from Data Factory, so while it was interesting to learn about new things like Synapse Analytics, it was a lot to take in. So much so that I busted out and failed the DP-200 exam the first time 😥. I finally managed to pass it in the end and, through some miracle, managed to get a passing grade in the DP-203 exam too. A miracle, I\u0026rsquo;m sure! 😅 But don\u0026rsquo;t let any of this dissuade you. We\u0026rsquo;ve got some excellent learning tools at our disposal via the Microsoft Learn site that provides up-to-date and relevant content to help you understand each topic, put it into practice and be in a great position to pass the exam. If you\u0026rsquo;re going for the DP-203 exam yourself, I hope you\u0026rsquo;ve found this post helpful and feel free to leave a comment below if you have any questions regarding it. 😀\n","date":"2021-05-16T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/getting-certified-in-azure-data-engineering-an-overview-of-the-dp-203-exam/","title":"Getting Certified in Azure Data Engineering: An Overview of the DP-203 Exam"},{"content":"Welcome to the fourteenth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. Last week, we took a deep-dive look into Power Apps Component Framework (PCF) controls and how we can use them to extend the user interface within a Power App. PCF controls work great for situations where we cannot utilise JavaScript form functions or a canvas Power App to provide a more bespoke, intuitive user experience with model-driven apps. As valuable as these control types are, they do not allow us to modify the behaviour of the various ribbon buttons available within model-driven Power Apps, an example of which can be seen below for the Lead table:\nIn most cases, these provide us with a range of functionality that is best left unmodified. However, there will be circumstances where you may need to add or remove ribbon buttons or tailor the behaviour of an existing button to perform a different action. All of these tasks fall neatly into the Create a command button function area of the PL-400 exam, where Microsoft expects candidates to demonstrate knowledge of the following topics:\nCreate a command button function\ncreate the command function design command button rules and actions edit the command bar by using the Ribbon Workbench manage dependencies between JavaScript libraries Using the right tools, it\u0026rsquo;s pretty straightforward to perform simple or even complex amendments to ribbon command buttons. So let\u0026rsquo;s dive in and see what\u0026rsquo;s involved!\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity with the platform if you want to do well in this exam.\nRibbon Overview We\u0026rsquo;ve already touched upon what a command button is as part of this post\u0026rsquo;s intro. Before we dive in any further, though, it is essential first to explain them within the context of the ribbon. This feature has been a mainstay for model-driven Power Apps over many years. Both then and now, it provides us with the means of changing how the various buttons within an app display and also operate when users interact with them. 
The application displays different ribbons within multiple areas, including:\nTable row forms Views and Subgrids Specific areas that rely on the \u0026ldquo;classic\u0026rdquo; interface Within the Dynamics 365 for Outlook App Microsoft Dataverse utilises a single ribbon definition per table, defined as an XML document within the application. You can see how a condensed example looks for the Account table by default below; you can find out how to export the complete definitions for all tables from the Microsoft Docs website:\n\u0026lt;RibbonDefinitions\u0026gt; \u0026lt;RibbonDefinition\u0026gt; \u0026lt;UI\u0026gt; \u0026lt;Ribbon\u0026gt; \u0026lt;Tabs Id=\u0026#34;Mscrm.Tabs\u0026#34;\u0026gt; \u0026lt;Tab Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.account.MainTab\u0026#34; Title=\u0026#34;Accounts\u0026#34; Description=\u0026#34;Accounts\u0026#34; Sequence=\u0026#34;100\u0026#34;\u0026gt; \u0026lt;Scaling Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Scaling\u0026#34;\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management\u0026#34; Sequence=\u0026#34;10\u0026#34; Size=\u0026#34;LargeMediumLargeMedium\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate\u0026#34; Sequence=\u0026#34;20\u0026#34; Size=\u0026#34;LargeMediumLargeLarge\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions\u0026#34; Sequence=\u0026#34;30\u0026#34; Size=\u0026#34;LargeLargeMediumLarge\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData\u0026#34; Sequence=\u0026#34;40\u0026#34; Size=\u0026#34;LargeMediumLarge\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Workflow.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Workflow\u0026#34; Sequence=\u0026#34;50\u0026#34; Size=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Find.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Find\u0026#34; Sequence=\u0026#34;60\u0026#34; Size=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.OutlookHelp.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.OutlookHelp\u0026#34; Sequence=\u0026#34;61\u0026#34; Size=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData.Scale.1\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData\u0026#34; Sequence=\u0026#34;80\u0026#34; Size=\u0026#34;LargeSmallLarge\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Workflow.Scale.2\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Workflow\u0026#34; Sequence=\u0026#34;100\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions.Scale.1\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions\u0026#34; Sequence=\u0026#34;110\u0026#34; Size=\u0026#34;LargeMediumMediumLarge\u0026#34; /\u0026gt; \u0026lt;Scale 
Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate.Scale.1\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate\u0026#34; Sequence=\u0026#34;120\u0026#34; Size=\u0026#34;LargeSmallLargeSmall\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management.Scale.1\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management\u0026#34; Sequence=\u0026#34;130\u0026#34; Size=\u0026#34;LargeMediumLargeMedium\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData.Scale.3\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData\u0026#34; Sequence=\u0026#34;140\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate.Scale.2\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate\u0026#34; Sequence=\u0026#34;150\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions.Scale.2\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions\u0026#34; Sequence=\u0026#34;160\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management.Scale.2\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management\u0026#34; Sequence=\u0026#34;170\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;/Scaling\u0026gt; \u0026lt;Groups Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Groups\u0026#34;\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;10\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management\u0026#34; Description=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/newrecord32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.FourOverflow\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.NewRecord\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.New\u0026#34; Command=\u0026#34;Mscrm.NewRecordFromGrid\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/New_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/newrecord32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;New\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.NewRecordForBPFEntity\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.New\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.NewRecordForBPFEntity\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/NewRecord_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/newrecord32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;New\u0026#34; /\u0026gt; \u0026lt;Button 
Id=\u0026#34;Mscrm.HomepageGrid.account.Edit\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Edit\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.Edit\u0026#34; Command=\u0026#34;Mscrm.EditSelectedRecord\u0026#34; Sequence=\u0026#34;20\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Edit\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Edit\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Edit_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/edit32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;Edit\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Activate\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Activate\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Activate\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.Activate\u0026#34; Sequence=\u0026#34;30\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Activate\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Activate\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Activate_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Activate_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;Activate\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Deactivate\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Deactivate\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Deactivate\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.Deactivate\u0026#34; Sequence=\u0026#34;40\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Deactivate\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Deactivate\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Deactivate_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Deactivate_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;DeActivate\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.OpenActiveStage\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_Form_Other_MainTab_OpenActiveStage_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Form.Tooltip.OpenActiveStage\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.OpenActiveStage\u0026#34; Sequence=\u0026#34;50\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Form.MainTab.OpenActiveStage\u0026#34; Alt=\u0026#34;$Resources:Ribbon.Form.MainTab.OpenActiveStage\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/formdesign16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/EditForm_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;FormDesign\u0026#34; /\u0026gt; \u0026lt;SplitButton Id=\u0026#34;Mscrm.HomepageGrid.account.DeleteMenu\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_Management_Delete_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.HomepageGrid.Tooltip.Delete\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.DeleteSplitButtonCommand\u0026#34; Sequence=\u0026#34;50\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Delete\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Delete\u0026#34; 
Image16by16=\u0026#34;/_imgs/ribbon/Delete_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Workplace/remove_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;Remove\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.HomepageGrid.account.DeleteMenu.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.HomepageGrid.account.DeleteMenu.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.DeleteMenu.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Delete\u0026#34; Command=\u0026#34;Mscrm.DeleteSelectedRecord\u0026#34; Sequence=\u0026#34;50\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_Management_Delete_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.HomepageGrid.Tooltip.Delete\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Delete\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Delete\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Delete_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Workplace/remove_32.png\u0026#34; ModernImage=\u0026#34;Remove\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.BulkDelete\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.BulkDelete\u0026#34; Sequence=\u0026#34;100\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.BulkDelete\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.BulkDelete\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.BulkDelete.TooltipDescription\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/BulkDelete_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/BulkDeleteWizard_32.png\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/SplitButton\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.MergeRecords\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.Merge\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.account.MergeRecords\u0026#34; Sequence=\u0026#34;59\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/MergeRecords_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/MergeRecords_32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ModernImage=\u0026#34;MergeRecords\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.Detect\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.DetectDuplicates\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.DetectDupes\u0026#34; Sequence=\u0026#34;60\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/DetectDuplicates_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/DuplicateDetection_32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34;\u0026gt; \u0026lt;Menu 
Id=\u0026#34;Mscrm.HomepageGrid.account.Detect.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.HomepageGrid.account.Detect.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.Detect.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Detect.Selected\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.DetectDupesSelected\u0026#34; Sequence=\u0026#34;10\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.Selected\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.Selected\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/DeleteSelected_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/DeleteSelected_32.png\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_Management_Detect_Selected_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_Management_Detect_Selected_ToolTipDescription\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Detect.All\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.DetectDupesAll\u0026#34; Sequence=\u0026#34;20\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.All\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.All\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/DetectAll_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/DetectAll_32.png\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_Management_Detect_All_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Mscrm_HomepageGrid_EntityLogicalName_MainTab_Management_Detect_All_ToolTipDescription\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/FlyoutAnchor\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.ChangeDataSetControlButton\u0026#34; ToolTipTitle=\u0026#34;$Resources:MobileClient.Commands.ChangeControl\u0026#34; ToolTipDescription=\u0026#34;$Resources:WebClient.Commands.ChangeControl.Description\u0026#34; Command=\u0026#34;Mscrm.ChangeControlCommand\u0026#34; Sequence=\u0026#34;25\u0026#34; LabelText=\u0026#34;$Resources:MobileClient.Commands.ChangeControl\u0026#34; Alt=\u0026#34;$Resources:WebClient.Commands.ChangeControl.Description\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/SendView_16.png\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.DynamicMenu.ChangeControlCommand\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Alt=\u0026#34;$LocLabels:GuidedHelp.Alt\u0026#34; Command=\u0026#34;loadGuidedHelp\u0026#34; Description=\u0026#34;Learning Path\u0026#34; Id=\u0026#34;GuidedHelpaccount.Grid\u0026#34; LabelText=\u0026#34;$LocLabels:GuidedHelp.LabelText\u0026#34; Sequence=\u0026#34;70\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:GuidedHelp.ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:GuidedHelp.ToolTipDescription\u0026#34; /\u0026gt; \u0026lt;Button Alt=\u0026#34;$LocLabels:LPLibrary.Alt\u0026#34; Command=\u0026#34;launchLPLibrary\u0026#34; Description=\u0026#34;Learning Path Library\u0026#34; Id=\u0026#34;LPLibraryaccount.Grid\u0026#34; LabelText=\u0026#34;$LocLabels:LPLibrary.LabelText\u0026#34; Sequence=\u0026#34;80\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; 
ToolTipTitle=\u0026#34;$LocLabels:LPLibrary.ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:LPLibrary.ToolTipDescription\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ModernClient\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;11\u0026#34; Template=\u0026#34;Mscrm.Templates.3\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ModernClient.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.RefreshModernButton\u0026#34; ToolTipTitle=\u0026#34;$Resources:MobileClient.Commands.Refresh\u0026#34; Command=\u0026#34;Mscrm.Modern.refreshCommand\u0026#34; ModernCommandType=\u0026#34;ControlCommand\u0026#34; Sequence=\u0026#34;17\u0026#34; LabelText=\u0026#34;$Resources:MobileClient.Commands.Refresh\u0026#34; ModernImage=\u0026#34;Refresh\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.NavigateToHomepageGrid\u0026#34; ToolTipTitle=\u0026#34;$Resources:OpenAllRecordsViewImageButtonText\u0026#34; ToolTipDescription=\u0026#34;$Resources:OpenAllRecordsViewImageButtonToolTip\u0026#34; Command=\u0026#34;Mscrm.NavigateToHomepageGrid\u0026#34; Sequence=\u0026#34;18\u0026#34; LabelText=\u0026#34;$Resources:OpenAllRecordsViewImageButtonText\u0026#34; ModernImage=\u0026#34;TableGroup\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.ActionButtonForMSTeams\u0026#34; Command=\u0026#34;Mscrm.HomePageGrid.MSTeamsViewCollaborateCommand\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:OfficeProductivity.MSTeamsToolTip\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:OfficeProductivity.MSTeamsToolTip\u0026#34; LabelText=\u0026#34;$LocLabels:OfficeProductivity.MSTeams\u0026#34; Alt=\u0026#34;$LocLabels:OfficeProductivity.MSTeams\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; Sequence=\u0026#34;1028\u0026#34; ModernImage=\u0026#34;MSTeamsIcon\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;20\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible4\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/Actions_32.png\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions.Controls\u0026#34;\u0026gt; \u0026lt;Button Sequence=\u0026#34;10\u0026#34; Id=\u0026#34;msdyn.HomepageGrid.account.LaunchPlaybook.Button\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;$webresource:Playbook/msdyn_/Images/SVG/PlaybookInstanceIcon.svg\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.Form.LaunchPlaybook.Button.LabelText\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.Form.LaunchPlaybook.Button.LabelText\u0026#34; Command=\u0026#34;Playbook.HomepageGrid.Launch\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.Form.LaunchPlaybook.Button.LabelText\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:Ribbon.ToolTip.LaunchPlabyook\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.ViewOrgChart\u0026#34; Command=\u0026#34;LinkedInExtensions.ViewOrgChartForGrid\u0026#34; Sequence=\u0026#34;52\u0026#34; Alt=\u0026#34;$LocLabels:Mscrm.Form.account.ViewOrgChart\u0026#34; 
LabelText=\u0026#34;$LocLabels:Mscrm.Form.account.ViewOrgChart\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Mscrm.Form.account.ViewOrgChart.ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:Mscrm.Form.account.ViewOrgChart.ToolTipDesc\u0026#34; ModernImage=\u0026#34;Drilldown\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;30\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Collaborate\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/Assign_32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible4\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.SendDirectEmail\u0026#34; Command=\u0026#34;Mscrm.AddEmailToSelectedRecord\u0026#34; Sequence=\u0026#34;10\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail.ToolTip\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.DirectEmail\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AddEmail_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Email_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;EmailLink\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.modern.SendDirectEmail\u0026#34; Command=\u0026#34;Mscrm.modern.AddEmailToSelectedRecord\u0026#34; Sequence=\u0026#34;10\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail.ToolTip\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.DirectEmail\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AddEmail_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Email_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;EmailLink\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.AddToList\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.HomepageGrid.account.Add.AddToList\u0026#34; ToolTipDescription=\u0026#34;$LocLabels(EntityDisplayName):Ribbon.Tooltip.AddToMarketingList\u0026#34; Command=\u0026#34;Mscrm.AddSelectedToMarketingList\u0026#34; Sequence=\u0026#34;11\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.HomepageGrid.account.Add.AddToList\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.HomepageGrid.account.Add.AddToList\u0026#34; Image16by16=\u0026#34;$webresource:Marketing/_images/ribbon/AddToMarketingList_16.png\u0026#34; Image32by32=\u0026#34;$webresource:Marketing/_images/ribbon/AddToMarketingList_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;BulletListAdd\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Assign\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Assign\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Assign\u0026#34; Command=\u0026#34;Mscrm.AssignSelectedRecord\u0026#34; Sequence=\u0026#34;40\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Assign\u0026#34; 
Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Assign\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Assign_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Assign_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;Assign\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Sharing\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Sharing\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Share\u0026#34; Command=\u0026#34;Mscrm.ShareSelectedRecord\u0026#34; Sequence=\u0026#34;50\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Sharing\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Sharing\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Share_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Sharing_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;Share\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.ViewHierarchy\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.ViewHierarchy\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_MainTab_Actions_ViewHierarchy_ToolTipDescription\u0026#34; Command=\u0026#34;Mscrm.ViewHierarchyForSelectedRecord\u0026#34; Sequence=\u0026#34;55\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.ViewHierarchy\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.ViewHierarchy\u0026#34; Image16by16=\u0026#34;/_imgs/Hierarchy.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Hierarchy_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;ViewHierarchy\u0026#34; /\u0026gt; \u0026lt;SplitButton Id=\u0026#34;Mscrm.HomepageGrid.account.Copy\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Copy\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_ExportData_Copy_ToolTipDescription\u0026#34; Command=\u0026#34;Mscrm.CopyShortcutSelected.EnabledInIEBrowser\u0026#34; Sequence=\u0026#34;60\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Copy\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Copy_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Copy_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.HomepageGrid.account.Copy.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.HomepageGrid.account.Copy.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.Copy.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Copy.Selected\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Selected\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_ExportData_Copy_Selected_ToolTipDescription\u0026#34; Command=\u0026#34;Mscrm.CopyShortcutSelected\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Selected\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/copyshortcut16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/copyshortcut32.png\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Copy.View\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.View\u0026#34; 
Image32by32=\u0026#34;/_imgs/ribbon/AddExistingStandard_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ToolTipTitle=\u0026#34;$Resources(EntityDisplayName):Mscrm_SubGrid_EntityLogicalName_MainTab_Management_AddExistingStandard_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Mscrm_SubGrid_EntityLogicalName_MainTab_Management_AddExistingStandard_ToolTipDescription\u0026#34; ModernImage=\u0026#34;AddExisting\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.AddExistingAssoc\u0026#34; Command=\u0026#34;Mscrm.AddExistingRecordFromSubGridAssociated\u0026#34; Sequence=\u0026#34;40\u0026#34; LabelText=\u0026#34;$Resources(EntityDisplayName):Ribbon.SubGrid.AddExisting\u0026#34; Alt=\u0026#34;$Resources(EntityDisplayName):Ribbon.SubGrid.AddExisting\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AddExistingStandard_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/AddExistingStandard_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ToolTipTitle=\u0026#34;$Resources(EntityDisplayName):Mscrm_SubGrid_EntityLogicalName_MainTab_Management_AddExistingAssoc_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Mscrm_SubGrid_EntityLogicalName_MainTab_Management_AddExistingAssoc_ToolTipDescription\u0026#34; ModernImage=\u0026#34;AddExisting\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.Edit\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Edit\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.Edit\u0026#34; Command=\u0026#34;Mscrm.EditSelectedRecord\u0026#34; Sequence=\u0026#34;50\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Edit\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Edit\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Edit_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/edit32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;Edit\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.Activate\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Activate\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Activate\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.Activate\u0026#34; Sequence=\u0026#34;60\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Activate\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Activate\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Activate_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Activate_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;Activate\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.Deactivate\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Deactivate\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Deactivate\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.Deactivate\u0026#34; Sequence=\u0026#34;70\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Deactivate\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Deactivate\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Deactivate_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Deactivate_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;DeActivate\u0026#34; 
/\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.Delete\u0026#34; ToolTipTitle=\u0026#34;$Resources(EntityDisplayName):Mscrm_SubGrid_EntityLogicalName_MainTab_Management_Delete_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.HomepageGrid.Tooltip.Delete\u0026#34; Command=\u0026#34;Mscrm.DeleteSelectedRecord\u0026#34; Sequence=\u0026#34;80\u0026#34; LabelText=\u0026#34;$Resources(EntityDisplayName):Ribbon.SubGrid.MainTab.Management.Delete\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Delete\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Delete_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Workplace/remove_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;Remove\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.Remove\u0026#34; Command=\u0026#34;Mscrm.RemoveSelectedRecord\u0026#34; Sequence=\u0026#34;90\u0026#34; LabelText=\u0026#34;$Resources:MenuItem_Label_Remove\u0026#34; Alt=\u0026#34;$Resources:MenuItem_Label_Remove\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Delete_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Workplace/Remove_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_SubGrid_Other_MainTab_Management_Remove_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_SubGrid_Other_MainTab_Management_Remove_ToolTipDescription\u0026#34; ModernImage=\u0026#34;Remove\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.BulkDelete\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.BulkDelete\u0026#34; Sequence=\u0026#34;100\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.BulkDelete\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.BulkDelete\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.BulkDelete.TooltipDescription\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/BulkDelete_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/BulkDelete_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;DeleteBulk\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.MergeRecords\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.Merge\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.account.MergeRecords\u0026#34; Sequence=\u0026#34;109\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/MergeRecords_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/MergeRecords_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;MergeRecords\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.SubGrid.account.Detect\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.DetectDuplicates\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.DetectDupes\u0026#34; Sequence=\u0026#34;110\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/DuplicateDetection_16.png\u0026#34; 
Image32by32=\u0026#34;/_imgs/ribbon/DuplicateDetection_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.SubGrid.account.Detect.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.SubGrid.account.Detect.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.Detect.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.Detect.Selected\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.DetectDupesSelected\u0026#34; Sequence=\u0026#34;10\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.Selected\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.Selected\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/SelectedRecords_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/DuplicateDetection_32.png\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_SubGrid_Other_MainTab_Management_Detect_Selected_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_SubGrid_Other_MainTab_Management_Detect_Selected_ToolTipDescription\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.Detect.All\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.DetectDupesAll\u0026#34; Sequence=\u0026#34;20\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.All\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.All\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/DetectAll_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/DetectAll_32.png\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_SubGrid_Other_MainTab_Management_Detect_All_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Mscrm_SubGrid_EntityLogicalName_MainTab_Management_Detect_All_ToolTipDescription\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/FlyoutAnchor\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.SubGrid.account.ChangeDataSetControlButton\u0026#34; ToolTipTitle=\u0026#34;$Resources:MobileClient.Commands.ChangeControl\u0026#34; ToolTipDescription=\u0026#34;$Resources:WebClient.Commands.ChangeControl.Description\u0026#34; Command=\u0026#34;Mscrm.ChangeControlCommand\u0026#34; Sequence=\u0026#34;25\u0026#34; LabelText=\u0026#34;$Resources:MobileClient.Commands.ChangeControl\u0026#34; Alt=\u0026#34;$Resources:WebClient.Commands.ChangeControl.Description\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/SendView_16.png\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.DynamicMenu.ChangeControlCommand\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.SubGrid.account.MainTab.ModernClient\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;11\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.MainTab.ModernClient.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.RefreshButton\u0026#34; Command=\u0026#34;Mscrm.Modern.refreshCommand\u0026#34; ModernCommandType=\u0026#34;ControlCommand\u0026#34; Sequence=\u0026#34;17\u0026#34; LabelText=\u0026#34;$Resources:MobileClient.Commands.Refresh\u0026#34; ModernImage=\u0026#34;Refresh\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; 
\u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.SubGrid.account.MainTab.Actions\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;20\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/Actions_32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible4\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.MainTab.Actions.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.CreateOpportunityForMembers\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.Account.CreateOpportunityForMembers\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:Ribbon.Account.CreateOpportunityForMembers.ToolTip\u0026#34; Command=\u0026#34;Mscrm.CreateOpportunityForMembers\u0026#34; Sequence=\u0026#34;70\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.Account.CreateOpportunityForMembers\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.Account.CreateOpportunityForMembers\u0026#34; Image16by16=\u0026#34;$webresource:Marketing/_images/SFA/CreateOpportunityForMembers_16.png\u0026#34; Image32by32=\u0026#34;$webresource:Marketing/_images/SFA/CreateOpportunityForMembers_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;OpportunitiesList\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.SubGrid.account.MainTab.Collaborate\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;30\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Collaborate\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/Assign_32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible4\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.MainTab.Collaborate.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.AddEmail\u0026#34; Command=\u0026#34;Mscrm.AddEmailToSelectedRecord\u0026#34; Sequence=\u0026#34;10\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail.ToolTip\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.DirectEmail\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AddEmail_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Email_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;EmailLink\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.modern.AddEmail\u0026#34; Command=\u0026#34;Mscrm.modern.AddEmailToSelectedRecord\u0026#34; Sequence=\u0026#34;10\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail.ToolTip\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.DirectEmail\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AddEmail_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Email_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;EmailLink\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.AddToList\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.HomepageGrid.account.Add.AddToList\u0026#34; ToolTipDescription=\u0026#34;$LocLabels(EntityDisplayName):Ribbon.Tooltip.AddToMarketingList\u0026#34; 
Command=\u0026#34;Mscrm.AddSelectedToMarketingList\u0026#34; Sequence=\u0026#34;11\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.HomepageGrid.account.Add.AddToList\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.HomepageGrid.account.Add.AddToList\u0026#34; Image16by16=\u0026#34;$webresource:Marketing/_images/ribbon/AddToMarketingList_16.png\u0026#34; Image32by32=\u0026#34;$webresource:Marketing/_images/ribbon/AddToMarketingList_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.CopyListMember\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.SubGrid.account.CopyListMember\u0026#34; ToolTipDescription=\u0026#34;$LocLabels(EntityDisplayName):Ribbon.Tooltip.CopyListMember\u0026#34; Command=\u0026#34;Mscrm.CopyListMembers\u0026#34; Sequence=\u0026#34;11\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.SubGrid.account.CopyListMember\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.SubGrid.account.CopyListMember\u0026#34; ModernImage=\u0026#34;AddMembers\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.RemoveListMember\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.SubGrid.account.RemoveListMember\u0026#34; ToolTipDescription=\u0026#34;$LocLabels(EntityDisplayName):Ribbon.Tooltip.RemoveListMember\u0026#34; Command=\u0026#34;Mscrm.RemoveMembers\u0026#34; Sequence=\u0026#34;11\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.SubGrid.account.RemoveListMember\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.SubGrid.account.RemoveListMember\u0026#34; ModernImage=\u0026#34;Remove\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Delete_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Workplace/Remove_32.png\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.AdvMergeRecords\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.Merge\u0026#34; Command=\u0026#34;Mscrm.HideAdvMergeRecords\u0026#34; Sequence=\u0026#34;12\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/MergeRecords_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/MergeRecords_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.SubGrid.account.QuickCampaign\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.QuickCampaign\u0026#34; Sequence=\u0026#34;12\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.QuickCampaign.LabelText\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.QuickCampaign.LabelText\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/QuickCampaign_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/QuickCampaign_32.png\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.QuickCampaign.LabelText\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:Ribbon.QuickCampaign.ToolTip.Description\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;CreateQuickCampaign\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.SubGrid.account.QuickCampaign.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.SubGrid.account.QuickCampaign.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.QuickCampaign.Controls\u0026#34;\u0026gt; 
\u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.QuickCampaign.Selected\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.ACL.QuickCampaign.Selected\u0026#34; Sequence=\u0026#34;10\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.QuickCampaign.Selected.LabelText\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.QuickCampaign.Selected.LabelText\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/SelectedRecords_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/SelectedRecords_32.png\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.QuickCampaign.Selected.ToolTip.Title\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:Ribbon.QuickCampaign.Selected.ToolTip.Description\u0026#34; ModernImage=\u0026#34;MultiSelect\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.QuickCampaign.AllCurrentPage\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.ACL.QuickCampaign.AllCurrentPage\u0026#34; Sequence=\u0026#34;20\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.QuickCampaign.AllCurrentPage.LabelText\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.QuickCampaign.AllCurrentPage.LabelText\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AllRecords_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/AllRecords_32.png\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.QuickCampaign.AllCurrentPage.ToolTip.Title\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:Ribbon.QuickCampaign.AllCurrentPage.ToolTip.Description\u0026#34; ModernImage=\u0026#34;Letter\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.QuickCampaign.AllAllPages\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.ACL.QuickCampaign.AllAllPages\u0026#34; Sequence=\u0026#34;30\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.QuickCampaign.AllAllPages.LabelText\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.QuickCampaign.AllAllPages.LabelText\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AllRecordsAllPages_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/AllRecordsAllPages_32.png\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.QuickCampaign.AllAllPages.ToolTip.Title\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:Ribbon.QuickCampaign.AllAllPages.ToolTip.Description\u0026#34; ModernImage=\u0026#34;BrowseCards\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/FlyoutAnchor\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.AssociateParentChildCase\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.Form.incident.MainTab.Actions.AssociateParentChildCase\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.AssociateParentChildCase\u0026#34; Command=\u0026#34;Mscrm.AssociateParentChildCase\u0026#34; Sequence=\u0026#34;13\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Form.incident.MainTab.Actions.AssociateParentChildCase\u0026#34; Alt=\u0026#34;$Resources:Ribbon.Form.incident.MainTab.Actions.AssociateParentChildCase\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AssociateChildCase_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/AssociateChildCase_32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.MailMerge\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.MainTab.Actions.MailMerge\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.MailMerge\u0026#34; Command=\u0026#34;Mscrm.MailMergeSelected\u0026#34; Sequence=\u0026#34;20\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.MainTab.Actions.MailMerge\u0026#34; 
LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.MainTab.Actions.MailMerge\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/mailmerge16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/mailmerge32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; /\u0026gt; \u0026lt;SplitButton Id=\u0026#34;Mscrm.SubGrid.account.AddConnection\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.Connection.Splitbutton.AddConnection.Label\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Connection.Splitbutton.AddConnection.Tooltip\u0026#34; Command=\u0026#34;Mscrm.AddConnectionGrid\u0026#34; Sequence=\u0026#34;30\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Connection.Splitbutton.AddConnection.Label\u0026#34; Alt=\u0026#34;$Resources:Ribbon.Connection.Splitbutton.AddConnection.Label\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AddConnection_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/AddConnection_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;Connection\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.SubGrid.account.AddConnection.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.SubGrid.account.AddConnection.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.AddConnection.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.AddConnectionNew\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.Connection.AddConnectionNew.Label\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Connection.AddConnectionNew.Tooltip\u0026#34; Command=\u0026#34;Mscrm.AddConnectionGrid\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Connection.AddConnectionNew.Label\u0026#34; Alt=\u0026#34;$Resources:Ribbon.Connection.AddConnectionNew.Label\u0026#34; ModernImage=\u0026#34;ConnectionToOther\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.AddConnectionToMe\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.Connection.AddConnectionToMe.Label\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Connection.AddConnectionToMe.Tooltip\u0026#34; Command=\u0026#34;Mscrm.AddConnectionToMeGrid\u0026#34; Sequence=\u0026#34;20\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Connection.AddConnectionToMe.Label\u0026#34; Alt=\u0026#34;$Resources:Ribbon.Connection.AddConnectionToMe.Label\u0026#34; ModernImage=\u0026#34;ConnectionToMe\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/SplitButton\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.AddToQueue\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.AddToQueue\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Mscrm_SubGrid_EntityLogicalName_MainTab_Actions_AddToQueue_ToolTipDescription\u0026#34; Command=\u0026#34;Mscrm.AddSelectedToQueue\u0026#34; Sequence=\u0026#34;40\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.AddToQueue\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.AddToQueue\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AddToQueue_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/AddToQueue_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;AddToQueue\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.Assign\u0026#34; 
ToolTipTitle=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.SubGrid.MainTab.Actions.Assign\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Assign\u0026#34; Command=\u0026#34;Mscrm.AssignSelectedRecord\u0026#34; Sequence=\u0026#34;50\u0026#34; LabelText=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.SubGrid.MainTab.Actions.Assign\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Assign_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Assign_32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ModernImage=\u0026#34;Assign\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.Sharing\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Sharing\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Share\u0026#34; Command=\u0026#34;Mscrm.ShareSelectedRecord\u0026#34; Sequence=\u0026#34;60\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Sharing\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Sharing\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Share_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Sharing_32.png\u0026#34; TemplateAlias=\u0026#34;o4\u0026#34; ModernImage=\u0026#34;Share\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.CopySelected\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Copy\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.CopyShortcut\u0026#34; Command=\u0026#34;Mscrm.CopyShortcutSelected\u0026#34; Sequence=\u0026#34;70\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Copy\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Copy\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/copyshortcut16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/copyshortcut32.png\u0026#34; TemplateAlias=\u0026#34;o4\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.SendSelected\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Send\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.SendShortcut\u0026#34; Command=\u0026#34;Mscrm.SendShortcutSelected\u0026#34; Sequence=\u0026#34;80\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Send\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Send\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/EmailLink_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/SendShortcut_32.png\u0026#34; TemplateAlias=\u0026#34;o4\u0026#34; ModernImage=\u0026#34;EmailLink\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.RemoveSelectedRecordsFromEntity\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.SubGrid.account.RemoveSelectedRecordsFromEntity\u0026#34; ToolTipDescription=\u0026#34;$LocLabels(EntityDisplayName):Ribbon.Tooltip.RemoveSelectedRecordsFromEntity\u0026#34; Command=\u0026#34;Mscrm.RemoveSelectedRecordsFromEntity\u0026#34; Sequence=\u0026#34;90\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.SubGrid.account.RemoveSelectedRecordsFromEntity\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.SubGrid.account.RemoveSelectedRecordsFromEntity\u0026#34; ModernImage=\u0026#34;Remove\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Delete_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Workplace/Remove_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.FollowButton\u0026#34; 
Command=\u0026#34;Mscrm.SubGrid.FollowCommand\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:ActivityFeed.Follow.ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:ActivityFeed.Follow.ToolTipDescription\u0026#34; LabelText=\u0026#34;$LocLabels:ActivityFeed.Follow.LabelText\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Entity16_8003.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Entity32_8003.png\u0026#34; Sequence=\u0026#34;1010\u0026#34; ModernImage=\u0026#34;RatingEmpty\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.UnfollowButton\u0026#34; Command=\u0026#34;Mscrm.SubGrid.UnfollowCommand\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:ActivityFeed.Unfollow.ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:ActivityFeed.Unfollow.ToolTipDescription\u0026#34; LabelText=\u0026#34;$LocLabels:ActivityFeed.Unfollow.LabelText\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Entity16_8003_u.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Entity32_8003_u.png\u0026#34; Sequence=\u0026#34;1030\u0026#34; ModernImage=\u0026#34;RatingFull\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.SubGrid.account.MainTab.Filters\u0026#34; Command=\u0026#34;Mscrm.FiltersGroup\u0026#34; Sequence=\u0026#34;40\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/filter32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible2\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.MainTab.Filters.Controls\u0026#34;\u0026gt; \u0026lt;ToggleButton Id=\u0026#34;Mscrm.SubGrid.account.Filters\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.Filters\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.Filters\u0026#34; Command=\u0026#34;Mscrm.Filters\u0026#34; QueryCommand=\u0026#34;Mscrm.Filters.Query\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.Filters\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.FiltersToolTip\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/filter16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/filter32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.SaveToCurrent\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_SubGrid_Other_MainTab_Filters_SaveToCurrent_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.SaveFiltersToCurrentView\u0026#34; Command=\u0026#34;Mscrm.SaveToCurrentView\u0026#34; Sequence=\u0026#34;20\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.SaveToCurrent\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.SaveToCurrentToolTip\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/savefilters16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/savefilters32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.SaveAsNew\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.SaveAsNew\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.SaveFiltersToNewView\u0026#34; Command=\u0026#34;Mscrm.SaveAsNewView\u0026#34; Sequence=\u0026#34;30\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.SaveAsNew\u0026#34; 
Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.SaveAsNewToolTip\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/SaveFilterAsNewView_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/savefiltersasview32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.SubGrid.account.MainTab.Layout\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;50\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.ViewGroup\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/ChartsBarGraph_32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.MainTab.Layout.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.SaveAsDefaultGridView\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_SubGrid_Other_MainTab_Filters_SaveAsDefaultGridView_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.SaveAsDefaultGridView\u0026#34; Command=\u0026#34;Mscrm.SaveAsDefaultGridView\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.SaveAsDefaultGridView\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.SaveAsDefaultGridViewToolTip\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/SaveViewAsDefault_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/setasdefaultview32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.SubGrid.account.Charts\u0026#34; Command=\u0026#34;Mscrm.Charts.Flyout\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Visuals.Charts\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.Charts\u0026#34; Sequence=\u0026#34;20\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Visuals.Charts\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Visuals.ChartsToolTip\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/ChartsBarGraph_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/ChartsBarGraph_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.SubGrid.account.Charts.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.SubGrid.account.Charts.MenuSection0\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.Charts.Controls0\u0026#34;\u0026gt; \u0026lt;ToggleButton Id=\u0026#34;Mscrm.SubGrid.account.ChangeLayout.LeftRight\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Visuals.ChangeLayout\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ChangeLayout\u0026#34; Command=\u0026#34;Mscrm.Charts\u0026#34; QueryCommand=\u0026#34;Mscrm.Charts.Query\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Visuals.Charts.LeftRight\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Visuals.ChangeLayoutToolTip\u0026#34; /\u0026gt; \u0026lt;ToggleButton Id=\u0026#34;Mscrm.SubGrid.account.ChangeLayout.Off\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Visuals.ChangeLayout\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ChangeLayout\u0026#34; Command=\u0026#34;Mscrm.Charts.Off\u0026#34; QueryCommand=\u0026#34;Mscrm.Charts.Query.Off\u0026#34; Sequence=\u0026#34;20\u0026#34; 
LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Visuals.Charts.Off\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Visuals.ChangeLayoutToolTip\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/FlyoutAnchor\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.SubGrid.account.MainTab.Workflow\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;70\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/runworkflow32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.MainTab.Workflow.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.RunWorkflow\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow.RunWorkflow\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.RunWorkflow\u0026#34; Command=\u0026#34;Mscrm.RunWorkflowSelected\u0026#34; Sequence=\u0026#34;30\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow.RunWorkflow\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow.RunWorkflow\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/RunWorkflow_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/runworkflow32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.RunScript\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.InteractiveWorkflow.RunScript\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.RunScript\u0026#34; Command=\u0026#34;Mscrm.RunInteractiveWorkflowSelected\u0026#34; Sequence=\u0026#34;40\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.InteractiveWorkflow.RunScript\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.InteractiveWorkflow.RunScript\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/StartDialog_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/StartDialog_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.SubGrid.account.MainTab.ExportData\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;80\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.ExportData\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/runreport32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible3\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.MainTab.ExportData.Controls\u0026#34;\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.SubGrid.account.RunReport\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Report.RunReport\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.RunReport\u0026#34; Command=\u0026#34;Mscrm.ReportMenu.Grid\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateOnlyOnce=\u0026#34;true\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.ReportsMenu.Populate.Grid\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Report.RunReport\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Report.RunReport\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/RunReport_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/runreport32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; 
ModernImage=\u0026#34;Report\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.SubGrid.account.DocumentTemplate\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.DocumentTemplate.Templates\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.DocumentTemplate\u0026#34; Command=\u0026#34;Mscrm.DocumentTemplate.Templates\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateOnlyOnce=\u0026#34;true\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.DocumentTemplate.Populate.Flyout\u0026#34; Sequence=\u0026#34;15\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.DocumentTemplate.Templates\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.DocumentTemplate.Templates\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/DocumentTemplate_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/SaveAsExcelTemplate_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;DocumentTemplates\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.SubGrid.account.WordTemplate\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.WordTemplate.Templates\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.WordTemplate\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.WordTemplate\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateOnlyOnce=\u0026#34;true\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.HomepageGrid.WordTemplate.Populate.Flyout\u0026#34; Sequence=\u0026#34;16\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.WordTemplate.Templates\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.WordTemplate.Templates\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/WordTemplate_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/SaveAsWordTemplate_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;WordTemplates\u0026#34; /\u0026gt; \u0026lt;SplitButton Id=\u0026#34;Mscrm.SubGrid.account.ExportToExcel\u0026#34; ToolTipTitle=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.SubGrid.Data.Export.ExportToExcel\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ExportToExcel\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel\u0026#34; Sequence=\u0026#34;20\u0026#34; LabelText=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.SubGrid.Data.Export.ExportToExcel\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.SubGrid.account.ExportToExcel.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.SubGrid.account.ExportToExcel.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.ExportToExcel.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.ExportToExcelOnline\u0026#34; ToolTipTitle=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.SubGrid.Data.Export.ExportToExcelOnline\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ExportToExcelOnline\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.Online\u0026#34; Sequence=\u0026#34;40\u0026#34; LabelText=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.SubGrid.Data.Export.ExportToExcelOnline\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; 
Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.StaticWorksheetAll\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExportAll\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.StaticExcelExportAll\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.AllStaticXlsx\u0026#34; Sequence=\u0026#34;41\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExportAll\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExportAll\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.StaticWorksheet\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExport\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.StaticExcelExport\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.StaticXlsx\u0026#34; Sequence=\u0026#34;42\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExport\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExport\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.DynamicWorkesheet\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicExcelExport\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.DynamicExcelExport\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.DynamicXlsx\u0026#34; Sequence=\u0026#34;43\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicExcelExport\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicExcelExport\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.DynamicPivotTable\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicPivotTable\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.DynamicPivotTable\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.PivotXlsx\u0026#34; Sequence=\u0026#34;44\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicPivotTable\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicPivotTable\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/SplitButton\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.ExportSelectedToExcel\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportSelectedToExcel\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ExportSelectedToExcel\u0026#34; Command=\u0026#34;Mscrm.ExportSelectedToExcel\u0026#34; Sequence=\u0026#34;230\u0026#34; 
LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportSelectedToExcel\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportSelectedToExcel\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.SubGrid.account.MainTab.FolderTracking\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;80\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.FolderTracking\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/runreport32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible3\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.FolderTracking.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.FolderTracking\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.FolderTracking\u0026#34; Sequence=\u0026#34;100\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.FolderTracking\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.FolderTracking\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.HomepageGrid.FolderTracking.TooltipDescription\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/CRM_Activity_Command_FolderTracking_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/CRM_Activity_Command_FolderTracking_16.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;FolderTrack\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;/Groups\u0026gt; \u0026lt;/Tab\u0026gt; \u0026lt;/ContextualGroup\u0026gt; \u0026lt;/ContextualTabs\u0026gt; \u0026lt;/Ribbon\u0026gt; \u0026lt;/UI\u0026gt; \u0026lt;Templates\u0026gt; \u0026lt;RibbonTemplates Id=\u0026#34;Mscrm.RibbonTemplates\u0026#34;\u0026gt; \u0026lt;GroupTemplate Id=\u0026#34;Mscrm.Templates.3\u0026#34;\u0026gt; \u0026lt;Layout Title=\u0026#34;Large\u0026#34;\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;OneRow\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; DisplayMode=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;OneRow\u0026#34; TemplateAlias=\u0026#34;isv\u0026#34; DisplayMode=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;/Layout\u0026gt; \u0026lt;Layout Title=\u0026#34;Medium\u0026#34;\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;ThreeRow\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; DisplayMode=\u0026#34;Medium\u0026#34; /\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;ThreeRow\u0026#34; TemplateAlias=\u0026#34;isv\u0026#34; DisplayMode=\u0026#34;Medium\u0026#34; /\u0026gt; \u0026lt;/Layout\u0026gt; \u0026lt;Layout Title=\u0026#34;Small\u0026#34;\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;ThreeRow\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; DisplayMode=\u0026#34;Small\u0026#34; /\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;ThreeRow\u0026#34; TemplateAlias=\u0026#34;isv\u0026#34; DisplayMode=\u0026#34;Small\u0026#34; /\u0026gt; \u0026lt;/Layout\u0026gt; \u0026lt;Layout Title=\u0026#34;Popup\u0026#34; LayoutTitle=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;/GroupTemplate\u0026gt; \u0026lt;/RibbonTemplates\u0026gt; \u0026lt;/Templates\u0026gt; \u0026lt;CommandDefinitions\u0026gt; \u0026lt;CommandDefinition Id=\u0026#34;Mscrm.HomepageGrid.account.MergeRecords\u0026#34;\u0026gt; \u0026lt;EnableRules\u0026gt; \u0026lt;EnableRule 
Id=\u0026#34;Mscrm.NotOffline\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.SelectionCountOneOrTwo\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.VisualizationPaneNotMaximized\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.ShowOnNonModernAndModernIfAllowed\u0026#34; /\u0026gt; \u0026lt;/EnableRules\u0026gt; \u0026lt;DisplayRules\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.HomepageGrid.account.MergeGroup\u0026#34; /\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.CanWriteAccount\u0026#34; /\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.HybridDialogMergeEnabled\u0026#34; /\u0026gt; \u0026lt;/DisplayRules\u0026gt; \u0026lt;Actions\u0026gt; \u0026lt;JavaScriptFunction FunctionName=\u0026#34;XrmCore.Commands.Merge.mergeRecords\u0026#34; Library=\u0026#34;$webresource:Main_system_library.js\u0026#34;\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedControl\u0026#34; /\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedControlSelectedItemReferences\u0026#34; /\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedEntityTypeName\u0026#34; /\u0026gt; \u0026lt;/JavaScriptFunction\u0026gt; \u0026lt;/Actions\u0026gt; \u0026lt;/CommandDefinition\u0026gt; \u0026lt;CommandDefinition Id=\u0026#34;Mscrm.AddSelectedToMarketingList\u0026#34;\u0026gt; \u0026lt;EnableRules\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.NotOffline\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.SelectionCountAtLeastOne\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.VisualizationPaneNotMaximized\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.NotAListForm\u0026#34; /\u0026gt; \u0026lt;/EnableRules\u0026gt; \u0026lt;DisplayRules\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.AddSelectedToMarketingList\u0026#34; /\u0026gt; \u0026lt;/DisplayRules\u0026gt; \u0026lt;Actions\u0026gt; \u0026lt;JavaScriptFunction FunctionName=\u0026#34;Marketing.CommandActions.Instance.addToList\u0026#34; Library=\u0026#34;$webresource:Marketing/CommandActions/Marketing_CommandActions.js\u0026#34;\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedControl\u0026#34; /\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedControlSelectedItemReferences\u0026#34; /\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedEntityTypeCode\u0026#34; /\u0026gt; \u0026lt;/JavaScriptFunction\u0026gt; \u0026lt;/Actions\u0026gt; \u0026lt;/CommandDefinition\u0026gt; \u0026lt;CommandDefinition Id=\u0026#34;LinkedInExtensions.ViewOrgChartForGrid\u0026#34;\u0026gt; \u0026lt;EnableRules\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.SelectionCountExactlyOne\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.HideOnMobile\u0026#34; /\u0026gt; \u0026lt;/EnableRules\u0026gt; \u0026lt;DisplayRules\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.ShowOnlyOnModern\u0026#34; /\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.NotOffline\u0026#34; /\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.IsOrgChartFeatureEnabled\u0026#34; /\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.CanReadContact\u0026#34; /\u0026gt; \u0026lt;/DisplayRules\u0026gt; \u0026lt;Actions\u0026gt; \u0026lt;JavaScriptFunction FunctionName=\u0026#34;LinkedInExtensions.Account.Instance.ViewOrgChartFromGrid\u0026#34; Library=\u0026#34;$webresource:LinkedInExtensions/Account/LinkedInExtensions_Account.js\u0026#34;\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedControlSelectedItemReferences\u0026#34; /\u0026gt; \u0026lt;/JavaScriptFunction\u0026gt; \u0026lt;/Actions\u0026gt; 
\u0026lt;/CommandDefinition\u0026gt; \u0026lt;/CommandDefinitions\u0026gt; \u0026lt;RuleDefinitions\u0026gt; \u0026lt;DisplayRules\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.HomepageGrid.account.MergeGroup\u0026#34;\u0026gt; \u0026lt;MiscellaneousPrivilegeRule PrivilegeName=\u0026#34;Merge\u0026#34; /\u0026gt; \u0026lt;/DisplayRule\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.PrimaryEntityHasCampaignResponse\u0026#34;\u0026gt; \u0026lt;OrRule\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;account\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;contact\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;lead\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;incident\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;opportunity\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;quote\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;invoice\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;salesorder\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;contract\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;/OrRule\u0026gt; \u0026lt;/DisplayRule\u0026gt; \u0026lt;/DisplayRules\u0026gt; \u0026lt;EnableRules\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.NotOffline\u0026#34;\u0026gt; \u0026lt;CrmOfflineAccessStateRule State=\u0026#34;Offline\u0026#34; InvertResult=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;/EnableRule\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.Form.{!EntityLogicalName}.Developer\u0026#34;\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;{!EntityLogicalName}\u0026#34; /\u0026gt; \u0026lt;CustomRule FunctionName=\u0026#34;Mscrm.RibbonActions.formPageDeveloperTabEnableRule\u0026#34; Library=\u0026#34;/_static/_common/scripts/RibbonActions.js\u0026#34;\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;PrimaryControl\u0026#34; /\u0026gt; \u0026lt;/CustomRule\u0026gt; \u0026lt;/EnableRule\u0026gt; \u0026lt;/EnableRules\u0026gt; \u0026lt;/RuleDefinitions\u0026gt; \u0026lt;/RibbonDefinition\u0026gt; \u0026lt;/RibbonDefinitions\u0026gt; When developers modify the ribbon for a table, changes are applied over the default ribbon(s) and will typically sit underneath the CustomAction or HideCustomAction nodes. Developers must determine which area of the application they wish to modify by ensuring they target their changes to one of the areas highlighted above. Once modified, Ribbon changes will then take effect at the table level and be carried forward as part of any corresponding solution updates you make. 
Therefore, from a deployment standpoint, including your modified tables within your solutions will be sufficient to roll out your ribbon changes to other environments.\nUnderstanding Command Buttons, Rules and Actions Within our Ribbon definitions sit the various command buttons that get rendered to end-users in the application. Model-driven apps include many of these by default, covering typical actions we want to occur against a record - such as saving, reassigning it or deleting it from the system. Many of these command buttons will be contextual and only become visible if the user has specific privileges granted to them within the application. This behaviour can help in keeping the interface relevant and de-cluttered. As developers, we will typically work with the out of the box command buttons in two contexts, namely when we want to:\nToggle the visibility of a specific button. Override or replace a default command button action. For all other situations, we can then look to set up our custom command buttons. We have various options available here to tailor how this looks within the application - such as the title, display label across multiple languages, and its image. Once defined, we can then start to customise our new button further. There are two core concepts to grasp concerning this, so it\u0026rsquo;s helpful to understand (in detail) how they behave - namely, rules and actions.\nRules dictate the \u0026ldquo;state\u0026rdquo; of a button within the application. They can be used to define two specific rule types - whether a command button is enabled (Enable Rules) or whether the button is visible to users (Display Rules). For both types, you can determine their behaviour based on:\nWhether the user is accessing the ribbon via the Unified / Classic interface, the tablet app or via legacy areas of the application. Whether the user is accessing the application via a desktop browser or through the Outlook client, including whether the Outlook client is in offline mode or whether a specific version of it is in use. The name of the table the user is currently working with. The state of the loaded form, e.g. Create, Read-Only etc. The user\u0026rsquo;s current table privileges within the application. When working with subgrids, the number of rows currently selected by the user. A value that is present on the currently loaded table form. The contents of the current URL the user has navigated to. For example, based on the above, you could modify a button in the application to only become visible to users who have the privilege to Read or Update a related table row. This list of available options will likely cover most of your requirements when changing the state and visibility of buttons within the application. For more complex needs, consider defining a Custom Rule using a JavaScript function instead.\nFinally, actions perform the desired behaviour for a command button during its OnClick event within the application. As part of this, developers can either execute bespoke logic via a JavaScript function or open a specific URL. This second option is great for when you need to link to an external application or system and, because this action type supports custom parameters, it also allows you to build a dynamic URL as part of this.
You can find further detail on the supported kinds of parameters on the Microsoft Docs website, but, to summarise, developers can use a range of different data types and even feed in values from the current form or view the user is accessing. And, as mentioned, actions can be added to default command buttons within the application to modify their default behaviour. However, it\u0026rsquo;s generally considered best practice to create a new command button and define a custom action for it that way.\nRibbon Workbench Overview Because model-driven apps store ribbon definitions as XML files, it\u0026rsquo;s difficult for developers to efficiently work with them and - most importantly - get a visual indication of how they will look post-deployment. As usual in these situations, the great Business Applications community comes to the rescue to help us along. In particular, we can thank Scott Durow for providing the fantastic Ribbon Workbench tool that does exactly what it says on the tin - lets us easily make changes to multiple entity ribbons within an intuitive interface. And, because Microsoft mentions this tool as part of the skills measured for this exam, it provides us with an excellent excuse to discuss it in detail. 😀\nTo download the tool, first, you need to grab a copy of the XrmToolBox, as the Ribbon Workbench is available as a plug-in within the ToolBox. You can also download it as a managed solution from Scott\u0026rsquo;s website, but the XrmToolBox version tends to be better to work with, I find. When running the tool for the first time, you will be prompted to select the solution that you wish to work with, as indicated below:\nWhen working with the Ribbon Workbench, a good tip is to set up a slimmed-down or temporary solution within your Dynamics 365 / Common Data Service environment, containing just the entities whose ribbon you need to modify. Indeed, this is something that the workbench now enforces for an excellent reason; doing so will help to speed up the export/import of your ribbon definitions as you make changes to them.\nWith your solution chosen, we are then greeted with a screen resembling the below:\nIt\u0026rsquo;s worth quickly explaining what each of the respective areas numbered above does in detail:\nWithin the toolbox, you can quickly add numerous different button types onto the ribbon. For example, you can add on a button that expands out into a selection containing multiple sub-options.\nWithin this area, you can view all of the bespoke ribbon customisations you have performed, the underlying XML definition for your changes and any warning/error messages relating to your changes.\nEach of the ribbon components you have customised will appear and can be expanded within the list here. We can also add new Commands, Display Rules and Enable Rules from this area.\nThis section shows all the ribbon buttons that appear by default in the Home area of the application. We can select each button to view its properties or right-click it instead to copy or customise it.\nThis section shows all the ribbon buttons that appear by default when viewing multiple records via a subgrid or view. We can select each button to view its properties or right-click it instead to copy or customise it.\nThis section shows all the ribbon buttons that appear by default when viewing a single record at the form level. We can select each button to view its properties or right-click it instead to copy or customise it.\nThe properties area displays details about the currently selected component.
Clicking any existing button will populate the properties area of the window, thereby allowing you to view and amend it accordingly. You can see an example of how the properties for the Account Save button look below:\nOnce you have finished making changes, you can click the Publish button to import these into the application. Note that this can take several minutes to complete, as the workbench performs a full solution import as part of this.\nFor the components highlighted in 4, 5 and 6, also note that you have an option to toggle whether to apply your changes within the Unified Interface (UI) or the classic interface. In most cases moving forward, the Unified Interface should be the option you go for.\nHaving the Ribbon Workbench to hand whenever you need to perform even the simplest of ribbon customisations is an absolute necessity. The tool will help streamline your development process and highlight issues that would be impossible to spot if you were modifying the raw XML definitions manually. Spending some time understanding how it works and, more crucially, how you can deploy simple ribbon customisations will be essential when tackling the PL-400 exam.\nUsage Cases for Ribbon Customisations Developers should always be prepared to identify situations where ribbon customisations will be required to meet a particular requirement within Power Apps. To help you grasp this, I\u0026rsquo;ve highlighted below a few scenarios where this may be applicable:\nIntegrate with an external application or system by opening a new tab/window directly within a model-driven app (a minimal sketch of this appears towards the end of this post). We could then extend this further by also populating the URL in question with information from the record itself to, for example, open a specific record in another system. Perform multiple, complex Web API or form-level operations via a single button press. Customise the Qualify Lead button\u0026rsquo;s behaviour by replacing it with custom logic that creates an Opportunity row only and not a new Account and Contact row. Hide or prevent users from performing specific actions, such as running on-demand workflows, changing Business Process Flows or activating rows that have already been closed. As always, when building solutions on top of the Power Platform, preference should be given towards a functional or \u0026ldquo;low code\u0026rdquo; solution wherever possible. This advice is particularly pertinent for ribbon customisations, given their potential to significantly modify how the system behaves by default and their propensity to break your model-driven apps if customised incorrectly.\nDemo: Using the Ribbon Workbench to Create a Simple Command Button To better understand how to use the Ribbon Workbench to add a new Ribbon command button to a model-driven app, check out the below YouTube video, where I demonstrate how to do this from start to finish:\nAs we have seen in the past couple of posts in this series, developers can do a surprising amount of tinkering around with the user interface of model-driven Power Apps and within canvas apps.
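To make the first usage case above more concrete, here is a minimal, hypothetical sketch of the kind of JavaScript function a custom command button could call via a JavaScriptFunction action, assuming the command passes the PrimaryControl CrmParameter. The function name, web resource and external URL are illustrative assumptions only and not part of any default ribbon:

function openExternalSystemRecord(primaryControl) {
    "use strict";
    // PrimaryControl, supplied as a CrmParameter on the command, gives us the form context
    var formContext = primaryControl;
    // Strip the surrounding braces from the record GUID before appending it to the external URL
    var recordId = formContext.data.entity.getId().replace(/[{}]/g, "");
    // Open the (assumed) external application in a new browser window or tab
    Xrm.Navigation.openUrl("https://example.com/crm/records?id=" + recordId);
}

A function along these lines would be uploaded as a JavaScript Web Resource and then referenced from the command definition, in much the same way as the default commands shown in the ribbon XML earlier.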
In the next post in the series, we\u0026rsquo;re going to jump into the Extend the Platform area of the exam and show you how to develop C# plug-ins to perform simple or complex server-level actions targeting Microsoft Dataverse.\n","date":"2021-05-09T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-working-with-ribbon-command-buttons/","title":"Exam PL-400 Revision Notes: Working with Ribbon Command Buttons"},{"content":"Welcome to the thirteenth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. Previously, we dived into our first code-related topic in the series by evaluating the capabilities within the client scripting model, accessed primarily via JavaScript. Client-side scripting is not the only potential usage case to trigger custom logic on a form or even override how default controls behave. Developers can implement Power Apps Component Framework (PCF) controls to completely alter how fields, views or other form-based objects operate. And, because they leverage modern web tools/languages, it\u0026rsquo;s easy to get a PCF control deployed out into any Power App. This feature is the focus of the next exam area we will look at in today\u0026rsquo;s post, with the expectation that candidates must demonstrate knowledge of the following subjects:\nCreate a Power Apps Component Framework (PCF) component\ndescribe the PCF component lifecycle initialize a new PCF component configure a PCF component manifest implement the component interfaces package, deploy, and consume the component configure and use PCF Device, Utility, and WebAPI features test and debug PCF components by using the local test harness The development technologies involved as part of PCF components differ significantly from those that developers with previous experience of Dynamics CRM / 365 will likely know. Therefore, let\u0026rsquo;s jump in to review this in further detail and focus on demonstrating how to deploy a simple PCF control out into the platform.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam. I would also recommend that you have a good understanding of JavaScript, HTML/CSS, PowerShell, and working with Visual Studio before you attempt any of the examples provided in this post.\nPCF Controls Overview Stepping back into the world of Dynamics CRM for a few seconds, we can see that our options for modifying the out of the box forms (now a central component within model-driven apps) are somewhat limited. Although developers have a high degree of control over the general layout of a form, our ability to modify how a text field is displayed remains virtually non-existent. Instead, developers would have to revert to building out HTML Web Resource files and then embedding these within a form. No doubt, alongside this, there would be a whole heap of JavaScript interacting with the application\u0026rsquo;s Web API to provide the illusion of a custom interface. And while it is still possible to implement solutions of this nature, it\u0026rsquo;s interesting to note that this entire subject does not even get a single mention as part of the PL-400 exam specification.
I believe this should tell you all you need to know about the potential direction for this feature in future. 😉\nTo help address the traditional needs of Web Resources and, additionally, allow us to overhaul the appearance within a Power App, Power Apps Component Framework (PCF) controls are now here and entirely at our disposal. Because they take advantage of a whole heap of existing tools, developers can quickly bundle together re-usable custom components and deploy them multiple times across a model-driven app. These can alter the behaviour of the elements described earlier (fields \u0026amp; views) and support additional capabilities, such as accessing the application\u0026rsquo;s Web API. Some of the advantages they have over traditional Web Resources include:\nThe ability to bundle all aspects of your project together into a single solution file. Because the application renders custom controls at the same time as the form itself, users get a completely seamless experience using a model-driven app with PCF controls attached. They are developed using modern web technologies, such as NPM, Node.js and TypeScript. Now, PCF controls are great but are not necessarily the most accessible tool to implement. Developers should evaluate the requirements of the overall solution and, where possible, guide an organisation towards introducing a canvas Power App instead if that meets the requirements. To best illustrate how to make this decision, let\u0026rsquo;s consider the following scenarios:\nThe organisation requires a highly tailored area within the Account form to create new Contacts that will then be related to the parent record: Given there are numerous controls to implement, an embedded canvas app is the most straightforward solution. You\u0026rsquo;re managing a multi-lingual instance and need to ensure that all controls display in the user\u0026rsquo;s chosen language: Although this is possible to do via a canvas app, a PCF control is a more logical approach, given that it can hook into the application\u0026rsquo;s localisation settings. The organisation has an existing Node.js/Angular web component on their website that they wish to replicate across into Dynamics 365: In this situation, a PCF control makes logical sense, as we can straightforwardly port this across to the application with minimal effort. Your solution needs to immediately trigger and return feedback from actions triggered via Power Automate flows: As it stands, only canvas apps support this capability. A good Power Platform developer can build out a PCF control without issue; the sign of an excellent developer is one who can identify the appropriate usage cases for them and not necessarily resort to using them as a first-preference solution if others can fit the gap.\nTechnology Areas Involved To a degree, PCF controls are an anomaly when you start looking at the technology areas that make them up. I would forgive you for assuming that they rely solely on .NET-focused programming languages. Instead, they utilise some more recently developed frameworks targeted towards rapid web development. This circumstance can be a potential blocker when traditional .NET developers start learning about PCF controls for the first time, but it is a hump that is worth surmounting. The below list covers all of the languages and tools that help bring PCF controls together.
It is impossible to provide a detailed overview of the inner workings of each of these as part of a single blog post, so I encourage you to perform separate revision in these areas; what follows, therefore, is a basic overview:\nHTML: HyperText Markup Language (HTML) makes up everything to do with the web - including the very page you are reading now. It provides a structured, open standard that bears some similarities to eXtensible Markup Language (XML). It will typically use other components, such as CSS or JavaScript, to provide enhanced styling or functional capabilities to an individual web page. From a PCF perspective, you would reference and work with common HTML constructs, such as divs, inputs and options, when building out your components. Alternatively, you could look to leverage established frameworks such as Fluent UI, which has the advantage of removing a lot of headaches when styling and building out your interfaces. CSS: Whereas HTML can be best thought of as the \u0026ldquo;nuts and bolts\u0026rdquo; of the internet, Cascading Style Sheets (CSS) is the bit that makes it all look pretty and beautiful. Conforming to a JavaScript Object Notation (JSON)-like structure, it allows you to apply a variety of different style rules to individual components that get loaded onto a webpage. For example, you can define whether a button changes colour when a user hovers their mouse over a control. CSS is used within PCF controls to help style and make your controls \u0026ldquo;pretty\u0026rdquo; for users within the application and may be necessary depending on your requirements. For example, you might need to ensure that your component mirrors an organisation\u0026rsquo;s branding guidelines. TypeScript: TypeScript is an open-source language developed by Microsoft, with a lot of similarities to JavaScript. The main benefit it provides, hinted at by its name, is that it\u0026rsquo;s a strongly typed language. This means it has full IntelliSense support within Visual Studio/Visual Studio Code, allowing you to identify and fix issues more quickly. From a compiler standpoint, all code written ultimately executes as JavaScript on a webpage. Our TypeScript files will contain all of the logic and core functionality of a PCF control. NPM: Node Package Manager (NPM) is used within PCF controls to retrieve common, pre-built libraries/packages of functionality so that you do not necessarily have to reinvent the wheel when developing your code component for the first time. For example, rather than building a calendar control from scratch, you could use FullCalendar by downloading its appropriate NPM packages and then referencing the components you need within TypeScript. A traditional Dynamics CRM Developer will perhaps be most familiar with the first two languages on this list. It is, therefore, essential to spend some time learning about TypeScript and NPM to ensure that you can effectively build out a practical PCF component. Thankfully, TypeScript is very similar to JavaScript - another language that developers will need to know well for this exam - so I believe developers of this inclination will have little difficulty making the jump across to TypeScript.\nDevelopment Pre-Requisites Before you can start writing a single line of code, you need to ensure you have ticked a few boxes on your machine:\nYou must have Node.js (which includes NPM as standard) installed on your computer.
During installation, be sure to tick the option to download additional tools, one of which installs Visual Studio 2017 Build Tools. After this has installed, be sure to go into the Visual Studio 2017 installer and add on the individual component NuGet targets \u0026amp; Build Tasks, an additional pre-requisite. You must also have the .NET Framework 4.6.2 Developer Pack installed on your machine. Finally, we must install a specific CLI for developing PCF controls. The CLI will provide a range of commands to build, test and deploy your components as you build them out. You can find further details on all these setup steps on the Microsoft Docs website.\nYou must also make an important decision regarding which integrated development environment (IDE) you wish to use to develop your control - either Visual Studio 2017 (or later) or Visual Studio Code. Although the latter option requires some tinkering, I recommend it over traditional Visual Studio if possible, as it provides a far more streamlined development experience. Also, note that you will need to install the .NET Core 3.1 SDK if using Visual Studio Code.\nOnce you have installed all pre-requisite components, you can then initialise and create your first PCF component project. You can follow the steps outlined in this article to get started.\nReviewing the PCF Component Manifest Just as solutions have high-level properties/metadata that summarises your particular piece of functionality, a PCF component has a manifest that defines general properties relating to your component. Comprised of a single file, called ControlManifest.Input.xml, the manifest looks like this when creating a new component targeting a field:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34; ?\u0026gt; \u0026lt;manifest\u0026gt; \u0026lt;control namespace=\u0026#34;jjg\u0026#34; constructor=\u0026#34;PL400Sample\u0026#34; version=\u0026#34;0.0.1\u0026#34; display-name-key=\u0026#34;PL400Sample\u0026#34; description-key=\u0026#34;PL400Sample description\u0026#34; control-type=\u0026#34;standard\u0026#34;\u0026gt; \u0026lt;!-- property node identifies a specific, configurable piece of data that the control expects from CDS --\u0026gt; \u0026lt;property name=\u0026#34;sampleProperty\u0026#34; display-name-key=\u0026#34;Property_Display_Key\u0026#34; description-key=\u0026#34;Property_Desc_Key\u0026#34; of-type=\u0026#34;SingleLine.Text\u0026#34; usage=\u0026#34;bound\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;!-- Property node\u0026#39;s of-type attribute can be of-type-group attribute. 
Example: \u0026lt;type-group name=\u0026#34;numbers\u0026#34;\u0026gt; \u0026lt;type\u0026gt;Whole.None\u0026lt;/type\u0026gt; \u0026lt;type\u0026gt;Currency\u0026lt;/type\u0026gt; \u0026lt;type\u0026gt;FP\u0026lt;/type\u0026gt; \u0026lt;type\u0026gt;Decimal\u0026lt;/type\u0026gt; \u0026lt;/type-group\u0026gt; \u0026lt;property name=\u0026#34;sampleProperty\u0026#34; display-name-key=\u0026#34;Property_Display_Key\u0026#34; description-key=\u0026#34;Property_Desc_Key\u0026#34; of-type-group=\u0026#34;numbers\u0026#34; usage=\u0026#34;bound\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; --\u0026gt; \u0026lt;resources\u0026gt; \u0026lt;code path=\u0026#34;index.ts\u0026#34; order=\u0026#34;1\u0026#34;/\u0026gt; \u0026lt;!-- UNCOMMENT TO ADD MORE RESOURCES \u0026lt;css path=\u0026#34;css/MB400Sample.css\u0026#34; order=\u0026#34;1\u0026#34; /\u0026gt; \u0026lt;resx path=\u0026#34;strings/MB400Sample.1033.resx\u0026#34; version=\u0026#34;1.0.0\u0026#34; /\u0026gt; --\u0026gt; \u0026lt;/resources\u0026gt; \u0026lt;!-- UNCOMMENT TO ENABLE THE SPECIFIED API \u0026lt;feature-usage\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.captureAudio\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.captureImage\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.captureVideo\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.getBarcodeValue\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.getCurrentPosition\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.pickFile\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Utility\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;WebAPI\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;/feature-usage\u0026gt; --\u0026gt; \u0026lt;/control\u0026gt; \u0026lt;/manifest\u0026gt; The properties in the control node are specified when creating your project for the first time using the pac pcf init command. Typically, the only values in this node you would change are the version, display-name-key and description-key values. You should update the version value each time you deploy an updated version of your component.\nThe manifest includes some additional nodes, which are worth reviewing in further detail:\nproperty: Here, you define the various input properties that the component relies on and will be ultimately bound to when utilised. You can only tie PCF controls to a limited range of field types within the application, so it\u0026rsquo;s essential to know the limitations relating to this. For example, at the time of writing this post, it is impossible to bind a control to a lookup, Status or Status Reason field. You have additional options to specify whether a property requires a value and whether the component should utilise a default value if none is specified. A PCF component can have multiple property nodes defined for it. resources: Within this node, you must list all files that the component relies on to function correctly - whether this is additional HTML, CSS, or even other file types, such as images. Only one TypeScript file can be listed here, but you can have as many of the other file types as your component needs to function correctly.
feature-usage: Here, you can enable/disable additional features that your component can utilise within a model-driven app. By default, all of these are disabled, so you will need to uncomment the specific ones you need. Note, in particular, that you must explicitly enable the WebApi for use in your component before you can start calling its appropriate list of methods. Just as you need to spend some time updating your solution\u0026rsquo;s properties each time you release it, you should anticipate working in the manifest often. For the exam, having a good awareness of the manifest schema and how to enable/disable various properties within your code component should be sufficient.\nEnd to End Development Cycle: Building, Debugging and Deploying a PCF Control Microsoft provides an excellent online tutorial that talks you through the steps involved to build your very first PCF Control, so there is no point in me repeating this verbatim. However, I sometimes find the best way to understand a process is to see it in action. With this in mind, check out the video below, where I will guide you through all the steps involved to deploy out your very first PCF control into a model-driven app. The video will also show you how to debug your code component using the Power Apps Component Framework Test Environment:\nWeb API As mentioned earlier, PCF controls can interact with the Microsoft Dataverse Web API to perform platform-level operations. As with JavaScript, we do not need to worry about authentication when accessing the Web API in this way, but the list of available methods (at the time of writing this post) is significantly smaller. The current list of available operations is as follows and covers all basic CRUD operations targeting the application:\ncreateRecord deleteRecord retrieveMultipleRecords retrieveRecord updateRecord This state of affairs means that if, for example, you need to perform operations such as an Execute request, you would instead have to resort to a solution that uses client-side scripting.\nUsing PCF Controls within Canvas Apps and Portals As noted earlier, PCF controls are supported within canvas Power Apps as well. This capability allows developers to utilise an existing PCF control within their canvas apps straightforwardly without rewriting their code. To find out more about how to get started with this feature, check out the following Microsoft Docs article. It\u0026rsquo;s also worth noting that PCF controls will be coming to Power Apps portals in the future, given this functionality is currently in preview. This topic may (eventually) be assessed as part of the PL-400 exam.\nFurther Tools / Resources I\u0026rsquo;ve linked to numerous Microsoft Docs articles in this post surrounding PCF control development, and these should always be your first port of call when getting started on the subject. I would also recommend that you check out the following other resources too:\nPCF Gallery: A free community website operated by Guido Preite, this site is an excellent resource for finding free PCF controls that you can utilise or experiment further with. Dynamics Ninja Blog: Ivan Ficko has blogged and delivered many sessions relating to PCF controls, and has numerous examples that he\u0026rsquo;s created. If you\u0026rsquo;re looking to go the next step with your PCF development, he is your man! Todd Baginski\u0026rsquo;s YouTube Channel: Todd has also done videos on developing a PCF control and debugging your components, which I would recommend watching.
Dianamics PCF Lady: Diana Birklebach\u0026rsquo;s blog is an excellent resource for when you need to dive deep into the technical inner workings of PCF controls. Diana\u0026rsquo;s knowledge of PCF controls is staggering! 🤯 Professional PCF for Model-Driven Apps: You should already be aware of the fantastic work that Scott Durow has done for the Business Applications community. This (paid) course provides a unique opportunity to learn how to build an effective and professional PCF control for your apps and is well worth the investment. PCF controls are an exciting new area within the Power Platform. However, they are also a feature that is constantly changing. Be sure to refer to the latest documentation, as Microsoft releases new capabilities for PCF controls all the time. Next time, we round off our discussion on extending user interfaces by looking at how to set up a custom command button within a model-driven Power App.\n","date":"2021-05-02T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-creating-a-power-apps-component-framework-pcf-control/","title":"Exam PL-400 Revision Notes: Creating a Power Apps Component Framework (PCF) Control"},{"content":"I have a confession to make. In my five years working with Dynamics CRM, its successor, Dynamics 365 Customer Engagement (CE) and, more recently, the Power Platform and Microsoft Dataverse, I have never once set up, worked with or leveraged custom actions. Of course, I had a sketchy awareness of what they could do (as that\u0026rsquo;s the only way I would\u0026rsquo;ve been able to pass exams! 😏) and had heard multiple people extolling their virtues. But I had never once been tempted to look at them further or, perhaps more crucially, found a customer requirement that I thought they could be a good match for. Notwithstanding all of this, they remain an undoubtedly powerful tool for developers looking to craft their own bespoke Messages within a Dynamics 365 CE / Microsoft Dataverse environment, which can then support their own set of input or output parameters. These messages can then be called via code or by functional specialists via classic workflows or Power Automate flows. Impressive stuff, to be sure.\nSo with the recent hullabaloo regarding Custom API\u0026rsquo;s, I committed to embracing and learning more about them, if only to make up for my previous stubbornness against custom actions. Now, I can finally see what all the fuss is about. 😁 We can best think of Custom API\u0026rsquo;s as a new and improved variant of custom actions, and they do share a lot of the same capability. However, they have a few specific advantages that may make them preferable to utilise:\nIt is possible to define a set of privileges a user must have before calling a Custom API. Custom Actions have no limitation and, therefore, could be subject to users executing them without regard to a formal process or any restrictions on their user account. Custom API\u0026rsquo;s can be marked as private, thereby preventing other users or developers from using them. Custom Actions must always be public, in comparison. Unlike Custom Actions, Custom API\u0026rsquo;s support more modern features, including creating OData functions, binding an operation to table collection and providing localised labels covering different languages. Custom actions very much remain a supported and acceptable approach for you to adopt. 
But I would urge developers to review the capabilities within Custom API\u0026rsquo;s first, and if they can meet your particular needs, use them over custom actions as much as possible.\nI recently built out my first Custom API and had a requirement to call this via a JavaScript function tied to a Ribbon button. Straightforward stuff if you are using a custom action - call the Xrm.WebApi.online.execute method and refer to the convenient sample Microsoft provide for such a situation. However, looking at the documentation, it is unclear how you would do the same for a Custom API. As it turns out, the process is remarkably similar to custom actions, which might not be too surprising. Let\u0026rsquo;s assume we have a Custom API setup that looks a little something like this:\nIt has three request parameters defined like so:\nSampleGUID: Set up with a type of Guid SampleBoolean: Set up with a type of Boolean SampleString: Set up with a type of String And a single output (response) parameter - EntityID - which, in this case, we assume links back to a record that our Custom API will create. We can now use code similar to the below to call the API and pass in the required parameters:\nif (typeof (CRMChap) === \u0026#39;undefined\u0026#39;) {var CRMChap = {__namespace: true};} var Sdk = window.Sdk || {}; /** * Sample custom API properties * @param {boolean} sampleBoolean - Sample data value of type boolean * @param {GUID} sampleGUID - Sample data value of type GUID * @param {string} sampleString - Sample data value of type string */ Sdk.jjg_SampleCustomAPI = function(sampleBoolean, sampleGUID, sampleString) { this.SampleBoolean = sampleBoolean; this.SampleGUID = sampleGUID; this.SampleString = sampleString; }; // NOTE: The getMetadata property should be attached to the function prototype instead of the // function object itself. Sdk.jjg_SampleCustomAPI.prototype.getMetadata = function () { return { boundParameter: null, parameterTypes: { \u0026#39;SampleGUID\u0026#39;: { \u0026#39;typeName\u0026#39;: \u0026#39;Edm.Guid\u0026#39;, \u0026#39;structuralProperty\u0026#39;: 1 // Primitive Type }, \u0026#39;SampleBoolean\u0026#39;: { \u0026#39;typeName\u0026#39;: \u0026#39;Edm.Boolean\u0026#39;, \u0026#39;structuralProperty\u0026#39;: 1 // Primitive Type }, \u0026#39;SampleString\u0026#39;: { \u0026#39;typeName\u0026#39;: \u0026#39;Edm.String\u0026#39;, \u0026#39;structuralProperty\u0026#39;: 1 // Primitive Type } }, operationType: 0, // This is an action. Use \u0026#39;1\u0026#39; for functions and \u0026#39;2\u0026#39; for CRUD operationName: \u0026#39;jjg_SampleCustomAPI\u0026#39;, }; }; CRMChap.RibbonFunctions = { callCustomAPISample: function (formContext) { \u0026#39;use strict\u0026#39;; //Get current record ID (i.e. a GUID we can pass) var guid = formContext.data.entity.getId(); //Call the Custom API var jjgSampleCustomAPIRequest = new Sdk.jjg_SampleCustomAPI(false, guid, \u0026#39;This is a test\u0026#39;); Xrm.WebApi.online.execute(jjgSampleCustomAPIRequest).then( function (result) { result.json().then( function (response) { //TODO: Add your logic here.
In this case, we may want to read the output parameter defined using the following code: var entityID = response.EntityID; } ); }, function (error) { Xrm.Navigation.openErrorDialog({ details: error.message, message: \u0026#39;An error occurred while calling the jjg_SampleCustomAPI Custom API.\u0026#39;}); } ); }, __namespace: true }; As we can see, the approach is virtually indistinguishable from the one we would follow when working with custom actions - which, I think, gives developers even more reason to check them out, start using them and unlock some of the benefits they can bring to the table.\n","date":"2021-04-25T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/calling-custom-apis-via-javascript-form-ribbon-functions-microsoft-dataverse-power-apps/","title":"Calling Custom API's via JavaScript Form / Ribbon Functions (Microsoft Dataverse / Power Apps)"},{"content":"Welcome to the twelfth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. Last time, we finished our discussion regarding the various business process automation tools in the Power Platform by reviewing Business Rules and Business Process Flows. Today, we take a look at our first topic that involves us writing code. 🤓 This delay may seem strange for an exam aimed towards developers, but it underlines the importance of considering and leveraging functional solutions, as opposed to technical solutions, in the Power Platform, wherever possible. The exam area we look at today is titled Extend the user experience, which has a total weighting of 10-15% in the exam and whose first subject area covers the following topics:\nApply business logic using client scripting\ncreate JavaScript or TypeScript code that targets the Client API object model register an event handler create client-side scripts that target the Dataverse Web API Covering how to write code using JavaScript or TypeScript, or every single method/function exposed within a model-driven app\u0026rsquo;s Web API, would be impossible as part of a single blog post. Therefore, we will instead focus on the fundamental aspects unique to the Power Platform, with specific reference towards the steps involved to deploy out a form function successfully. I, therefore, recommend you have a good general awareness of the fundamental principles behind JavaScript or TypeScript before reading this post any further. And, as with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam.\nForm-Side Scripting: Why and When We Need It In an earlier post in the series, we discussed the usage cases and features available as part of Business Rules. It\u0026rsquo;s important to reference back to this for two reasons. First of all (and did you know?), underneath the hood, Business Rules implement many of the form-side scripting features available to developers using JavaScript or TypeScript. Therefore, much of the functionality they can achieve - showing/hiding fields, changing their business required level, etc. - is also available to developers writing form scripts. Secondly, you should always fully utilise and exhaust the capabilities of Business Rules before contemplating writing a single line of code.
They provide you with a fully supported and far more straightforward way of accommodating basic requirements relating to form presentation and automating basic tasks.\nImpressive though they are, there will be occasions where a Business Rule is just not going to cut it. Consider the following requirements:\nYou need to dynamically change the display label of a field based on whether a particular value exists on the current record. The general structure of the form needs to be modified, depending on what type of form is loaded (i.e. a new record form, a read-only form, etc.). You need to execute a Web API query to obtain details from another record in the system. When a user saves a record, you need to perform some additional validation and, if required, prevent the save action from occurring and present an error to the user. For these \u0026ldquo;advanced\u0026rdquo; scenarios, it\u0026rsquo;s impossible to utilise a Business Rule effectively to meet the requirements, meaning we must instead resort to writing custom code. Again, and I cannot stress this enough, your typical development workflow when evaluating what the business is asking for is first to review and confirm, without a shadow of a doubt, that you cannot address the requirement via a Business Rule; once you have done this, you then have my (and indeed Microsoft\u0026rsquo;s) blessing to start typing code 😉\nWhat is the Web API? To help with automating critical operations when working within the application and communicating into the application from an external system, Microsoft provides us with an OData version 4 compliant endpoint, through which developers can execute a variety of HTTP requests. As well as exposing key CRUD (Create, Read, Update and Delete) operations, developers can also use the Web API to execute batch operations, impersonate another user or call functions or actions. Developers can use any language of their choice to interact with the Web API when calling it externally, from outside the application. An example of a request to create a new Account record, provided courtesy of Microsoft, can be seen below:\nPOST [Organization URI]/api/data/v9.0/accounts HTTP/1.1 Content-Type: application/json; charset=utf-8 OData-MaxVersion: 4.0 OData-Version: 4.0 Accept: application/json { \u0026#34;name\u0026#34;: \u0026#34;Sample Account\u0026#34;, \u0026#34;creditonhold\u0026#34;: false, \u0026#34;address1_latitude\u0026#34;: 47.639583, \u0026#34;description\u0026#34;: \u0026#34;This is the description of the sample account\u0026#34;, \u0026#34;revenue\u0026#34;: 5000000, \u0026#34;accountcategorycode\u0026#34;: 1 } Typically, a developer may use tools such as Postman when building out their sample requests, as this provides some useful options to ease you along.\nWithin the context of developing client-side scripts, Microsoft provides a shorthand mechanism of working with the Web API to carry out everyday functions. Although the methods exposed here are not extensive compared with dealing with the Web API directly, developers do not need to worry about authentication when working with the Web API in this way. Therefore, you should arguably be able to accommodate most requirements using Xrm.WebApi.\nPutting together these types of requests can be tedious and time-consuming. Fortunately, there is a great community tool available from Jason Lattimer, called the CRM REST Builder.
The builder provides a graphical interface that you can use to build your code snippets each time and then test them within the browser. You can see a screenshot of it below, where I\u0026rsquo;ve built out a sample request to query some Contact records and how the resulting code snippet looks when generated:\nXrm.WebApi.online.retrieveMultipleRecords(\u0026#34;contact\u0026#34;, \u0026#34;?$select=fullname\u0026amp;$filter=fullname ne null\u0026amp;$orderby=fullname asc\u0026#34;).then( function success(results) { for (var i = 0; i \u0026lt; results.entities.length; i++) { var fullname = results.entities[i][\u0026#34;fullname\u0026#34;]; } }, function(error) { Xrm.Utility.alertDialog(error.message); } ); For the exam, having a good awareness of the various operations you can perform against the Web API, the format of requests, how responses back are formatted and, finally, how to write OData queries targeting the Web API endpoint will hold you in good stead.\nexecutionContext: Attention Dynamics CRM Developers! Those with a previous background developing for on-premise Dynamics CRM deployments should take particular note here. In earlier versions of this application, developers would be most familiar with the various Xrm methods to perform common actions. For example, the following form function would allow you to change the display labels on a composite address control using the Xrm.Page.getControl method:\nfunction changeAddressLabels() { Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); } In early 2020, Microsoft announced that the Xrm methods, including the one referenced above, are now deprecated. 
Also, attempting to use these methods within model-driven apps will likely cause errors. To get around this, developers can now take advantage of the Client API form context object, accessible from within any form. By using this, we can rewrite the above code to something that will be fully supported moving forward:\nfunction changeAddressLabels(executionContext) { //Get formContext var formContext = executionContext.getFormContext(); //Check to see if the control is on the form and, if so, rename it accordingly. if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line1\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line2\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line3\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_city\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_stateorprovince\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_postalcode\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_country\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line1\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line2\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line3\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_city\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_stateorprovince\u0026#34;))\tformContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); if 
(formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_postalcode\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_country\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); } Microsoft has not yet announced a timeline for eventually removing the Xrm methods permanently from the application. Still, it would be best not to use it as part of any new projects or work moving forward.\nExposed Event Handlers Understanding the various event handlers we can \u0026ldquo;hook\u0026rdquo; into when developing client-side scripts is crucial. Event handlers summarise a particular action a user carries out against a form - saving a record, changing the value of a field or opening/expanding a form tab. As users carry out each of these actions, developers can then execute their desired logic within their code. At the time of writing this post, the following event handlers are made available for use:\nOnChange: Each attribute (column) that exists on a form supports this event handler, which triggers as soon as a user changes a value and clicks on another control. Although it\u0026rsquo;s possible to call a function with your logic directly using the OnChange event, Microsoft advises that you use the addOnChange or removeOnChange methods to add or remove form functions, respectively. OnLoad: Rather self-explanatory perhaps, but as soon as a form or its data has fully loaded for a user, you can then call function(s) to perform your desired logic. It\u0026rsquo;s worth noting that there are technically two separate OnLoad events - one that fires as soon as the form itself loads and a second once all underlying data loads for the first time or refreshed/updated. Again, Microsoft provides specific functions, such as formContext.data.addOnLoad(), to allow you to bolt on or remove your logic accordingly. OnSave: Again, no prizes for guessing what this does 😀 Developers can bolt on their custom logic, using the addOnSave and removeOnSave methods whenever a user or some custom code attempts to save the record. It\u0026rsquo;s worth thoroughly reading and understanding all of the potential situations under which this can occur, including ones automatically determined by the systems auto-save functionality. A robust capability with this event handler is the ability to prevent the save action from completing when a user violates a particular business condition. PreSearch: This event handler is limited to lookup field controls only. You can only use this in conjunction with a single method - addCustomFilter - to dynamically alter the list of records a user can select. OnResultOpened: Limited for use within the knowledge management area of Dynamics 365 Customer Service, you can use this event handler to execute functions when a user opens a knowledge article from the search box or via a pop-out action. Similar to all other event handlers, you are provided with a function to add and remove functions to execute. OnSelection: Again, this is a specific knowledge management event handler for when a user selects a knowledge base article with the appropriate methods to add and remove functions. 
PostSearch: Finally, and once again, specifically concerning knowledge management capabilities, you can use this event handler to execute custom logic as soon as results return via a knowledge article search. And - you guessed it - the addOnPostSearch and removeOnPostSearch methods allow you to control when and where your custom functions execute. As you can see, all event handlers support the ability to add/remove one or multiple functions that the platform will then execute accordingly. This should be your preferred mechanism to use at all times.\nDevelopers are free to add as many as 50 event handlers for each event that occurs on a form; however, I would caution any solution that utilises so much custom code on one form. If you find yourself in this situation, I would encourage you to instead look at other options, such as a canvas Power App or a Power Apps Component Framework (PCF) control. More on this subject in the next post in this series 🙂\nTo find out more about event handlers and how they work, consult the following Microsoft Docs article.\nDeploying a Form Script So knowing how to write JavaScript form functions and having a good awareness of the various event handlers that are exposed gets you pretty much there and ready to start building out your first form script. However, you need to first understand the importance of Web Resources as part of all this. For a long time now, Web Resources have provided developers with the mechanism to deploy out several different types of custom components - whether they be images, HTML files and, as you might expect, JavaScript files. A typical deployment process for a new JavaScript file will involve the following steps:\nNavigate to your target solution within the Power Apps portal. Select New -\u0026gt; Other -\u0026gt; Web Resource. The New Web Resource window will then load within the \u0026ldquo;classic\u0026rdquo; interface. Provide a helpful name and description value for the new Web Resource, and ensure the Type is set to Script (JScript). The Text Editor button should appear. After pressing the Text Editor button, type in or copy/paste your JavaScript into the window and press OK. Save and then publish the Web Resource. Although you have now successfully uploaded your JavaScript file into the application, it will not be doing anything in this state. We must next navigate to the table form where we would like it to be triggered from and add on the Web Resource discussed in the previous steps under the Form libraries area on the left-hand side of the screen:\nNext, we can then select the Events tab on the right-hand side of the screen to attach our function to our desired event:\nHere, we must specify several options:\nThe name of the library, i.e. the Web Resource. The name of the function to call. If the function is consuming the execution context, then the Pass execution context as first parameter option must be ticked. If any additional static parameter values need specifying for the form function, you can also define these here as a comma-separated list. If we want to enforce dependencies between the function and the fields it relies on, we can do this by navigating into the \u0026ldquo;classic\u0026rdquo; form designer instead. 
Doing so will prevent other users from accidentally removing these fields from the form.\nAfter we publish the form with all the latest changes, the newly created form function will start triggering whenever the appropriate event handler occurs - nice!\nCommon Form Functions It is impossible to go into detail regarding every single client-side function that you can utilise. Instead, I wanted to highlight some of the ones that you may find yourself reaching for most often. I\u0026rsquo;ve deliberately chosen to exclude any function(s) that can be accomplished via a Business Rule instead, for the reasons I\u0026rsquo;ve already alluded to earlier in this post.\nformContext.data refresh: As well as allowing you to refresh all data currently loaded onto a form, you can also optionally save the current row as part of the same action. For this reason, it is far more versatile and a preferred option when compared with save. formContext.data.entity getEntityReference: This allows you to capture a lookup (array) object containing details of the currently loaded row. This function is useful if you wish to store details about the current record locally to then populate as part of a lookup field later on. getId: Returns the Globally Unique Identifier (GUID) value of the currently loaded record. formContext.data.process setActiveProcess: This allows you to change the currently selected Business Process Flow (BPF) to a different one, provided the user has access to it. moveNext / movePrevious: Both of these functions allow you to forcibly move the user forwards or backwards on a BPF. formContext.ui getFormType: Returns a value indicating the type of form the user is currently on. For example, you can determine whether a user is creating a row, updating an existing one or viewing a row that exists in a read-only state. setFormNotification: This lets you display an informational, warning or error message to a user. Use of this function is generally preferred over alert(), particularly given that it comes with some nice options. formContext.ui.formSelector: Contains three functions - getId, getLabel and navigate - which, when used in conjunction, allow you to change which form is presented to an end-user dynamically. formContext.ui.process setVisible: Allows you to toggle the visibility of a BPF on a form. formContext.ui.tabs setDisplayState: Allows you to toggle whether a tab is shown or collapsed on the form. formContext.ui.sections setVisible: Using this, you can determine whether a form section remains visible to a user or not. Xrm.WebApi retrieveRecord: In situations where you need to validate information on a related row within the application, you can use this function to return details regarding this, using an OData system query. To do well in the exam, you need to have a broad understanding of all potential form scripting capabilities, so I urge you to study the complete list of available functions in greater detail and experiment further with their usage.\nDemo: Deploying a Basic JavaScript Form Function In the video below, we\u0026rsquo;ll take a pre-authored JavaScript form function and demonstrate how this can be deployed out and debugged within the application:\nThe sign of an excellent Power Platform developer is when they use JavaScript / TypeScript form functions appropriately, after exhausting all other available options, such as Business Rules or Power Automate. Take care not to always resort to a code-first solution when building on top of the Power Platform.
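As a final illustration - and bearing the caveat above firmly in mind - here is a minimal, hypothetical sketch that ties a few of the functions listed earlier together. The function name, column names and message text below are invented purely for the example, and you would register the function against a suitable event handler (for instance, OnLoad), remembering to tick the Pass execution context as first parameter option:

function flagMissingCreditLimit(executionContext) {
    var formContext = executionContext.getFormContext();
    // Only run the check when updating an existing row (form type 2 = Update).
    if (formContext.ui.getFormType() !== 2) {
        return;
    }
    // "customerid" is an illustrative lookup column assumed to point at an Account row.
    var customerAttribute = formContext.getAttribute("customerid");
    var customerRef = customerAttribute ? customerAttribute.getValue() : null;
    if (!customerRef || customerRef.length === 0) {
        return;
    }
    // Retrieve the related Account via the Web API and warn the user if no credit limit is set.
    var accountId = customerRef[0].id.replace(/[{}]/g, "");
    Xrm.WebApi.retrieveRecord("account", accountId, "?$select=creditlimit").then(
        function (result) {
            if (!result.creditlimit) {
                formContext.ui.setFormNotification("The related Account has no credit limit set.", "WARNING", "creditLimitWarning");
            }
        },
        function (error) {
            formContext.ui.setFormNotification(error.message, "ERROR", "creditLimitError");
        }
    );
}

Notice that the sketch checks the attribute exists and fails gracefully if it does not - the same defensive pattern used in the address label example earlier in this post.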
Next in the series, we\u0026rsquo;ll see how we can use the Power Apps Component Framework (PCF) to extend our Power Apps further.\n","date":"2021-04-18T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-implementing-client-side-scripting-on-model-driven-power-apps/","title":"Exam PL-400 Revision Notes: Implementing Client-Side Scripting on Model Driven Power Apps"},{"content":"Welcome to the eleventh post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. Last week, we took a look at the capabilities on offer as part of Power Automate flows. We now look to finish up our discussion of the exam area Configure business process automation by evaluating the following topics:\nImplement processes\ncreate and configure business process flows create and configure business rules create, manage, and interact with business process flows by using server-side and client-side code troubleshoot processes The tools referenced above - Business Process Flows and Business Rules - are perhaps most advantageous from a client-side perspective. They also have powerful features that can address common automation scenarios for actions performed at the server level. Let\u0026rsquo;s dive in and see what they are all about!\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam.\nBusiness Process Flows Overview In our discussion around model-driven Power Apps, we highlighted that these are, in effect, data-driven applications. As such, following an approved business process within them becomes necessary. Achieving this objective ensures you can guide end-users towards the correct outcome for, let\u0026rsquo;s say, a sales process and guarantee the population of accurate data to progress to the next stage. Consider another scenario involving a case management process; a service desk manager needs to ensure that a case proceeds according to any agreed SLA\u0026rsquo;s with an end customer and, where appropriate, they must monitor how long a Case has resided within a specific stage. Without the ability to comfortably accommodate these scenarios, it becomes incredibly challenging to meet the business and end customers\u0026rsquo; expectations.\nBusiness Process Flow\u0026rsquo;s (or BPF\u0026rsquo;s) aim to address these concerns by allowing developers to model out and enforce a business process effectively. You can then apply this at the table level within a model-driven Power App. As part of this, we can tailor many aspects of either an existing or new BPF and integrate the tool alongside other features within the Power Platform. The screenshot below illustrates a BPF called the Lead to Opportunity Sales Process, which is associated with the Lead table:\nUsers have access to the following features when using a BPF, indicated by the numbers above:\nHere, the user can view details about the name of the BPF and also how long the current process has been active; in this case, for around five months. A BPF is structured by stages, which the user can click on to view the further details required to proceed to the next step. 
In this case, having expanded the Qualify stage, the user is prompted to provide additional information, such as Estimated Budget and details of any existing Contact or Account row. Users can expand any active/inactive stage to review the details required. The stage which is coloured is the current, active stage in the process, which, in this case, would be the Qualify stage. We can also see the amount of time the BPF has resided in this particular stage. Users have several options underneath the Process dropdown field on the ribbon. You can choose to switch to another available process or even abandon the current BPF entirely. Abandoned BPF\u0026rsquo;s will be marked clearly within the application, as indicated in the screenshot below: The arrows at either end of the BPF allow you to toggle the current, focused stage. Note that they do not move the process to the next step but will enable you to preview the details needed as part of the next/previous stage. Some other valuable features regarding BPF\u0026rsquo;s are worth highlighting at this stage:\nBPF\u0026rsquo;s can span multiple tables if required. In the example shown earlier, the BPF is designed to \u0026ldquo;crossover\u0026rdquo; from the Lead to the Opportunity table as you progress through each stage. Because BPF\u0026rsquo;s integrate alongside security roles, it is possible to dictate which process applies for a subset of users within the application. Most system tables are enabled for use alongside BPF\u0026rsquo;s, with some exceptions. Consult the documentation for further details. There are no restrictions on their usage for custom tables. BPF\u0026rsquo;s have some specific limitations, namely: A table can only have ten active BPF\u0026rsquo;s at any one time. Consider deactivating any BPF\u0026rsquo;s that are no longer in use should you hit this limit. A BPF is limited to a maximum of 30 stages. Although BPF\u0026rsquo;s have full multi-table support, you are limited to using a maximum of 5. All in all, BPF\u0026rsquo;s are incredibly easy to use and, as we will see next, we can also create them with startling ease.\nBusiness Process Flow Designer Similar to the model-driven app designer, we have an interactive editor available to create or modify a BPF, illustrated below:\nAgain, I have numbered each relevant section and provided a description below regarding its purpose:\nExpanding the arrow here will allow you to modify the name and description of the BPF, alongside details regarding some of its fundamental properties (owner, Primary Entity etc.) The icons here let you quickly add components to your BPF or perform standard operations, such as cut, copy, paste and delete. On these, it is worth highlighting that the familiar keyboard shortcuts for each action will also let you perform it. Finally, you can use the Snapshot button to download a .png image file of the BPF, which you can then include as part of any documentation or training materials. From here (in order left to right), you can save your BPF, validate it, save a new copy, activate or modify its display order for users, grant/deny access to it or access some of the available help articles for the Power Platform. Expanding the ellipses will also allow you to share an email link for the BPF, show any associated dependencies or view specific properties relating to it. These options let you zoom in/out and adjust the view to fit the visual editor\u0026rsquo;s canvas. 
As discussed earlier, a BPF can contain several stages, represented like this on the canvas view. By default, a new BPF will always include a single stage. Clicking on it will allow you to modify its name, category and, for the second stage onwards, its associated table. A stage can contain two subcategories of additional components, the first of which is\u0026hellip; \u0026hellip;the data steps or fields to be populated. You can add multiple of these to each stage, and you can define their order. The columns available for selection must exist on the table associated with the current stage (i.e. it is impossible to reference related table columns). Finally, you can also define a name for the step, which can differ from the field\u0026rsquo;s display name. The next type of component that you can add to a stage is\u0026hellip; \u0026hellip;a workflow step. We\u0026rsquo;ll discuss this component type in further detail shortly. Here, you can see a visual representation of your current BPF. You can hide this window by using the button on the top right. Finally, the right-hand pane allows you to access the Components toolbox or the currently selected component\u0026rsquo;s Properties. In this example, because we have the 1st stage selected, the Properties pane for this surfaces. Component Overview We\u0026rsquo;ve alluded to this topic already, but it is worth discussing in detail the complete list of different component types that can be utilised within a BPF, as outlined below:\nClicking or dragging each component will add it into the appropriate area on the canvas designer. Let\u0026rsquo;s dive in and discuss each one:\nFlow: Stage: Requires no further explanation, I think. 🙂 Condition: With this, you can specify branching rules that will modify how the Process proceeds for the user based on conditions that you specify. Evaluation of conditions is performed on a similar basis to Business Rules, in that you can select one or multiple fields on the current table to evaluate, as well as implementing some basic AND/OR evaluation logic. Once a condition is defined, you must then populate details for each subsequent stage, for when the condition evaluates to both true and false. Through the correct use of conditions, you can potentially consolidate several BPF\u0026rsquo;s into one and, ultimately, achieve the same outcome. Composition: Data Step: A concept mentioned earlier, data steps are the fields that the user must populate for a given stage. Workflow: Via this feature, it is possible to automatically trigger a workflow to execute, either when the stage commences or finishes. This feature could be helpful if, for example, you would like to send an email out to a sales manager after a user successfully qualifies a Lead. We can only use workflows with a BPF if activated and configured as on-demand for the same table that the current stage targets. Action Step: Operating on a similar basis to the Workflow component, this allows you to trigger an Action instead. Flow Step (Preview): Finally, similar to the previous two-component types, it\u0026rsquo;s also possible to trigger a Power Automate flow within a BPF stage. As a feature that is (at the time of writing) still in preview, it is generally not recommended for use within a production environment. As such, BPF\u0026rsquo;s can be incredibly powerful when integrated alongside tools such as Power Automate flows, allowing you to automate substantial aspects of a business process as a user is working through each appropriate stage. 
From a developers standpoint, the emphasis is to utilise these tools wherever possible and only resort to custom code if you cannot achieve the business requirement natively within the platform.\nThe Hidden Table The final thing to note with all BPF\u0026rsquo;s is that, upon creation, the system will automatically create a new table in your Dataverse environment. This table will have the same name as the BPF in question and, for each active process that users create in the system, the system will create a corresponding row within this table. The table has several custom columns that capture a range of valuable properties, several of which we can access easily within the application. For example:\nActive Stage (activestageid): The name/details of the current, active stage on the BPF instance. Active Stage Started On (activestagestartedon): The date on which a user selected the current, active stage for the BPF instance. Completed On (completedon): The date on which a user completed the BPF instance. Duration (duration): This field indicates the time between the start and completion date of the BPF instance. Also, the system will create lookup fields for each appropriate table row associated with the BPF.\nDevelopers can freely utilise this table as part of any bespoke solution they develop and also, if required, create additional columns, views etc., with no restrictions. I recommended that you always package up any BPF table with the same solution where your BPF exists so that you can ensure the successful deployment of all applicable customisations.\nExtensibility Options As developers, we have a few options for working with Business Process Flows within code - either from the client-side perspective (i.e. a model-driven app form) or from the server-side (i.e. via a plug-in or the Web API). We will be diving into these specific topics in later posts, but it\u0026rsquo;s worth briefly touching upon how we can work with Business Process Flows in this context:\nVia the various formContext.ui.process methods exposed to us, we can modify multiple details relating to the currently assigned process to change its display state on the form or hide it completely. As mentioned already, the hidden table stores details about each instance of a Business Process Flow. We must work with this programmatically if we wish to do things such as automatically moving a process to the next stage. This entity will record an activestageid field that we can update to meet this specific scenario. Microsoft has provided a whole article devoted to this subject, which is well worth a read.\nDemo: Designing and Interacting with a Business Process Flow To help familiarise yourself with some of the concepts discussed in this post, take a look at the below video, where I will show you how to create a BPF and interact with it as part of a model-driven app:\nBusiness Rules Overview In days gone past, when there was a need to carry out more complex, logic-based actions directly on a model-driven Power App form, client-side scripting via JScript form functions would have been the only mechanism to satisfy this requirement. Examples of the types of things I mean by this include setting the value of a form\u0026rsquo;s field based on when another field changes, showing/hiding fields or even displaying error messages when a user violates a business condition. 
Given the specialist knowledge area involved, having to bring in someone with the required technical knowledge to achieve what I certainly see as basic requirements can introduce a degree of complexity and technical/monetary cost into a project or the solutions ongoing maintenance.\nFor this reason, Microsoft provides a route to satisfy some of the common scenarios highlighted above via Business Rules. These offer a built-in and supported mechanism to implement client-level or even platform-level operations when certain conditions are satisfied. They are technically classed as a Process by the application and, like classic Workflows, must always be bound to a single table on creation. From an execution standpoint, they almost certainly interact and work with the same client-side API\u0026rsquo;s Microsoft expose to developers writing JScript form functions, but with a reduced risk of causing potential end-user errors when you implement them. They can also negate the need for prolonged testing or factoring in as part of a system upgrade, as they should \u0026ldquo;just work\u0026rdquo; in all circumstances. In short, an effective Power Platform developer should use and push Business Rules to their absolute limit before even considering writing a single line of JScript. This is a topic I have banged on about evangelised over previously on the blog, and it is worth repeating more than ever, particularly in the context of the PL-400 exam.\nBusiness Rule Visual Editor Business Rules are created in the Power Apps maker portal by navigating to your target table and selecting the Business Rules tab. From here, the visual editor will load, represented in the screenshot below:\nThe editor contains many buttons and areas that it\u0026rsquo;s worth getting more familiar with, so you can fully appreciate Business Rules\u0026rsquo; capabilities. Each one has been numbered and explained in detail in the list that follows:\nExpanding the arrow will allow you to define a name (mandatory) and a description for your Business Rule. I highly recommend always providing a helpful description of any Business Rule, to assist any future developers or to even give your memory a jog down the road. 😀 From this area, you can save your Business Rule, validate its current structure to highlight any warnings/errors, define the Business Rule\u0026rsquo;s scope (discussed in more detail shortly) or access some of the available help articles for the Power Platform. The icons here let you quickly add on an additional component to your Business Rule or perform standard operations, such as cut, copy, paste and delete. On these, it is worth highlighting that the familiar keyboard shortcuts for each action will also let you perform it as well. Finally, you can use the Snapshot button to download a .png image file of the Business Rule, which you can then include as part of any documentation or training materials. The visual editor\u0026rsquo;s central part allows you to select, drag \u0026amp; drop or re-order any added components via your computer\u0026rsquo;s mouse. Selecting each component will also display its underlying properties within the area highlighted in 6 on the screenshot. For example, the properties tab for the Lock/Unlock action resembles the following: Here, you can toggle between the Components and Properties tab. The Components tab, when selected, will display a list of all possible components you can add to your Business Rule. The following section will discuss all of these in further detail. 
Traditionally, Business Rules would be specified and built out using a text-based view, defined within IF\u0026hellip;THEN\u0026hellip;ELSE logic. This experience is persisted as part of the new visual editor experience to provide a precise and straightforward mechanism for you to validate your Business Rule\u0026rsquo;s logic. Finally, the options here let you zoom in/out and adjust the view to fit the visual editor\u0026rsquo;s canvas. Available Components A Business Rule comprises several potential components, depending on the complexity of the business logic that needs to occur. These are defined under the two categories of Flow and Actions. Within the visual editor, you will see a handy \u0026ldquo;toolbox\u0026rdquo; on the Components tab that lists everything we can double click/drag and drop onto a Business Rule:\nA description of each of these is found below:\nConditions: As the central component within all Business Rules, at least one of these must exist. From there, you can define multiple rule sets; these represent the specific field on the current table whose value you wish to evaluate. You have a range of operators at your disposal when performing evaluations, primarily dictated by the selected field\u0026rsquo;s data type. For example, a text field has operators such as Contains, Equals, Begins with etc. When multiple rule sets are defined, you must also indicate the type of grouping to apply to the evaluation - AND or OR. It\u0026rsquo;s not possible to specify more granular grouping rules underneath this level. Recommendations: In certain situations, you may wish to guide users towards populating a form in a specific manner, based on other inputs that have been made to the record so far. This component type meets this requirement by showing recommendation text to the user alongside a button. Once pressed, the Business Rule will then apply any number of updates to other fields on the form. It is then left up to the user to decide whether they accept the recommendation or ignore it entirely. Lock/Unlock: With this, you can set or remove a fields read-only property on the form. Show Error Message: By selecting a field and specifying a custom message, it\u0026rsquo;s possible to display any custom error message that will bind itself to the field chosen. Set Field Value: Perhaps one of the most potent components at our disposal allows you to auto-populate other fields on the form when our stated conditions are met. You can configure this component in one of three ways, based on the Type value you specify - you can choose to provide a custom value (Value), set the field to match another on the form (Field) or even remove all data from a field (Clear). Set Default Value: This works on a similar premise as the Set Field Value component, with the exception that it does not include a Clear option and is instead designed to populate a field automatically when creating a new record. Set Visibility: Using this component type, you can toggle whether a field appears or gets hidden from the user. As highlighted earlier, each component has its own set of distinct properties that you must specify for it to work correctly. It\u0026rsquo;s worth spending some time familiarising yourself with these properties and how they behave on a table form once published.\nScope An important concept to grasp relating to Business Rules is the circumstances around their application, dictated by its Scope. Depending on the scope, your Business Rules logic could be executed as expected or not at all. 
You should, therefore, consider which value to select for this property to ensure your Business Rule always runs as expected. The list of possible values you can choose for this include:\nSpecific Form: A list of all available forms for the table will display in the dropdown box, meaning it is possible to scope a Business Rule to a single form only. All Forms: Does exactly what it says on the tin 🙂 Entity: As we\u0026rsquo;ve seen so far, most available components for a Business Rule relate strictly to a form itself. Having a Business Rule with a scope of Entity will ensure that the appropriate action occurs, regardless of whether the user updates the row via a form, Power Automate flow or SDK call. While potentially a powerful mechanism of enforcing business logic across an entire Dataverse instance, consider carefully the impact this action may have in conjunction with your broader solution. Demo: Working with Business Rules In the video below, see how it is possible to create a simple Business Rule and how it then behaves within a model-driven app:\nWe\u0026rsquo;ve rounded up our discussion of the functional Power Platform components that you\u0026rsquo;ll need to have an awareness of for the PL-400 exam. In the next post in the series, we will look at our first code-related topic by evaluating client-side scripting options involving JScript, TypeScript and the Web API.\n","date":"2021-04-11T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-implementing-business-process-flows-and-business-rules/","title":"Exam PL-400 Revision Notes: Implementing Business Process Flows and Business Rules"},{"content":"Welcome to the tenth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. Last week, we reviewed the various tools available to help diagnose and troubleshoot issues with our Power Apps. With that, we finished our focus on the Create and configure Power Apps area of the exam. We now move onto the Configure business process automation area, which has a minor weighting on the exam (5-10%). Nevertheless, in the first topic area, candidates are expected to demonstrate knowledge of the following:\nConfigure Power Automate\ncreate and configure a flow configure steps to use Dataverse connector actions and triggers implement complex expressions in flow steps implement error handling troubleshoot flows by analyzing JSON responses from connectors Power Automate is perhaps one of the most powerful automation tools we have at our disposal within the Power Platform. Let\u0026rsquo;s dive in and see what we can do with them as developers.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam.\nNote on Terminology In late 2020, shortly before the PL-400 exam went into general availability, Microsoft made some significant changes to the core terminology in some aspects of the Power Platform, chiefly affecting many of the concepts discussed in today\u0026rsquo;s post. I discussed these changes as part of a blog post last year, but, in a nutshell:\nMicrosoft Dataverse became the new name for what was previously known as the Common Data Service. 
This change was announced and discussed further by Ryan Cunningham in a blog post last year. Within Microsoft Dataverse, many of the core features were subject to terminology changes. For example, entities are now called tables. Microsoft has provided an excellent summary that you can use to refer to the previous and new terms. Microsoft has recently updated the PL-400 exam to reflect the new terminology. However, references to the old language occasionally appear in the tools discussed in this post. Therefore, you should take steps to familiarise yourself with the previous terms.\nPower Automate Overview Previously known as Microsoft Flow, Power Automate is the logical evolution of the traditional workflow experience from Dynamics CRM / 365 and - over time - we can anticipate Power Automate slowly replacing this functionality. As well as affording us near-total feature parity with workflows, Power Automate flows provide us with a modern and highly tailorable means to automate a business process, using the capabilities within Azure Logic Apps to power everything. Through a visual editor, developers can build out an entire business process from start to finish. Also, via the implementation of traditional programming flow controls, it\u0026rsquo;s possible to make incredibly diverse systems talk together in ways that you could not have imagined previously. Some of the benefits that Power Automate provide us include:\nAccess to well-over 315+ connectors to various data sources and applications, including SQL Server, Salesforce, Twitter, MailChimp and SendGrid, to name a few. The ability to trigger flows based on specific trigger points or schedules. Through implementing an on-premise data gateway, data sources not exposed for access across the internet become surfaced for use within Power Automate. Integrates seamlessly with canvas Power Apps, thereby allowing you to trigger a flow based on a specific action point within a canvas app. Robust testing and debugging capabilities. Allowing us to implement complex or straightforward approval workflows. Full support for use alongside solutions, letting us quickly deploy Power Automate flows out to multiple environments and incorporate flow design as part of your Application Lifecycle Management (ALM) processes. When your solution needs to scale, you can easily extract your flow and migrate it into Logic Apps to take advantage of more enterprise-focused features. Power Automate flows, as mentioned, are designed to fill the gap for workflows. But they can go so much further than that and, often, negate the need to look at implementing plug-in\u0026rsquo;s or custom workflow activities to achieve more complex integrations. Developers should always explore and attempt to use Power Automate flows wherever possible before resorting to other solutions. Having full awareness of all the available connectors for Power Automate and also on the products specific limitations, from an execution standpoint, will help you best in making this judgement call.\nCreating a Power Automate Flow The Power Automate portal is your go-to destination for working with Power Automate. Accessible from within the Office 365 portal, within here you can:\nManage any approvals assigned to you by other users. View, manage, edit or delete any flows created by you. Manage the various connectors setup for your account or within your organisation. Interact and work with Microsoft Dataverse. View and utilise existing flow templates, covering common business scenarios. 
Access and leverage AI Builder capabilities. From a Power Automate flow creation standpoint, the main focus of your time will be within the visual editor, as indicated below:\nThis experience provides a fully immersive mechanism for developing your flows without needing to install additional tools onto your machine. After completing and saving your flow for the first time, it will start executing based on the predefined trigger action chosen for it. You can then do the following with a completed flow:\nShare it with other users within the organisation. Submit it as a template to Microsoft for inclusion within the template gallery. Export its definition as a solution file or a Logic Apps template. View analytics relating to it and its execution, e.g. total runs per day, number of errors raised etc. This feature utilises Power BI tiles, providing an intuitive experience when analysing the metrics for your flow. Toggle whether your flow is on or off. View a list of all previous runs for the flow, including the input/output information for each step. In short, whether you need to test, diagnose or figure out whether users in your organisations are still using a flow, there are tools here to help you along.\nPower Automate flows are built up of various core components, all of which you will need to be familiar with for the exam:\nConnectors These are the fundamental components that make Power Automate such a versatile solution by giving you the ability to connect up to various services or solutions, with a range of corresponding actions then made available for use. Typically, you will need to authenticate with each service using a valid username or password; once created, it is stored and available to use across multiple flows, if required. Connections will also be shared out to users automatically whenever you do the same for the flow itself. Consider carefully what impact this may have from a permissions or data protection perspective.\nAs mentioned earlier, the list of available connectors is vast and is growing all the time. Note also that, similar to Power Apps, Microsoft classifies some connectors as Premium. Connectors of this type will only be available to you if you have been assigned a paid Power Automate license, and Microsoft mark these accordingly within the Power Automate portal:\nDevelopers also can build custom connectors to either use within their current tenant or publish for availability to any Power Automate the world over. This topic, which has its specific exam area for PL-400, will be covered in a future post.\nIt is worth discussing the Microsoft Dataverse (AKA Common Data Service) connector in further detail at this stage - not only because it\u0026rsquo;s a topic for the exam but also because it may lead to a degree of confusion when you first open Power Automate. The reason for this is that there are two CDS connectors available:\nThe first of these connectors is a solution-independent connector that has the following triggers/actions defined for it:\nThis connector allows you to connect to any Microsoft Dataverse tenant, regardless of the Office 365 tenant it resides within. This connector is most useful when your flow will always target a single environment. 
As such, there is no need to manage it formally within a solution, and you would only use it to perform basic CRUD operations targeting your Dataverse environment.\nThe second connector - titled Common Data Service (current environment) - is the complete opposite of the above and built for scenarios where you need to include your flow as part of a solution. Doing so will ensure the flow correctly detects the correct Dataverse environment to target after importing your solution successfully. As such, there is a far greater list of available actions for this connector, and a single trigger action is provided that covers all potential scenarios within CDS:\nWherever possible, you should be using the Common Data Service (current environment) connector and managing your flows within a solution. This will significantly simplify the process for rolling flows into different Dataverse environments should the need arise. For the exam, I would recommend that you read up on the standard and the current environment connector to be familiar with each one\u0026rsquo;s capabilities.\nTriggers In a nutshell, these are the things that start a Power Automate flow. A flow must always have a single trigger, and there are, broadly, three different types available to us:\nAction: These will typically occur based on an event within an application or system - for example, whenever a user creates a new row in Dataverse. Your Power Automate flow will poll the data source frequently to detect when this action step occurs and then kick off the flow accordingly. In the screenshot below, we see an example of the Dataverse action for creating a new row: Schedule: Flows of this type will run based on a pre-defined schedule. We have a high degree of control over the various settings here, including frequency, interval, time zone and start time. The example screenshot below illustrates a schedule that runs every day at 10 AM GMT: Manual Trigger: Finally, it\u0026rsquo;s possible only to execute a flow only when you need to. As part of this, it\u0026rsquo;s possible to specify different user input parameters to further leverage within the flow to modify its behaviour. The various types of user inputs available are illustrated in the example below: You can find out more about using this trigger type from the portal or via a mobile app on the Microsoft Docs website. Actions We\u0026rsquo;ve just discussed the component that starts your flow; actions lead on from this by dictating what your flow does. You must have at least one action in your flow that leads on from your appropriate triggers. Depending on the connector you use, actions can range from performing simple CRUD type operations to more advanced tasks, such as sending an e-mail, creating a file or laying dormant until an approval is received. In the example below, we can see we have an action that retrieves the top 10 Contacts from the system, ordered by the createdon column in descending order, whenever an Account row is updated:\nControl Although technically an action type, it is worth studying the various control actions available to us, all of which allow us to implement programming-like logic flow into our Power Automate flows:\nCondition: These are akin to your traditional if programming tests, allowing you to perform different actions based on whether the condition is met successfully or not. You can specify multiple conditions as part of this, using AND/OR logic, and also group conditions together if needed. 
An empty condition within Power Automate resembles the below: Apply to each: You will use this condition type the most when processing multiple result sets and then work on a similar principle as your traditional foreach programming loops. By specifying an appropriate collection or list of items, you can then execute one or multiple actions affecting each record. This control type can be helpful if, for example, you need to update a list of CDS Contact records in bulk whenever an Account record is updated. Do until: This is useful for when you need to act until a condition is true, and they work on a similar basis as a while loop. You specify the control to keep checking each time any sub-actions complete and, once the condition is met, execution will stop. Using the built-in expression language within flow in conjunction with this (more on this shortly), it is possible to construct more complex conditions to evaluate continually. Scope: These provide a mechanism to group multiple related actions to be collapsed/opened more easily within the visual designer. They are a purely optional component but can help in the context of \u0026ldquo;grouping\u0026rdquo; or in triggering specific success/failure action steps for a set of co-dependent action steps. Switch: Out of all of these, the name of this one very closely mirrors its equivalent C# programming feature. Using it, you can evaluate a specific value and then call action(s) based on this value. Finally, if it\u0026rsquo;s impossible to determine a matching value, you can instead execute a default action. Depending on your scenario, a Switch could be a far more natural solution to use compared with Conditions. Terminate: Finally, this action lets you immediately stop the flow, with a high degree of control over how to end it. Their nearest programming equivalent would be a throw, but the main difference here is that there are three different statuses that you can terminate a flow at: Failed Cancelled Succeeded You can also (optionally) display an error code or message as part of this. They can act as a valuable means of forcing bespoke errors within a flow if, for example, a business process or control has been violated by not including a specific field as part of creating a new record. Control actions can be a potent tool, allowing a single flow to carry out widely different actions based on the data that traverses through it.\nWorking with Expressions We saw previously how canvas apps have an expression language, allowing us to programmatically trigger actions or alter how an app behaves, using an easy to use range of formulas. Power Automate works on a similar basis by having a different expression language, which is a valuable tool for performing a complex evaluation of conditions. The language derives from the Workflow Definition Language (WDL) used by Logic Apps, and it is possible to start working with expressions via a dedicated pop-up dialog window that appears:\nThis view will display a list of the most commonly used expressions, along with their appropriate syntax, that you can then include in your flow by clicking each one, modifying it within the expression language bar and then pressing the OK button.\nFor the PL-400 exam, it is impossible to discuss, learn and demonstrate every possible function that is available to us - as indicated by the online reference guide, the number of currently available functions is well into the hundreds. 
Instead, it\u0026rsquo;s worth focusing on a few of the more commonly used ones in each category, so you can get a feel for what is broadly possible to achieve:\nconcat: One of the many possible string functions available to us, this allows you to combine two or more strings. For example, using concat(\u0026#39;This \u0026#39;, \u0026#39;is \u0026#39;, \u0026#39;a \u0026#39;, \u0026#39;test.\u0026#39;) would return This is a test. Other functions in this category cover everyday situations such as converting a string to upper case or returning a selection of characters from a string. length: This will return a number indicating the number of items/records within a collection that you specify. Note that as a collection function, this can only be used with arrays or strings that conform to a JSON structure. Within this category of functions, you can also do things such as union joins or returning the very first item in a collection. and: As a logical comparison function, this will evaluate all of the values supplied to it and return true only if every one of them evaluates to true; otherwise, false. The full range of logical comparison functions covers greater than, less than, or and even if statement evaluations, allowing us to fine-tune the conditional nature of our flows further. int: Allows you to convert a string into an integer value, provided it is a valid numeric value. There are conversion functions aplenty within Power Automate, which int forms a part of, and other functions of this type allow us to create a brand new array value, decode base64 strings or create a URI-encoded version of a web URL. min: Based on evaluating an array or list of numeric values, this function will return the lowest value detected. Power Automate has many different mathematical functions at our disposal, allowing us to perform simple additions, divisions or even generate random numbers. utcNow: This function returns the current date and time in UTC format. It\u0026rsquo;s one of the many date and time functions available to us. Others include functions to add a specified number of days to a date, convert a date into any given timezone or return the day portion of a date. variables: Returns the value of a variable you have specified within the flow, which can be useful when performing conditional evaluations. This function is one of the workflow functions, with other functions available to return the individual results of actions within a scoped action, metadata relating to the flow\u0026rsquo;s execution, or an action\u0026rsquo;s output. To find out more about working with variables, you can check out this handy video from Matt \u0026ldquo;The D365 Geek\u0026rdquo; Collins-Jones, where he will show you how to set a variable within a Power Automate flow. uriPath: For situations where you need to extract the path value of a URL (i.e. the bit after the domain portion, such as /mypage.html), this function will let you achieve this. Other URI parsing functions are available that do similar things, such as returning the host, port, or query from a URL instead. removeProperty: Finally, this function will remove a specified property from a JSON object, returning the updated object for you to process further if required. This function is part of the small list of JSON/XML manipulation functions, with additional functions available to add or update the properties in a JSON object instead or execute simple XPath queries on an XML document.
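To make the descriptions above a little more concrete, here are a few illustrative expressions of the kind you might type into the expression editor; the variable name used is purely hypothetical:

addDays(utcNow(), 7) - returns the current UTC date and time, plus seven days.
formatDateTime(utcNow(), 'yyyy-MM-dd') - returns today's date in ISO format, which is handy when building file names or OData filter values.
if(greater(length(variables('varContacts')), 0), 'Contacts found', 'No contacts found') - combines logical, collection and workflow functions to return one of two strings, based on the contents of a varContacts variable.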
Experienced developers should have little difficulty grappling with the type of functionality available here, which should closely mirror what you will typically find in languages, such as C#. This factor only emphasises further the importance of considering flow early on as part of designing your solution.\nDemo: Creating a Power Automate Flow To get a feel for how to work with Power Automate flows, check out the video below:\nPower Automate is a worthy topic for many blog posts and videos, and I wouldn\u0026rsquo;t worry too much about becoming an expert for the PL-400 exam. Just take time to familiarise yourself with the use cases and broad capabilities they bring to the table. In next week\u0026rsquo;s post, we\u0026rsquo;ll round up our discussion on business process automation by looking at Business Rules and Business Process Flows. These excellent tools can help you get further mileage out of your model-driven Power Apps.\n","date":"2021-04-04T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-working-with-power-automate-flows/","title":"Exam PL-400 Revision Notes: Working with Power Automate Flows"},{"content":"Welcome to the ninth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. In the last post, we looked at the second type of Power App available to us, model-driven apps, and how we can best leverage them to meet particular scenarios. With this topic covered, we should now have a good grasp of how to create a Power App. But this is only one area that we, as developers, must be responsible for; we must also ensure we leverage the tools available to diagnose and resolve problems that arise with our apps. This musing leads us rather conveniently into the next exam area, Manage and troubleshoot apps, which includes the following skills measured:\nManage and troubleshoot apps\ntroubleshoot app issues by using Monitor and other browser-based debugging tools interpret results from App Checker and Solution Checker identify and resolve connector and API errors optimize app performance including pre-loading data and query delegation Microsoft has recently made several investments in this area, meaning that we are in the best position than ever before to resolve issues with our apps. Let\u0026rsquo;s dive in and see what\u0026rsquo;s available.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam.\nWhat is Monitor? Often, when attempting to figure out why we\u0026rsquo;re getting a specific error in our app, we can get frustrated when determining the precise cause. Compounding this problem further is when the issue occurs for a particular user, and you cannot replicate it yourself. The Power Apps Monitor aims to address these scenarios and more by letting us view the entire log activity for our apps as other users, or we perform specific actions within them. It can also be beneficial from different perspectives too. For example, if your app runs very slowly, you can use the Monitor to detect the amount of time it takes for data to be loaded and verify what we\u0026rsquo;ve brought back each time. 
Through this analysis, you can identify mechanisms to filter your data or explore potential options to cache data locally instead. It\u0026rsquo;s worth noting you can use the Monitor with both model-driven and canvas Power Apps, and there are some crucial restrictions regarding its usage that you should bear in mind. There may also be situations where you must refer to browser-based debugging tools instead of Monitor to troubleshoot your issues, such as those involving model-driven app form script errors or tabs/sections not appearing correctly. Microsoft provides a whole article that you can refer to in these circumstances.\nYou can get started with Monitor by simply right-clicking your app in the Maker portal and selecting the appropriate option:\nOnce launched in a browser tab, you can then start to run your app within another browser tab by selecting the Play published app button in the top right of the screen:\nFrom there, you should start to see data populating through:\nThe actual output will vary significantly, depending on what your app is doing. But each core event will return the following properties that you can inspect:\nDetails: This will provide high-level summary information points, such as the status, response type, size and any corresponding data. Formula: This will show the evaluated formula alongside the appropriate event handler that we attached to it. Request: When your action involves retrieving data from an external system, this tab will appear and show the raw request details in JSON format. You can also view the appropriate HTTP status codes for the request, which can be particularly useful in starting to identify issues with a specific connector or authentication mechanism leveraged. Response: This final tab will show the raw data received from the corresponding request, if successful. The Monitor can help in the context of development, but even more so when you invite or connect users up to the tool. By taking this step, we can then perform remote debugging of the app, often identifying specific problems that we cannot replicate elsewhere. In short, Monitor can be a flexible and powerful tool in your arsenal when used correctly.\nUsing App Checker to Make Accessible Canvas Apps These days, an app developer\u0026rsquo;s vital concern is to ensure their apps are functional and meet accessibility requirements. The individuals using our applications can often have diverse needs we must be conscious of and provide support for proactively. For example, individuals with vision impairment may require screen reader software to interact with an application and its underlying data. Ensuring, therefore, that we have configured appropriate labels for each of the controls within our app will ultimately ensure that these individuals can work without further assistance required.\nTo help support us in this objective, we can leverage the App Checker within the canvas Power Apps Studio to identify these issues and others relating to formulas, runtime errors, rules, and performance. Remember that, unlike the Monitor, the App Checker is available for canvas Power Apps only. We can access the Checker straightforwardly enough by selecting the appropriate button within the studio:\nIn this case, as we can see, we are missing accessible labels from numerous controls within the app. So perhaps I need to practice what I preach a bit more! 😅😳 Developers can click onto each highlighted issue to quickly load the offending control and provide the appropriate fix. 
Once a developer has built their first canvas app, they should carefully review the App Checker output to understand the things to fix; from there, you can use the App Checker as a quick, final step for any new apps, to ensure you\u0026rsquo;ve not overlooked anything.\nSolution Checker Overview As developers, we should always have \u0026ldquo;best practice\u0026rdquo; approaches discussed earlier top of mind as part of our daily work. Particularly when it comes to putting together coded solutions involving JavaScript or C# within Microsoft Dataverse and model-driven Power Apps, we should avoid situations where we are:\nUsing parallel execution patterns within our plug-ins. Specifying all columns to be returned as part of a Retrieve or RetrieveMultiple request within a plug-in Leveraging non-strict equality comparisons within JavaScript Using console.log in production-ready JavaScript code These \u0026ldquo;best practices\u0026rdquo; typically consist of a mixture of industry recommendations and specific recommendations made by Microsoft, based on their extensive history supporting these applications within the cloud. And now, thanks to the capabilities on offer as part of the Power Apps Solution Checker, we can very straightforwardly identify these issues as we are working with the platform each day. We can run the Solution Checker at any time within the maker portal by selecting our unmanaged solution and then the appropriate option:\nThe check usually takes a couple of minutes to run to completion and, from there, you\u0026rsquo;ll be able to review and consume the results. Any issues flagged by the App Checker will also surface here, therefore allowing you more than ample opportunity to identify the problem before moving your app forward into production 😉. I recommend running the Solution Checker before any deployment, as it\u0026rsquo;s generally pretty good at flagging up issues that you may have overlooked.\nOptimising Canvas App Performance Best practice approaches can sometimes only take us part of the way towards building a well-running and optimised solution. Additionally, it can sometimes be unclear how we can go about this as developers of Power Platform solutions. For the specific purpose of canvas apps (and, indeed, for the exam itself), you should keep in mind the following throughout your development lifecycle:\nInstead of loading and surfacing data directly from your data source on each screen, consider instead implementing a collection and use commands such as Collect, Clear and ClearCollect to bring this data in when your application starts or when your screen loads for the first time via the OnVisible event handler. This will allow you to persist your data locally within the app itself, improving load times. Just make sure you run an appropriate Patch or Update to get your data updated back to the source, as the app will not do this for you by default. Depending on the data source you are working with, delegation may or may not apply as you query data from this source. There will be a specific set of formulas that will support delegation, certain functions that don\u0026rsquo;t support it and situations where it does not apply and, as such, issues may be encountered when working with records over a certain threshold (500 by default). The canvas app studio will indicate when a particular formula does not support delegation. As developers, we must evaluate the impact this has on our application, taking into account the record limit spoken about earlier. 
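To illustrate the first point in the list above, here is a minimal, hypothetical Power Fx sketch; the data source, collection, control and column names are invented for the example:

// On the screen's OnVisible property (or within App.OnStart), cache the rows needed locally:
ClearCollect(colOpenInvoices, Filter(Invoices, Status = "Open"));

// Galleries and other controls on the screen then bind to colOpenInvoices, rather than the data source.

// When the user amends a row, explicitly write the change back to the source:
Patch(Invoices, LookUp(Invoices, InvoiceId = galInvoices.Selected.InvoiceId), { Status: "Paid" })

The same pattern applies to most data sources, but keep the second point in mind - if the Filter call above cannot be delegated, only the first 500 rows (by default) will be brought back into the collection.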
For further details on how to identify and optimise your canvas apps, you can review Microsoft\u0026rsquo;s list of common sources for slow performance, common resolution steps and tips to improve performance.\nAs we can see, there is no good excuse not to devote time towards building quality applications, and Microsoft provides us with several tools in support of this objective. We must also continually evaluate our solutions as they are developed and deployed to ensure we are proactive with the individuals who use the solutions we build. In the next post in the series, we\u0026rsquo;ll be moving along to look at the Power Platform\u0026rsquo;s various business process automation tools.\n","date":"2021-03-28T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-managing-and-troubleshooting-a-power-app/","title":"Exam PL-400 Revision Notes: Managing and Troubleshooting a Power App"},{"content":"To help administrators or those of the DevOps persuasion better manage and automate core tasks involving the Power Platform, Microsoft has released two PowerShell modules that you can leverage - Microsoft.PowerApps.Administration.PowerShell and Microsoft.PowerApps.PowerShell. Using these, you can accomplish tasks such as:\nGetting details about each environment within your tenant or even doing things such as changing their name. Managing the various apps deployed to an environment - to the point where you can remove them entirely if desired. Administering your Power Automate flows. You can enable or disable them or even modify their permissions. Working with advanced components such as connectors, their permissions and URL patterns for a DLP policy. Many of these operations are available within the relevant area within Power Apps, Power Automate or the Power Platform Admin Center, meaning that you don\u0026rsquo;t necessarily need to turn to PowerShell as your tool of preference. However, if you\u0026rsquo;re looking to, say, call these operations as part of an Azure DevOps task or any custom scripts you author, the set of cmdlets on offer will start to become a real boon to you.\nInstallation is pretty straightforward, as this article attests to. But if you attempt this using PowerShell 7 and run the Add-PowerAppsAccount cmdlet to start working with your tenant, you will get an error similar to the below.\nIn my fruitless search online, I found some posts from people experiencing the same issue and others with solutions that didn\u0026rsquo;t work. In the end, the answer turned out to be surprisingly simple - use PowerShell 5 instead. And, as we can see below, this gets the cmdlet working all fine and dandy:\nTo add insult to injury, a quick check back to the requirements of the modules confirms this basic fact and also my complete inability to read simple instructions 😅:\nThe modules described in this document, use .NET Framework. This makes it incompatible with PowerShell 6.0 and later, which uses .NET Core.\nA huge thank you to Raphael Pothin for pointing me in the right direction on this one - in retrospect, this was a pretty significant thing to overlook, but perhaps I was hoping too much that these tools would work on PowerShell 7.
Hopefully, we may see future support for this, given the inherent advantages that .NET Core has from a cross-platform perspective.\n","date":"2021-03-21T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exception-calling-acquiretoken-with-4-arguments-error-with-add-powerappsaccount-powershell-cmdlet/","title":"'Exception calling \"AcquireToken with \"4\" argument(s)\" Error with Add-PowerAppsAccount PowerShell Cmdlet"},{"content":"I have a lot of appreciation for the Dynamics 365 Sales Professional application. It\u0026rsquo;s an excellent, small CRM system for smaller organisations that provides those with longstanding familiarity with the Dynamics 365 Enterprise apps or Dynamics CRM the comfort and flexibility to deploy a fully functional CRM system quickly, but with the scope to extend out incredibly easily. It also exposes some functionality that is not present in the Dynamics 365 Sales Enterprise app - for example, we get a whole dedicated admin and setup area that lets us very quickly work with standard features that we would typically need to go out into the \u0026ldquo;classic\u0026rdquo; experience to implement:\nDynamics 365 Sales Professional is a great candidate for you to consider if you want a CRM system that can scale up readily into the equivalent enterprise offering at the drop of a hat. Just be aware of some of its limitations from a licensing standpoint (e.g. you can only use up to 15 custom entities as part of the license agreement) and some of the strange quirks of behaviour you may encounter working with it.\nCase in point - as part of a recent project, we were working on implementing the fairly standard Quote to Order conversion process that has been baked into the application for a long time. The trouble was, the button to do this was not present on the ribbon at all:\nAs it turns out, on closer inspection (and reading the specific documentation covering the Professional app 😉), there is a particular setting we need to enable first within the settings area above, which will then ensure this button renders out as intended. Simply select the Quote Settings option above and ensure that the radio button indicated below is enabled:\nWith this property enabled, as if by magic, we will see the button start to render on our Active Quote records:\nFrom there, everything (should) work as expected. The lesson here is that regardless of how many years you\u0026rsquo;ve spent working with a cloud system, it always helps to take a quick look at the latest documentation to avoid any unwelcome surprises. 😏\n","date":"2021-03-14T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/wheres-the-create-order-button-on-my-active-quote-records-dynamics-365-sales-professional/","title":"Where's the Create Order Button on my Active Quote records? (Dynamics 365 Sales Professional)"},{"content":"Much in the same manner as the equivalent event last year, the Microsoft Ignite virtual 2021 conference has seen its own set of announcements that anyone working in the Microsoft cloud space should familiarise themselves with. Of particular note, I was interested to see:\nPower BI Premium Per User moving into general availability, with the announced price point being leaps and bounds more competitive than what Microsoft previously teased. In some cases, we can get this new capability for $10 per user per month - wow! 🤯 Power Automate Desktop being made available for free to all Windows 10 users.
This release will allow any desktop user to automate their daily tasks via Robotic Process Automation (RPA) and play them back via their local machines without needing a Power Automate subscription. Further detail regarding Microsoft Viva, including the rollout of Microsoft Viva Connections into general availability. Perhaps one of the most relevant and important announcements that I wanted to focus on was Microsoft\u0026rsquo;s introduction of a new low-code programming language for the Power Platform, announced by Ryan Cunningham - Microsoft Power Fx. As Greg Lindhorst\u0026rsquo;s post explains in detail, it heralds several benefits for those of us adopting the platform. With these big announcements, we can sometimes get lost in the headlines and miss out on some of the underlying detail. Let\u0026rsquo;s dive straight in, see what all the fuss is about, and better understand some of the key announcements.\nPower Fx is nothing new. Often, with these big marketing announcements, you can sometimes lose sight of the technical side of things\u0026hellip;or, in some cases, walk away with the impression that we\u0026rsquo;re getting something brand new. 😉 Suffice it to say, what we see with Power Fx is not a brand-new language or a replacement of any existing functionality currently baked into the platform. Instead, Microsoft is simply giving a formal name to the current low-code language that forms the canvas Power App authoring experience\u0026rsquo;s bedrock. Apart from its new name, this language remains unchanged, and the announcement starts to herald its coming importance within other areas of the Power Platform in the months and years ahead. So we can perhaps take some relief because, for a lot of people, you won\u0026rsquo;t need to learn something completely new from scratch. 😅\nBuilt to be as straightforward as Excel formulas For a low-code platform to be successful, any underlying formula or expression language it uses needs to be easy to understand and, perhaps crucially, not a far cry from the types of languages citizen developers are already familiar with. From a citizen developer standpoint, Microsoft Excel\u0026rsquo;s expression language may be one they find most familiar. These days, many business users may find themselves more comfortable working in this context and, by comparison, daunted by the Power Platform when they first approach it. With this in mind, one of the goals of the Power Fx language is to make it as simple and straightforward to grasp as an Excel formula. They are already off to a great start - today, there are over 75 Power Fx functions that are syntactically identical or similar to an equivalent Excel version. This opens the door for power Excel users to adopt the Power Platform without being afraid that they will be \u0026ldquo;losing\u0026rdquo; anything; instead, the platform has the tools and capabilities they are used to and more.\nOpen-Source Foundations From the outset, the product team is actively looking to the existing Power Platform community to contribute, share ideas and shape the future of Power Fx so it best addresses the needs and scenarios you face. Nothing typifies this better than Power Fx now being available on GitHub as an open-source project. It\u0026rsquo;s very much early days with all of this, and I hasten to add - we don\u0026rsquo;t yet have sight of the actual source code or binaries. Still, there is already a lot of documentation on there for you to review, digest, and provide comments or improvements to help make the language better to use.
I\u0026rsquo;d encourage you to star the project and check back regularly for updates. It also provides a tremendous initial mechanism to understand the language better and apply it better within your current canvas Power Apps.\nSupport for Pro Code Extensibility A common grumble from more \u0026ldquo;traditional\u0026rdquo; developers is that the Power Platform has nothing to offer them and that it makes it incredibly difficult to adopt a Power Platform solution at scale. Microsoft is making significant investments in these areas to address these concerns. Thanks in large part to the incorporation of core functionality inherited from the Dynamics CRM application, we already have features such as solutions and automation tools that can help us deploy our solutions programmatically. To help even further on the canvas Power App side of things, Microsoft has now introduced tooling that lets us extract our canvas Power App source code into YAML definitions, which can then be easily added into our source control systems and deployed forward from there. Power Fx forms a core part of this since all formulas and expressions written in our apps will export out in-line. And, since these are YAML-based definitions, I\u0026rsquo;m sure all of you young and trendy developers will be more than satisfied with that\u0026hellip; 😏\nWhat about DAX / M (Power Query)? This question has come up fairly regularly, from what I\u0026rsquo;ve seen since the announcement earlier this week. DAX and Power Query (or M) already sit neatly within the Power BI side of Power Platform and as part of features such as data flows within Power Apps. And, as Greg Lindhorst comments, there are no plans to change this at all. So you can be safe in the knowledge that Power Fx will complement any existing solution you\u0026rsquo;ve built out using these languages, which will remain the cornerstone of Power BI. Indeed, a language such as Power Fx is simply not designed for data modelling activities, so it would be downright foolish to try and shoehorn this language into places where it doesn\u0026rsquo;t belong.\nThe future\u0026rsquo;s bright. The future\u0026rsquo;s Power Fx In the context of the Power Platform, at least. 😜 Microsoft is very much shaping the language to form the cornerstone of all aspects of the Power Platform. The benefits of this should be readily understandable, ultimately leading to greater consistency across the platform and the ability for us to quickly adapt our solutions across each constituent element of our Power Platform solution. The Power Fx language sets the stage for exciting future developments in the platform. Finally, it gives us a name we can readily use when describing the canvas Power App formula language. Watch this space, as you will hear about Power Fx more and more in the future!\nWhat are your thoughts on Power Fx? A welcome addition or just another passing craze? Let me know your thoughts in the comments below!\n","date":"2021-03-07T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/whats-the-deal-with-power-fx/","title":"What's the Deal With Power Fx?"},{"content":"Welcome to the eighth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. In last week\u0026rsquo;s post, we dove into model-driven Power Apps, reviewing their various capabilities and usage cases within the Power Platform.
We can now look to continue our review of Power Apps by moving on to canvas Power Apps and, specifically, the following topics:\nCreate canvas apps\ncreate and configure a canvas app implement complex formulas to manage control events and properties analyze app usage by using App Insights build reusable component libraries Canvas apps differ significantly from model-driven apps, with a completely different authoring experience. Candidates taking the exam should expect to demonstrate that they know the core differences and, most crucially, the usage cases for both kinds of Power Apps. With that in mind, let\u0026rsquo;s start to familiarise ourselves with what they are all about.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam.\nCanvas Apps Overview As we saw in last week\u0026rsquo;s post, a model-driven app fits its particular usage case very well. Namely, if you need to provide a highly focused, data-driven experience without having to concern yourself with fine-tuning the client interface, then a model-driven app is the one for you. However, a traditional shortcoming with them has always been on doing the opposite of their optimal areas. This includes situations for when you need to:\nDeploy a mobile optimised application that works natively on mobile/tablet devices: Although there is a Power Apps mobile/tablet app that supports model-driven apps and the new UI interface is mobile optimised, model-driven apps are not primarily designed for mobile use. They can be severely limiting from an access and usability standpoint, which can hamper their potential usefulness for mobile or field workers. Quickly develop an interface without resorting to code: Features such as the Power Apps Component Framework (PCF) and Web Resources are the only mechanism given to us to make any significant modifications to a model-driven apps interface, and only then via custom code. Connect to external data sources: Model-driven apps, by default, must surface data that resides solely within CDS. Again, it is possible to bring in other data sources using tools such as Virtual Entities, but this typically only surfaces read-only data and will involve a high level of configuration or custom code to deploy successfully. Integrate specific action prompts with automation tools within Power Automate: Although Microsoft Dataverse can trigger Power Automate flows after a particular event, it is impossible to begin their execution via a button press or similar. All of these scenarios, and more, can be tackled successfully using a canvas app. Within this framework, business users and developers alike can quickly build out bespoke applications using a drag and drop interface, therefore enabling their apps to run on any type of device imaginable. Microsoft often compares canvas apps to tools such as Microsoft PowerPoint from a usability perspective, insomuch that it is possible to develop a bespoke app along the same lines as a slide deck. 
I would more see canvas apps as being the logical evolution of Microsoft Access, as a cloud-first tool for building out bespoke business applications that, much in the same way as Access, can be self-contained within a database (in this case, the Dataverse) or instead be utilised to surface data from external sources, such as SQL Server. If you are contemplating a migration away from Access soon, then canvas Power Apps are very much one of those tools I urge you to consider.\nFrom a user\u0026rsquo;s standpoint, a completed canvas app can not only be used from a web browser but via Android and iOS devices through a dedicated app. Once deployed, users can access and work with multiple apps within a controlled experience. The platform will push out any changes you make to a canvas app to users after being published. Also, provided that the app developer has built the app correctly, they can be used within an offline context and automatically synchronise data back to its online source once a connection is re-established.\nAs well as addressing all of the scenarios that are highlighted in the list earlier, canvas apps also have several other features that can make them advantageous when compared with model-driven apps:\nSupport for a wide range of different input controls, ranging from text fields, sliders, galleries, custom images and even barcode scanning. You can use a powerful expression-based language to trigger specific actions based on various events or situations occurring within your app. Instant playback capabilities, allowing you to test your app as you make changes. Native integration with several AI-focused features, powered by the capabilities within AI Builder. Extensibility into Azure Application Insights, thereby allowing you to generate your own set of robust telemetry relating to your apps\u0026rsquo; performance and usability. You will often see canvas apps billed as a tool that \u0026ldquo;citizen developers\u0026rdquo; can leverage to significant effect to prevent the need to invest in developing costly business systems that - you guessed it - would require the services of a \u0026ldquo;proper\u0026rdquo; developer. While this concept may raise some eyebrows and concerns, I do believe that Power Apps can be leveraged effectively by traditional developers of any kind to make our lives a lot easier. For example, a canvas app could easily cover a scenario where you need to present a bespoke interface within a model-driven app that connects to an external data source. Previously, a Web Resource would be your only route towards achieving this and would not be without its own set of challenges when implementing. In this context, the opportunity that canvas apps provide to make a developer\u0026rsquo;s job a cakewalk cannot be overstated and, if it allows us to adapt to changing business circumstances more readily, all the better.\nCreating a Canvas App The Power Apps maker portal is your go-to destination when creating canvas apps. You can either create a Power App in isolation within your current Dataverse environment or bundle it in as part of an existing solution. I would recommend the latter wherever possible.\nWhen first creating the app, we have several options available to us, illustrated in the screenshot below:\nAs the possibilities demonstrate, we can quickly create an application based on an existing data source. For initial proof of concept or testing situations, this can be invaluable.
Also, app developers can either create a Blank app from scratch or select an existing App template, which presents some curated scenarios from Microsoft.\nYou must make an important design decision at this stage regarding the app\u0026rsquo;s layout and whether you wish to tailor it for a phone (i.e. portrait) or tablet (i.e. landscape) layout. It\u0026rsquo;s impossible to override this setting post-app creation, so take care to evaluate what you think will be the potential usage scenarios for your app and select the appropriate setting.\nThe connectors available are, by and large, the same set given to us within Power Automate and Power BI. You should note in particular that specific connectors are marked as Premium. Only users with a paid-for Power Apps license assigned to them or environments with an assigned per-app capacity license will be able to use these connectors.\nWith your chosen data source or app type ready, the canvas app designer window will open and resemble the below:\nI\u0026rsquo;ve numbered on here the most critical areas to be aware of, described in detail below:\nThis main menu groups together the various actions you can carry out for your app, broken down as follows: File: Contains settings such as saving, publishing and sharing your app and working with components such as collections, media or variables. Home: Here, you can add new screens to your app, customise its interface or modify the display/formatting settings of the currently selected component. Insert: This tab contains a list of all the potential component types you can add to your app, which we can quickly insert by clicking the appropriate option. View: Opens up separate tabs where you can view the list of data sources associated with the app, media, collections, variables and advanced settings about the currently selected component. Action: Provides a mechanism to configure expressions for a component based on everyday scenarios. Within this area, you can (from left to right) analyse any issues the app checker has discovered on your app, undo/redo a specific action, test your app, share it with other users or access some of the Power Apps help options. The expression bar displays the formula for the selected setting on the current component you are working with. In this scenario, I have chosen the Fill property for the Screen component, which Power Apps has configured to use the RGB colour for white. Using the dropdown box, it is possible to view (and then modify) any component\u0026rsquo;s property using the Power Apps expression language. We\u0026rsquo;ll be taking a closer look at all of this shortly. Expanding this area gives you access to a tree view, where you can see all components within your app, insert new ones, view the list of available data sources, and access advanced tools such as the Monitor. Developers can also access and build out component libraries from this area. This feature allows you to define a set of standard controls that we can quickly reuse across multiple apps. The studio will update the content in this area based on whatever option you\u0026rsquo;ve selected within 4. In the example screenshot, we can see a tree view of the app that demonstrates a three-screen app containing various components on each of its respective screens. Contained within this area is the visual editor, showing us exactly how our canvas app will look when deployed. Components can be selected, moved or deleted with ease from within this interface.
Based on the component currently selected, this area will update to present a view of its respective properties. You can modify any of these properties by using the options suggested within the relevant dropdown fields or via an expression. Here, you can adjust the zoom settings of the app within the main window. As we can see, the canvas app designer provides a lot of useful tools and an excellent IDE for building apps from scratch. In the video, later on in this post, we will see how this all works in practice.\nWorking with Expressions \u0026amp; Formulas We\u0026rsquo;ve touched upon expressions already in the previous section. They provide a pervasive and tailorable means of customising an app\u0026rsquo;s behaviour to trigger specific business logic when certain conditions are met. For example, in the earlier screenshot, we could choose to write an expression that adjusts the fill colour based on the time of day - if between 9 AM-5 PM, set it to white, otherwise set it to black. The formula to achieve this would look like this:\nIf(Value(Text(Now(), \u0026quot;[$-en-gb]hh\u0026quot;)) \u0026gt;= 09 \u0026amp;\u0026amp; Value(Text(Now(), \u0026quot;[$-en-gb]hh\u0026quot;)) \u0026lt;= 17, RGBA(255, 255, 255, 1), RGBA(0, 0, 0, 1))\nIn this case, I\u0026rsquo;m using the Now() function to get the current time as a text value, which I then convert into a number. The output of this is then utilised twice as part of an If() function to determine the appropriate fill colour to apply. As part of this, we are using standard operators to perform our logical test. The syntax of the Power Apps expression language should present little difficulty to established developers, as it broadly conforms to the most common programming languages. You can best think of them as Excel formulas on steroids, which lends itself nicely to Excel power users exploring Power Apps for the first time.\nIt\u0026rsquo;s impossible to discuss every possible function here, let alone learn them all for the exam. What follows is a list of some of the ones you will use the most; I would urge you to review the full list and experiment further with each one at your leisure (a short worked example follows this list):\nCount: Returns a number indicating the total number of records within the table object supplied. Navigate: This lets you move a user to another screen within your app. As part of this, you can specify the type of transition to perform after triggering the action. Back: Working on the same basis as Navigate, this is the most straightforward function to move a user back to the last screen. Patch: This lets you update or create a record in the data source specified. You should ensure that all required field values for your data source are defined here, to avoid any errors occurring. SubmitForm: A particular type of function available only via the Forms control, this works on the same basis as Patch by allowing you to save/create data entered into the form. Update: A more limited version of Patch, we can use this function to update an existing record\u0026rsquo;s values in the data source you specify. Upper: Converts the supplied text value to uppercase letters, e.g. test would become TEST. There are equivalent functions available to convert text to lower case and proper case too. Split: Accepts a string value and splits it into a single-column table of values based on the delimiter you specify.
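As a short, illustrative example of how a few of these functions combine in practice, consider the sketch below. The data source (Contacts), input controls (txtName, txtEmail) and target screen (scnConfirmation) are all hypothetical names, so treat this as a pattern rather than something to copy verbatim:

// OnSelect of a hypothetical "Create contact" button: create a new row in a
// Contacts data source, then move the user to a confirmation screen with a
// fade transition.
Patch(
    Contacts,
    Defaults(Contacts),
    {
        FullName: Proper(txtName.Text),
        Email: Lower(txtEmail.Text)
    }
);
Navigate(scnConfirmation, ScreenTransition.Fade)

// Separately, Split turns a delimited string into a single-column table:
Split("power apps,dataverse,power fx", ",")

Formulas like these sit directly on the relevant control property (OnSelect, in this case), which is precisely what makes the language feel closer to Excel than to a traditional event-driven framework.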
There\u0026rsquo;s a lot of powerful functionality underneath the surface here, which will surely satisfy any developer working with canvas apps.\nDemo: Creating a basic Canvas App In the video below, take a look at the actual process of creating a canvas app, from start to finish:\nWith the capabilities available across both types of Power Apps, it is possible to meet any potential scenario from a business applications standpoint. In particular, canvas apps let you tailor the user interface of your app to suit any need. In the next post, we\u0026rsquo;ll review some of the tools available to help us manage and troubleshoot the various Power Apps we create.\n","date":"2021-02-28T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-working-with-canvas-apps/","title":"Exam PL-400 Revision Notes: Working with Canvas Apps"},{"content":"Welcome to the seventh post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. In last week\u0026rsquo;s post, we reviewed the various capabilities within Microsoft Dataverse that you can use to build a robust security and access model. In today\u0026rsquo;s post, we move onto the third exam area - Create and configure Power Apps - which has a 15-20% total weighting, so definitely a subject that you need to have a good handle on! Given that there\u0026rsquo;s so much involved with Power Apps, we\u0026rsquo;re going to split things out and take a look at the first topic initially, all concerning model-driven Power Apps:\nCreate model-driven apps\nconfigure a model-driven app configure forms configure views configure visualizations These types of Power Apps differ significantly, both in terms of their ideal usage scenario and capabilities. Recognising this and being in a position to demonstrate this during the exam will be essential. So, without further ado, let\u0026rsquo;s jump in and see what they are all about!\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam.\nModel-Driven Apps If you\u0026rsquo;ve worked previously with Dynamics 365 Customer Engagement (on-premise or online), then you should have little difficulty working with model-driven apps, as they inherit a lot of core functionality previously provided by these applications. Targeted towards data-driven applications that feed directly off Microsoft Dataverse (AKA the Common Data Service), they offer a modern approach to creating a bespoke, targeted business application that exposes the most useful components necessary for a user to complete their job. With a model-driven app, developers can bundle together the following elements:\nSite Map: Whereas previously, developers would have to use the XrmToolBox Sitemap Editor to perform sitemap amends, we can now fully customise the sitemap within the application, using a drag and drop interface. Multiple sitemaps can exist within a Dataverse environment, but we can only scope one to a single model-driven app. This enhanced experience allows us to have bespoke sitemaps for every model-driven app deployed to our environment. 
To find out more about this new SiteMap customisation experience, the following tutorial article on the Microsoft Docs website shows you how to work with this feature in-depth. Dashboards: It\u0026rsquo;s possible to configure one or several different dashboards that will display for a user when they first load a model-driven app. Business Process Flows: Like Dashboards, we can select one or several different Business Process Flows that users will have access to within the app. Entities: Developers can select as many or as few tables to expose within an app. Then, at a more granular level, it\u0026rsquo;s possible to choose specific components from the following categories: Forms Views Charts Dashboards Also, you can configure model-driven apps with the following top-level properties:\nName Description Icon: Developers can use a default icon or upload a new image for the app that will render when the user is selecting the app from the explorer bar. Welcome Page: If required, an HTML Web Resource document can be uploaded as a welcome page to users when they first launch the app. Enable Mobile Offline: You can enable model-driven apps for offline use by selecting an appropriate Mobile Offline Profile. The screenshot below shows an example of the editor experience for a model-driven app:\nTypically, model-driven apps are best suited for back-office situations when users work off a fixed PC/laptop or need to interact with Microsoft Dataverse rows directly. However, the new unified interface (UI) provides a mobile responsive template that works effectively across any device type.\nMerely setting up a model-driven app or deploying one out will not be sufficient for ensuring that users can access it correctly. As outlined in this article, it will be necessary to grant security role access to all tables within the app and the app itself.\nAs part of your development cycle for a model-driven app, the process of building the app would probably be last in your order of priority. After ensuring you have customised your required tables appropriately, you would proceed next to building out the forms, views, charts and dashboards for your tables. Let\u0026rsquo;s take a look now at how to do this from within the App Designer.\nWorking with Forms Forms are the layer that exposes data for access and modification within a model-driven application. As developers, we can create four types of forms for each table:\nMain: These are the standard forms that users will see and the most common type we will work with. Quick Create: It\u0026rsquo;s sometimes desirable to quickly add new rows to the system without necessarily drilling down first into the table in question. Quick Create forms meet this objective by allowing users to select the + icon at the top of a model-driven app that loads a specialised, condensed form, containing only the values you need to specify on row creation. The example below shows how the Contact Quick Create form renders inside a model-driven app: Quick Create forms do have several limitations, though. For example, you cannot assign them to particular security roles to curtail access to them, and you can\u0026rsquo;t customise their Headers or Footers. However, they do support adding on custom event handlers via JavaScript form functions or Business Rules. Quick View: For situations where you need to expose several information points from a related table, Quick View forms are the best tool to use. 
They allow you to specify and arrange several columns from a table into a read-only control that you can then add onto a related table\u0026rsquo;s form as a reference point. Provided that the related table\u0026rsquo;s lookup control has a valid value, the information will then load through; otherwise, the Quick View form will not render. Quick View forms have the same limitations as Quick Create forms and, because they present a read-only view, do not support custom event handlers or Business Rules. Card: A new feature for unified interface model-driven apps, these form types work in the same manner as Quick View forms but are instead explicitly optimised for viewing within a mobile device. However, unlike Quick View forms, they are instead added to an existing form subgrid as a custom control. For example, in the screenshot below from the Account form editor, we can see that I\u0026rsquo;ve added the Contact Card form onto the Contact subgrid: This control will then render as follows within a UI model-driven app: To find out more about the different form types available and how to work with them, you can consult the following Microsoft Docs article.\nThere are no restrictions over the number of forms we can define for a table. Still, it is generally best practice to build forms to meet specific business areas and then distribute them for access via a security role. However, please take note that you must always have at least one main form for a table defined as the fallback form, without any security role privileges associated with it. This step ensures that a user always has access to at least one form so that you do not impede them when they\u0026rsquo;re working with a table.\nThe actual process of working with and modifying forms will, in most cases, take place within the Power Apps form designer, as indicated below:\nThis experience provides numerous benefits, primarily in:\nSurfacing a \u0026ldquo;what you see is what you get\u0026rdquo; editor, allowing you to drag, re-size and manipulate components and instantly see how your changes will look across multiple devices. Simplifying the process of adding new columns and components onto a form. Allowing form developers to quickly \u0026ldquo;cut and paste\u0026rdquo; components or columns to new locations on the form. Developers can customise forms using the \u0026ldquo;classic\u0026rdquo; editor if required. This step will be necessary if you are trying to achieve any of the following tasks:\nAssociate a Web Resource to a form or set up JScript event handlers. Insert specialised controls onto a form, such as Web Resources, IFrames, Bing Maps etc. Configure any setting relating to the presentation or user experience within the classic interface. However, wherever possible, you should ensure that you use the new form editor experience as, over time, Microsoft will migrate all \u0026ldquo;classic\u0026rdquo; experience functionality over.\nFrom a developer\u0026rsquo;s perspective, you will typically work with forms in several different contexts:\nWhen defining and associating a Business Rule to a form. In adding and tailoring event handlers for any custom form functions, which you would typically author in JavaScript or TypeScript. For when you need to bind a Power Apps Component Framework (PCF) control to a field or sub-grid. When rendering custom content via an IFrame or Web Resource.
Therefore, having a good awareness of the various properties available as part of the classic form editor (for now at least) will be crucial concerning the exam.\nForm customisation is a broad topic that can take some time to understand fully. Be sure to fully read through the series of Microsoft Docs related to the new form designer and gain a full understanding of the classic form editor too.\nCreating Views Views are the primary mechanism through which multiple rows are\u0026hellip;well\u0026hellip;viewed and interacted with as part of both model-driven applications and canvas apps. Like forms, a table can have numerous types of views defined for it, and, as developers, we can set up as many different types of Public Views that we would like. It\u0026rsquo;s also worth noting that there are several other view types, all of which are created by default when a table is first created and can be modified further if required:\nAdvanced Find View: This defines the columns and sorting behaviour of data that is returned by default (i.e. if the user does not override any of these settings via an Advanced Find search) Associated View: When rendering rows from a related table on a form, the application uses this view by default. Lookup View: The Lookup View appears whenever you search for rows within a lookup field or control. Quick Find View: This is the default view that appears when searching for a row using the Quick Find functionality. Any column within this view that the system or yourself defines as a Find column will also, I believe, be indexed within the database, thereby leading to faster searches when querying this field. Primarily, you will want to use the new view designer when working with views, which I\u0026rsquo;ve illustrated an example of in the screenshot below:\nWithin the view designer, you can:\nAdd on any column from the primary or parent, related table (e.g. add on the name of the Primary Contact from the linked Account row) Adjust the width of a column and its placement within the view. Define as many sortation rules as required for any data returned via the view. Build out the required list of filters to apply to the data before the application returns it to the end-user. Modify the name and description of the view. However, there may be situations where you have to revert to the classic view editor, which looks like this:\nCompared with the updated experience, the potential usage cases for the classic view editor are limited at the time of writing this post. Namely, you will only want to use it when adding on custom icons to a views column properties, via a Web Resource or when adding a PCF control to a view. These, incidentally, will probably be the only situations where a developer needs to interact with and understand views. Some restrictions also exist within the classic editor. For example, you can only specify up to two Sort Order rules within the classic view editor. Therefore, there is no good reason not to use the new view designer wherever possible. Like forms, we expect all missing functionality to migrate across into the refreshed experience eventually.\nThe subject of views typically covers an entire module as part of a customisation course, so it\u0026rsquo;s impossible to discuss them in-depth when it comes to this exam. 
I would recommend thoroughly reading through all of the Microsoft Docs articles on the subject of views and, in particular, familiarise yourself with the two specific developer scenarios mentioned in the previous paragraph.\nDashboards \u0026amp; Charts The final piece of the model-driven app puzzle - and perhaps most important aspect, from a senior business users perspective - is bundling together additional visualisations and aggregate information into a single layer, ideally quickly accessible from within the model-driven app itself. We\u0026rsquo;ve already seen how to use views to assist in this regard - by presenting a tabular list of data, often sorted and filtered accordingly - but these only form a small part of the overall picture. Firstly, extending further from this is the concept of charts, followed not long after by dashboards.\nCharts Charts allow developers to quickly define a range of standard visualisations that can then be bound to any table view within the application. Set at a table level and accessible from within the maker portal, they are created in the classic interface, as indicated below:\nIn this example, the out of the box Account by Industry chart is displayed, which renders a simple Bar chart that counts up all Account rows in the system, grouped by Industry.\nAs developers, we can create the following chart types for a model-driven app:\nColumn Standard Stacked 100% Stacked Bar: Standard Stacked 100% Stacked Area: Standard Stacked 100% Stacked Line Pie Funnel Tag Doughnut Once you\u0026rsquo;ve decided on the type of chart to use, it will then be necessary to define:\nLegend Entries (Series): As part of this, you will also need to define the data\u0026rsquo;s aggregation type. The following aggregation options are available: Avg Count: All Count:Non-empty Max Min Sum Horizontal (Category) Axis Labels It\u0026rsquo;s possible to specify multiple series/categories, depending on your requirements.\nIn most cases, pretty much all of the standard tables support charts, but it is worth reviewing the precise list of compatible tables before deciding whether to use them or not.\nOnce you\u0026rsquo;ve created a chart, it can be added onto a dashboard (discussed shortly) or rendered alongside an existing Public View via the Show Chart button:\nAlso, for situations where you cannot get the chart rendering or displaying precisely how you want to, developers can choose to export the underlying XML definition for a Chart and modify it accordingly. This option opens up a range of additional options, allowing you to tweak how the Chart ultimately renders within the application. Your best source for discovering the types of things achievable via this method is consulting Ulrik \u0026ldquo;The CRM Chart Guy\u0026rdquo; Carlsson\u0026rsquo;s blog, which is an absolute treasure trove on this subject.\nDashboards Having lots of brilliant views and charts within your model-driven app is all well and good but useless if these remain inaccessible or diffuse from each other. For this reason, dashboards are handy and, from a model-driven app perspective, the critical thing presented to users when first opening a model-driven app. They provide a blank canvas, allowing developers to bring together the components mentioned so far, alongside others, in a highly focused view. Note that these dashboards are not the same as a Power BI Dashboard, although we have options to embed these into our model-driven apps if we prefer to use them instead. 
There are two types of dashboards we can create - the first, from a table itself, are called Interactive experience dashboards, and you can see the editor for this type of dashboard below:\nFor these dashboard types, you can add on both Charts and Streams, AKA views. You can also define the underlying table view they are bound to and even the period to return rows. In short, Microsoft has designed them to provide a more immediate window into data that may require priority action or review. An ideal usage case for interactive experience dashboards includes a support desk or emergency contact centre dealing with incoming caseloads. As a component tied strictly to a single table, it is impossible to bring in charts or views from any related entity into these types of dashboards.\nIt is also possible to create standard dashboards untied to a specific table. Dashboards of this type have a few more options available to them, as the editor for them attests to:\nAs well as supporting views and charts, we can also add the following components to these types of dashboards:\nWeb Resources: Any Web Resource deployed to the system can be rendered from within a dashboard. Timelines: This is a custom control type, available only within the UI, that lets you see a complete history of previous activities or other notable information relating to customer or contact. You can find out more about them on the Microsoft Docs website. IFrame: You can embed custom content from any web page into a dashboard via this control type. In most cases, standard dashboards will be your preferred choice, given that they support far more options and do not face the same restrictions from a timeframe perspective.\nDevelopers will typically work with dashboards whenever there is a need to pull in content from external sources, using the two-component types referenced earlier - Web Resources or IFrames. Unlike charts, it is impossible to export the definition of a dashboard and modify its visual appearance; therefore, in most cases, you will strictly work with them from within the web interface and then bundle them up within a solution or model-driven app.\nDemo: Creating a Model-Driven App, Form, View, Chart \u0026amp; Dashboard To get a flavour of how to create a model-driven app, from start to finish, check out the video below:\nMany powerful capabilities exist within model-driven apps, often negating the need to develop alternative solutions. Having a good awareness of these topics will help Power Platform developers build an effective solution quickly, without necessarily resorting to code. Tune in next time when we\u0026rsquo;ll be taking a closer look at the other type of Power Apps - canvas apps.\n","date":"2021-02-21T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-working-with-model-driven-power-apps/","title":"Exam PL-400 Revision Notes: Working with Model-Driven Power Apps"},{"content":"Welcome to the sixth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. Last time around, we deep dove into some of the core concepts to grasp around Microsoft Dataverse for the exam, including tables, columns and relationships. 
Today, we can now move onto the final topic within the Configure Common Data Service area of the exam and focus our attention on learning about the following:\nConfigure security to support development\ntroubleshoot operational security issues create or update security roles and field-level security profiles configure business units and teams One of the arguable benefits of leveraging the Power Platform and, in particular, Microsoft Dataverse (AKA the Common Data Service), is the fantastic security model afforded to us. Using this capability allows us to provide highly specialised access to our business\u0026rsquo;s data, based on an individual\u0026rsquo;s role or position within an organisation. All of the tools above can help us achieve these objectives, and more, so let\u0026rsquo;s dive in and find out more about how to use them within the platform.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam.\nNote on Terminology In late 2020, shortly before the PL-400 exam went into general availability, Microsoft made some significant changes to the core terminology in some areas of the Power Platform, chiefly affecting some of the concepts discussed in today\u0026rsquo;s post. I discussed these changes as part of a blog post last year, but, in a nutshell:\nMicrosoft Dataverse was announced as the new name for what was previously known as the Common Data Service. This change was announced and discussed further by Ryan Cunningham in a blog post last year. Within Microsoft Dataverse, many of the core features were renamed. For example, entities are now called tables. Microsoft has provided an excellent summary table that you can use to refer to the previous and new terms. At the time of writing this post, Microsoft has yet to update the PL-400 exam to reflect the new terminology. Therefore, you should take steps to familiarise yourself with the previous terms, so you won\u0026rsquo;t get stuck when sitting the exam. For this blog post, we will be referring to the new terminology throughout.\nHow Security Works in the Microsoft Dataverse Microsoft Dataverse provides an incredible number of features to help implement even the most stringent access requirements. As such, developers can leverage this existing functionality to help speed up development and focus on building a workable solution, rather than diverting time/effort towards creating a sophisticated security model from scratch. At a high-level, the following features are made available to us from a security standpoint:\nBusiness Units: These act as hierarchical containers for your data, allowing you to restrict access to rows, based on a logical structure. Typically, this may be mapped based on an organisational structure or geographic locations within a global organisation. All user accounts must exist within a single business unit, and there is a requirement that all Dataverse environments have at least one root business unit. For further details on this topic, please refer to the following Microsoft Docs article. Security Roles: The cornerstone of security within the application, Security Roles provide users with the permissions needed to interact with Dataverse data. We\u0026rsquo;ll be taking a much closer look at this feature in the next section.
Teams: Although typically used to sub-categorise users within a Business Unit, Teams can be assigned Security Roles, to help you simplify setting privileges in bulk. It is also possible to configure an Access Team template, allowing end-users to grant permissions to a row without any further customisation needed. Find out more about how to manage Teams with the following Microsoft Docs article. To find out more about access-teams and how to set them up, please consult this article instead. Column-Level Security Profiles: Sometimes, it may be necessary to restrict not only a whole table but also the contents of a specific column. An excellent example of this involves credit card numbers. Although it is desirable to grant users the ability to enter this information on row creation, restricting access beyond this point would be highly beneficial. Field-level Security Profiles can help meet this requirement and will be discussed in more detail shortly. Row Sharing: All users within the application, provided they\u0026rsquo;ve been assigned the relevant Share privilege at a Security Role level, can grant temporary or indefinite access to rows. This feature can be useful for situations where someone is going away for a few weeks, and you need to provide limited access to a colleague covering this absence. Developers have the flexibility to use one or several of these features when building out their solution. Typically, Security Roles will be the one area you divert most of your attention to, particularly when creating new tables.\nDiscussing Security Roles Users accessing Microsoft Dataverse, either via a model-driven app, Power BI or another mechanism, must be assigned a security role. The Security Role defines which tables, features and other components within the Dataverse the user can interact with. By default, all Dataverse instances come equipped with several out of the box roles, indicated below:\nAt the time of writing this post, although it\u0026rsquo;s possible to add Security Roles to a solution within the new Power Apps portal, creation and modification of them take place within the classic interface.\nSecurity role permissions can be broken down into two broad categories - row-level and task-based privileges. Also, access level privileges will apply to specific tables. The bullet points below provide a general summary of each of these topics:\nRow-level Privileges: Available for all tables, regardless of their ownership type, these permissions define the specific access privileges that a user has against a table. These, by and large, follow a CRUD concept for persisted storage (Create, Read, Update and Delete). Specifically, the full list of privileges is: Create: Allows the user to create new rows for this table. Read: Allows you to read the row entirely, but make no changes. Write: Allows you to modify existing rows of this table. Delete: Lets you delete the table row in question. Append: Allows you to associate the table with another table row. Append To: Allows other table rows to be related to this table row. Assign: Allows you to re-assign the row to another user. Share: Lets you share the row with other users. Task-Based Privileges: These typically allow you to carry out specific, non-table-bound actions within the application. For example, you can grant permissions for users to Create Quick Campaigns or Assign manager for a user.
Access-level Privileges: Going back to Business Units, the Dataverse security model lets you define whether users can access rows within their Business Unit, in ones directly underneath them or across the entire organisation. These granular level privileges are only available if you have configured a table with an ownership type of User or Team. Tables with Organisational level privileges do not support this and, instead, it is necessary to grant all or zero permissions. The full list of levels available for selection within the application is: Organization: Users allowed this level of access to a table have unrestricted access across the system, regardless of which business unit the row resides within. Parent: Child Business Units: With this privilege level, users can interact with all rows of the table in their current Business Unit or any child Business unit, regardless of depth. Business Units: At this level, users can only interact with rows in their current Business Unit. Depth access does not apply in this scenario. For example, if I\u0026rsquo;m granted Read Business Unit level privilege, I\u0026rsquo;d be unable to read any rows within a child, grandchild etc. business unit. User: The most restrictive level, only rows which I own or that are shared with me will be accessible if granted this level. Altogether, Security Roles provide a robust, granular and flexible approach to locking down areas of the application or ensuring that users only interact with rows they own, for example. To find out more about Security Roles, please refer to the following Microsoft Docs article. Security Roles is a massive topic in and of itself, and impossible to cover in-depth as part of a single blog post.\nColumn-Level Security Profiles We\u0026rsquo;ve already touched upon just what Column-Level Security profiles are, so let\u0026rsquo;s dive into some of the finer points of their functionality.\nA system administrator/customiser must create a Column-Level Security Profile and, in most cases, you will want to include this as part of a solution. The 2-step setup process will involve:\nEnabling the Column Security property: Developers must do this on every column that requires securing. There are no settings that need toggling at the table level first. Certain column types, such as the Primary Key column, are not supported for column security. Updating or creating a Column Security profile, defining your required privilege level: A profile will expose all columns enabled in the system for column security, for which you must then specify the following settings: The users or teams that the profile applies to. The actual permissions to grant to the users or teams assigned to the profile. The following permission types are available; it is possible to mix and match these privileges, to suit your requirements: Read Create Update To find out more about setting up column-level security and profiles, you can refer to the following Microsoft Docs article.\nTroubleshooting security issues Typically, the bane of any Power Platform administrator\u0026rsquo;s life will be in resolving access or security-related issues, which can be tricky to navigate and fall through the net during any UAT testing. The list below provides a flavour of things to watch out for with security features in the Dataverse:\nReview any generated error message carefully. These will almost always indicate the missing privilege and the affected table, thereby allowing you to modify any assigned security role accordingly.
Where possible, try to base any security role on an existing, out of the box role and tailor it accordingly. There is a myriad of minimum privileges required to open a model-driven app in the first place, which can prove challenging to figure out when creating a security role from scratch. A column enabled for column security will only be visible to system administrators within the application until a corresponding column security profile has been set up and assigned. Consider the impact this may have when enabling this property for the first time. Demo: Understanding Teams, Security Roles and Column Security Profiles in Microsoft Dataverse To help gain a better grasp of the concepts discussed in this post, take a look at the video below, where I will show you how to work with Teams, Security Roles and Column Security Profiles:\nNeglect the security capabilities within Microsoft Dataverse at your peril. They can often enable you to model out even the most complex access requirements, without ever needing to resort to a single line of code. Next time in the series, we\u0026rsquo;ll be moving on to the next focus area of the exam, as we take a look at how to build out a model-driven Power App.\n","date":"2021-02-14T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-implementing-security-within-microsoft-dataverse/","title":"Exam PL-400 Revision Notes: Implementing Security within Microsoft Dataverse"},{"content":"Welcome to the fifth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. We saw how to implement a proper application lifecycle management (ALM) process via Solutions and automation with Azure DevOps in the previous post. Today, we continue with our focus on the Configure Common Data Service area of the exam, by looking at the following topics:\nImplement entities and fields\nconfigure entity and entity options configure fields configure relationships and types of behaviors As part of this next area, we can now focus more closely on the features available within Microsoft Dataverse. This powerful tool will enable us to model out the core data within an organisation straightforwardly. Developers can leverage these capabilities without resorting to code, thereby providing us with a secure, highly scalable, Software as a Service (SaaS) database. Let\u0026rsquo;s dive in and discover how easy it is to get started with the Dataverse!\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam.\nNote on Terminology In late 2020, shortly before the PL-400 exam went into general availability, Microsoft made some significant changes to the core terminology in some aspects of the Power Platform, chiefly affecting many of the concepts discussed in today\u0026rsquo;s post. I discussed these changes as part of a blog post last year, but, in a nutshell:\nMicrosoft Dataverse became the new name for what was previously known as the Common Data Service. This change was announced and discussed further by Ryan Cunningham in a blog post last year. Within Microsoft Dataverse, many of the core features were subject to terminology changes. For example, entities are now called tables.
Microsoft has provided an excellent summary that you can use to refer to the previous and new terms. At the time of writing this post, Microsoft has yet to update the PL-400 exam to reflect the new terminology. Therefore, you should take steps to familiarise yourself with the previous terms, so you won\u0026rsquo;t get stuck when sitting the exam. For this blog post, we will be referring to the new terminology throughout.\nTables Overview Tables are the core objects within Microsoft Dataverse. From a developer\u0026rsquo;s perspective, their name best reflects how these objects look in the back-end database platform; namely, as SQL Server database tables, used to store individual row data for each of our different data types within the system. Many tables, covering common business scenarios and adhering to the Common Data Model, are given to us by default. For example, the Account table contains all the essential information we may store regarding companies that an organisation works with each day. As developers of the system, we can create new tables, to cover bespoke requirements. Also, we have the ability to:\nModify the properties for a table. For example, we can enable a table for SharePoint Online document management functionality. Add new custom columns to a table, to include missing information required by our organisation. Set up or modify existing relationships between tables to, for example, ensure specific column values are mapped across automatically. In short, we have a range of features at our disposal to store any potential type of information within Microsoft Dataverse, allowing us to leverage additional built-in features, where required.\nTable customisation is a topic area that is impossible to cover as part of a single blog post and is typically the focus of 2-3 day courses to grasp fully. The next few sections aim to focus on the core concepts that you will need to have in the back of your mind when tackling the exam.\nWorking with Tables When first contemplating whether to create a new table or not, you must make some critical decisions regarding how to configure it, including:\nName attributes: All tables must have a Display Name, Plural display name and Name (i.e. logical Name) value specified for them. Primary Column Details: All new tables must have a text column defined for them, representing the value shown for each created row within the interface. We have full flexibility to specify the display and logical name value for this column, but we cannot change its underlying data type (Single Line of Text). We can also convert this to an autonumber column - the benefit here is that it will always have a populated value that will be unique across the table. Attachments/Notes: We can configure tables to work alongside attachments and notes within the application. Take a look at the following blog post to find out more about this feature. Description: You can provide a useful explanation of what the table is here, to better inform others as to its purpose. Table Type \u0026amp; Ownership: Here, we can specify two critical options: Table Type: Specifies whether it is a Standard or an Activity table. Typically, you should select Standard for your table, unless you wish to use it to record a specific activity type (e.g. Home Visit, WhatsApp message etc.). Ownership: As alluded to earlier, this will affect whether the table is subject to more granular access-level controls, via the Business Unit hierarchy in the system, or not.
In most cases, unless you are sure the table needs to be accessed by everyone within the organisation, you should select the User or team option. Create and Update Settings: Within this area, a table can be configured for use alongside Quick Create forms, enabled for duplicate detection or set up for change tracking. Most of the standard options for tables will be visible within the new Power Apps portal, as indicated below:\nAny other setting not visible here will instead be visible in the classic portal.\nFor existing tables, for the most part, we can carry out the following actions:\nModify their display and plural display name values. Change their descriptions. Enable or disable additional features, such as support for Queues or Feedback. Delete them (custom tables only). However, some system tables may behave differently or have certain features permanently disabled/enabled.\nAlthough the application does not enforce this requirement, I would highly recommend carrying out all table customisations within a solution.\nFor more information regarding tables, you can consult the following Microsoft Docs article.\nColumns Columns represent the individual items of data recorded within a table. They are also sometimes referred to as attributes and, previously, as fields. We can modify or create new columns based on a wide range of different data types available to us. When creating one from scratch, we must specify the following details:\nDisplay \u0026amp; Logical Name Type: e.g. Single Line of Text, DateTime etc. Business Requirement: Here, you can specify whether users must always provide a value for this column before saving the record. This option defaults to Not Required. Searchable: Enabling this option will allow customisers or Advanced Find users to use this column when creating views or searching for data. My understanding is that the application adds any column marked as Searchable to indexes behind the scenes, thereby speeding up any searches performed; I\u0026rsquo;d, therefore, recommend enabling this property when you anticipate frequent querying of any data. Calculated or Rollup: Specifies whether to use the column as a calculated or rollup column type. These types are typically most useful to generate aggregate information or for use within a reporting solution. You can find out how to work with these column types in more detail, by reading through the calculated columns and rollup columns articles on the Microsoft Docs site. Description: Here, you can provide a useful summary of the purpose of the column. Any value saved here will then get displayed to users within a model-driven app, whenever they hover their mouse over the column\u0026rsquo;s name. For each column and its corresponding type, we can fine-tune additional details relating to it. The following article summarises some of these properties in further detail. It is impossible to cover all potential scenarios within this blog post, so I would encourage you to experiment with creating all possible column types within a test environment.\nComparing 1:N, N:1 and N:N Relationships When modelling a SQL database, it is desirable to create multiple tables, with any required links implemented via FOREIGN KEY relationships. This type of modelling allows you to create hierarchical relationships if desired and ensure that your solution remains scalable.
Microsoft Dataverse leverages the built-in functionalities within SQL Server, by enabling you to define several different types of relationships within the application:\n1:N \u0026amp; N:1: These relationships are virtually the same, and describe either a one-to-many (1:N) or a many-to-one (N:1) relationship. Regardless of their configured direction, they allow you to have a single parent row, with many related rows. An example of a system 1:N relationship is the one between the Account and Contact tables; a single Account can have many associated Contacts. N:N: Many-to-many (N:N) relationships are a bit more unusual, primarily because of the way they can be customised. They describe a situation where you must have many table records related to many other table records. An excellent example of a scenario where this may be useful is if you have Event and Attendee tables set up within the system; many events can have many attendees. Therefore, this would be an appropriate use case for a many-to-many relationship. As mentioned earlier, these types of relationships are an oddity, as you can configure them in one of two different ways: Native N:N: This is where you let the system wholly manage the relationship and its setup for you. The Dataverse will create a hidden intersect table behind the scenes to record all N:N relationship instances. This table will remain inaccessible and cannot be customised further. This is the default and recommended option if you do not need to record additional properties relating to your N:N relationship. Manual N:N: In this scenario, you create the intersect table yourself. Then, you set up 1:N relationships from your two different tables to the intersect table. As you have full control over the intersect table, you can customise it to record additional columns. This type of N:N relationship is most suitable for advanced scenarios only. Typically, you would navigate to the Relationships tab within the Power Apps portal to create these. However, you can also create a relationship by adding a column of type Lookup to your table. To find out more about the different types of relationships and, specifically, native and manual N:N relationships, consult the following Microsoft Docs article.\nOnce a relationship has been created, you can also specify additional options concerning column mappings, which allow for data to be quickly copied to new records when created from a primary row. For example, column mappings exist between the Lead and Opportunity tables, meaning that specific columns will automatically copy to the Opportunity table whenever a Lead is qualified. In most cases, you will need to ensure that the source and target column are of the same type, and a target column cannot be subject to multiple source mappings. Further details on this feature and how to begin working with it can be found on the following Microsoft Docs article.\nDemo: Working with Tables, Columns and Relationships in Microsoft Dataverse To better understand how to customise Microsoft Dataverse using the concepts highlighted above, check out the below video:\nHaving a good grasp of these data modelling capabilities within the Power Platform is essential. It allows developers to effectively leverage functional capabilities to complement any bespoke solutions we build.
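As a brief aside before we wrap up, it can help to see what a 1:N relationship looks like from the consuming side. The snippet below is a minimal, illustrative sketch (not taken from the demo above) of creating a Contact row via the Dataverse Web API and binding its parent Account lookup; the organisation URL, Account ID and access token are placeholders you would substitute with your own values:

// Illustrative sketch only: create a Contact and associate it to an existing Account
// (a 1:N relationship) via @odata.bind. The org URL, GUID and token are placeholders.
const orgUrl = "https://yourorg.crm.dynamics.com";
const accessToken = "<token acquired via Azure AD>";

async function createContactForAccount(): Promise<void> {
  const response = await fetch(`${orgUrl}/api/data/v9.1/contacts`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
      "OData-MaxVersion": "4.0",
      "OData-Version": "4.0",
    },
    body: JSON.stringify({
      firstname: "Jane",
      lastname: "Doe",
      // Binds the Contact's parent customer lookup to an existing Account row.
      "parentcustomerid_account@odata.bind": "/accounts(00000000-0000-0000-0000-000000000001)",
    }),
  });
  console.log(response.status); // A 204 response indicates the row was created.
}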
In the next post, we will finish off our discussion of the Configure Common Data Service (AKA Microsoft Dataverse) exam area by looking at security roles, field security profiles, business units and teams.\n","date":"2021-02-07T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-modelling-data-using-tables-columns-relationships-in-microsoft-dataverse/","title":"Exam PL-400 Revision Notes: Modelling Data using Tables, Columns \u0026 Relationships in Microsoft Dataverse"},{"content":"It\u0026rsquo;s that time of year to ride the wave once more, and see what\u0026rsquo;s new and upcoming within the Microsoft Business Application space, as part of the 2021 release wave 1 plan! As usual, there is a feast of new functionality to digest; so much that Microsoft has split the content out into two separate sites:\nDynamics 365 - This includes our core set of customer engagement (CE) apps, such as Sales and Service, as well as the various ERP offerings, such as Finance and Human Resources. Power Platform - Namely, Power Apps, Power Automate, Power BI, Power Virtual Agents, and related offerings, such as AI Builder. Regardless of where you sit in the Business Application space, you can no doubt expect several or more investments within your particular application of choice, all of which have a central theme. For example, the latest set of Dynamics 365 Field Service features all aim to help drive more customer-centric experiences, via self-servicing and a refined feedback loop, to ensure that on-site technicians deliver consistent and positive interactions to customers. With so much to digest, I thought I\u0026rsquo;d dip in as part of this blog post and unearth my top five favourite new things coming up. Read on to find out my top picks for the 2021 Release Wave 1!\nNew Dynamics 365 Sales Mobile App The Dynamics 365 mobile/tablet app, currently available for both iOS and Android, has seen some significant investments in recent years; to the extent that, appearance-wise, it\u0026rsquo;s miles ahead of what was available previously. However, Microsoft has purposely designed this app to work with all of the various CE apps simultaneously. Even though it provides an experience consistent with what we get in the desktop environment, it\u0026rsquo;s not particularly well-tailored to a specific usage scenario. In particular, salespeople can typically get frustrated by not having the ability to quickly enter core information for the prospects they are working with, or to arrive prepared for a meeting. To help address these needs, both now and in the future, Microsoft will be releasing a brand new Dynamics 365 Sales app for iOS and Android. The application is currently available in public preview if you\u0026rsquo;d like to see what\u0026rsquo;s coming up. Find out more about it via the Microsoft Docs pages devoted to the subject. I\u0026rsquo;d also recommend you check out Dian Taylor\u0026rsquo;s excellent blog post on the subject, as she talks you through some of the gotchas relating to its setup and provides an overview of its core features. I\u0026rsquo;ve taken a quick look at the new app myself, and it\u0026rsquo;s looking great so far!\nPower Automate Macros within Dynamics 365 Customer Service This one looks interesting, although details on it are a little scant at the moment. We\u0026rsquo;ve had the ability to quickly trigger Flows from within our Dynamics 365 CE and model-driven apps for a while now.
These are well-suited to situations where your flows can handle your logic, or where you need to provide limited user input into a flow (shoutout to Dian Taylor again for showing us how to do this). But, you might need to resort to customisations or bespoke development work to satisfy more complex scenarios. As part of release wave 1, customer service administrators can now define macros that dictate how Power Automate flows are executed in the background. What this means in practice is anyone\u0026rsquo;s guess at this stage. Based on the example scenarios highlighted, this could help empower customer service administrators to do more without going through a costly development cycle. Guess we\u0026rsquo;ll have to wait and see to find out more. 😉\nDynamics 365 Project Service to Dynamics 365 Project Operations Upgrade Pathway I\u0026rsquo;m sure there are many current Dynamics 365 Project Service Automation (PSA) customers eagerly watching the new Dynamics 365 Project Operations app, and, quite rightly, asking when they can transition across to this successor product. At least, if your clients are anything like mine, that is. 😅 I can\u0026rsquo;t blame them to be fair. Thanks to features such as Gantt chart views for project tasks, tighter integration with the new Project for the web app and a unified, end-to-end integration with Dynamics 365 Finance, Microsoft have made considerable investments to elevate PSA to the next level. Now, we\u0026rsquo;ve received the long-awaited news regarding the route to upgrade, which will start rolling out across early 2021. If you\u0026rsquo;re an existing PSA customer, then the upgrade will be offered to you free of charge, and you\u0026rsquo;ll be automatically \u0026ldquo;grandfathered\u0026rdquo; across from a billing standpoint. Again, details are a little bare at this juncture concerning the upgrade process, but I\u0026rsquo;m pleased to see that this is finally coming.\nExport PDF Documents from Canvas Apps This next one is very exciting, partly because this requirement comes up so much as part of the projects I\u0026rsquo;m involved with these days. For a short while now, the various Dynamics 365 CE apps have had the ability to generate PDF documents for common record types. This feature is particularly useful in, let\u0026rsquo;s say, a quoting scenario, where you need to quickly generate a consistent template to issue to customers and then streamline the process for sending this out. Similar requirements often emerge when building out a canvas Power App. Generally, the need will be to print off a list of records or a snapshot of a particular page on the app. Previous attempts to do this have often involved the need for a convoluted solution. In one case, from recent memory, I had to direct users into a separate model-driven app to do this. Hardly a great experience and one that, as a consequence, leads to additional dependencies within your solution. Now, to help get around this, export to PDF functionality will be extended out to include canvas Power Apps as well. App makers can now define document types within their app, with dedicated controls that can then export these to PDF straightforwardly.
Not only does this address a longstanding ask on the Power Apps Community ideas forum, but it also unlocks additional capabilities that will no doubt strengthen what we can build out in a canvas app.\nPower Automate Connection References in General Availability Application Lifecycle Management (ALM) in Power Automate has been lagging now for several years. Although we do now have \u0026ldquo;solution-aware\u0026rdquo; flows available, to help us package up and deploy our flows to multiple environments, you will still occasionally hit the odd issue as part of deploying out your flows. This is part of the reason why I\u0026rsquo;ve been so actively utilising connection references, despite the fact that they\u0026rsquo;re still in public preview. They virtually eliminate the common issues associated with moving flows to alternate environments by allowing you to define a solution component describing the connector your flow and apps are using. When you import a solution containing a connection reference using the maker portal, you specify or create the appropriate connection to be used. Everything then \u0026ldquo;just works\u0026rdquo; and some of the pains associated with deployments are instantly removed. However, as the functionality is still in public preview, it is generally advised not to use it as part of production deployments. With all this in mind, it\u0026rsquo;s great to see that, as part of the next release wave, this functionality will be moved into general availability, thereby making it \u0026ldquo;safe\u0026rdquo; for use across your various environment deployments. If you haven\u0026rsquo;t already checked out connection references, do yourself a huge favour and start familiarising yourself with them. If only to keep you sane as part of your next solution deployment. 😂\nAs part of this release wave, there\u0026rsquo;s so much to consume that it\u0026rsquo;s impossible to fit it all into a single blog post! What are you most looking forward to? Have I missed anything that you think exceeds the above? Let me know in the comments below!\n","date":"2021-01-31T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/my-top-5-release-wave-1-2021-features-for-dynamics-365-and-the-power-platform/","title":"My Top 5 Release Wave 1 2021 Features for Dynamics 365 and the Power Platform"},{"content":"Dataflows within Power Apps are an impressive tool that has seen a lot of investment by Microsoft recently. Designed to allow you to integrate your data into Microsoft Dataverse using Power Query (M) capability, they are a far cry from the traditional data import experience inherited from Dynamics 365 Online. I first took a look at them in 2019 and, initially, hit a fair few issues when trying to get them to work. Fast forward to 2021 and, not only has the interface seen a revamp, but they now also feel a lot more stable and capable. So, if you find yourself grappling with an elaborate data import routine and you\u0026rsquo;re tempted to consider tools such as Azure Data Factory or Kingswaysoft, then you owe it to yourself to at least explore dataflows. Of course, as with anything to do with a continually evolving, Software as a Service (SaaS) offering such as the Power Platform, you may invariably hit a small issue or two as you go into the woods with it.
😉\nCase in point - when recently attempting to import the results of a query into the Dataverse, I kept getting the following error message:\nError Code: Canceled, Error Details: The refresh for this entity was canceled. A task was canceled.\nThe query loaded fine without errors, and there appeared to be nothing amiss with the mappings. Upon further investigation/research, I stumbled upon this forum post dating back to 2019 and the following suggested answer:\nIf you wish to overcome it immediately you can rename your entities so they include only letters, numbers and spaces (no special characters) and also verify they don\u0026rsquo;t start with a number.\nIn my case, the affected query contained the \u0026amp; symbol in its name. Upon removing this and re-running the dataflow, it completed successfully.\nAs the forum post seems to suggest, the product team seem to be aware of this issue and are working on a fix\u0026hellip;but, after two years, I\u0026rsquo;m guessing it must be stuck within a pull request somewhere internally. 😏 Hopefully, if you find yourself hitting the same problem now or in the future, you know what you need to do to get things working again.\n","date":"2021-01-24T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/resolving-couldnt-refresh-the-entity-because-of-an-internal-error-error-in-power-apps-dataflows/","title":"Resolving \"Couldn't refresh the entity because of an internal error\" Error in Power Apps Dataflows"},{"content":"Welcome to the fourth post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. In last week\u0026rsquo;s post, we finished our discussion concerning the first exam area by reviewing the Power Platform\u0026rsquo;s various extensibility points. With this concluded, we can now move on to the next area of the exam - Configure Common Data Service - which has a 15-20% total weighting and comprises the following topics:\nConfigure security to support development\ntroubleshoot operational security issues create or update security roles and field-level security profiles configure business units and teams Implement entities and fields\nconfigure entity and entity options configure fields configure relationships and types of behaviors Implement application lifecycle management (ALM)\ncreate solutions and manage solution components import and export solutions manage solution dependencies create a package for deployment automate deployments implement source control for projects including solutions and code assets Rather than address these in order, we\u0026rsquo;ll approach the last subject - implement application lifecycle management (ALM) - first, because solutions are the first thing a developer will typically create. Read on to find out more about solutions and how you can use them to manage and deploy the work you build out in the Power Platform!\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam.\nSolutions Overview Solutions are almost certainly a mandatory requirement when building any customisations involving the Power Platform and Microsoft Dataverse (AKA the Common Data Service).
As well as being a container for all the custom components we develop, they are also useful in:\nUniquely identifying your components compared to those of other developers, projects or external functionality, when used in conjunction with a Solution publisher/prefix. Providing a precise and controlled mechanism for deploying out changes across multiple environments, either in their entirety or via patches. Enabling you to work with a subset of components. Let\u0026rsquo;s dive now into the central topics of Solutions that developers need to grasp as a minimum.\nManaged versus Unmanaged Solutions There are two types of solutions that the platform supports - managed and unmanaged. By default, all solutions exist in an unmanaged state upon creation; only by exporting a solution for the first time do you have the option of making it managed. The key differences between both types of solutions are:\nUnmanaged Solutions of this type are removable from an environment, but all underlying components/changes will remain; these will have to be removed or reverted manually. Developers can freely modify components within an unmanaged solution. Recommended for exporting solutions to other development/testing environments or for storing as part of a source control system (e.g. Git). Managed Solutions of this type are removable from an environment. This action will permanently delete all underlying components, including any table data. Typically, other developers will be unable to modify elements within a managed solution, unless we\u0026rsquo;ve enabled this via each component\u0026rsquo;s managed properties. When importing an updated managed solution to an environment, the import operation will overwrite all existing components within the current managed solution. Recommended when deploying a final, thoroughly tested solution into a production environment. Developers should always be mindful of how the platform exposes unmanaged/managed components as part of solution layering. The following Microsoft Docs article provides an excellent summary and visualisations explaining how Microsoft Dataverse applies solutions within an environment. You can also use the solution layering feature to inspect how components have been affected by multiple solutions.\nSolution Publishers \u0026amp; Prefixes When creating a solution, developers should always specify a solution publisher. Doing so helps to identify components belonging to a particular organisation or business area and avoids a bad practice situation where customisations are prefixed using new_. It\u0026rsquo;s possible to define a new publisher and prefix when creating a solution, providing we can supply the following details:\nDisplay Name Name Prefix Option Value Prefix As a rule of thumb, generally, a single publisher/prefix is sufficient for one organisation. However, you may choose to sub-categorise further via business area/function (e.g. Contoso Marketing, Contoso Sales etc.).\nDemo: Creating and Deploying a Managed Solution To understand how to create, add components to and deploy a solution within a managed state, watch the video below to view the steps involved:\nSolution Patches There may be one or several business-critical changes in specific scenarios that need to be rushed out, without necessitating a full update of all components within a solution. Solution Patches provide a mechanism to deploy out small modifications within an overall solution, targeting only the segments that require changing.
This action creates a new, separate solution, containing only the underlying components that need pushing out. Later on, all patched solutions can be \u0026ldquo;rolled up\u0026rdquo; into the master solution as part of a regular update or upgrade.\nI\u0026rsquo;ve blogged previously on working with solution patches; although these steps refer to the classic interface, they will be mostly the same within the new Power Apps experience. The following Microsoft Docs article provides more up to date information regarding their functionality.\nUnderstanding the Solution Packager Tool Developers today will be well used to working with a source control/version control system (VCS) as part of their daily work to manage changes across the software solutions they build-out. By and large, the most popular of these tools today is Git, but others are available as well, such as Team Foundation Version Control (TFVC). All of them help to promote better practices when it comes to developing software and, wherever possible, should be a primary consideration when starting a project.\nThe great thing about working with the Power Platform is that we can support this objective. Developers can take an exported solution file and expand the raw contents of this, into a logical, hierarchical format, for storage within your source control provider of choice. The vast majority of customisations we perform to the platform are defined within XML files, allowing for specific changes to tables, forms or columns to be tracked as we commit changes. Below is an example of one of these files, the Solution.xml file, which records details about a solution, its name, version, publisher and other details:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34;?\u0026gt; \u0026lt;ImportExportXml version=\u0026#34;9.1.0.27031\u0026#34; SolutionPackageVersion=\u0026#34;9.1\u0026#34; languagecode=\u0026#34;1033\u0026#34; generatedBy=\u0026#34;CrmLive\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34;\u0026gt; \u0026lt;SolutionManifest\u0026gt; \u0026lt;UniqueName\u0026gt;PL400DemoSolution\u0026lt;/UniqueName\u0026gt; \u0026lt;LocalizedNames\u0026gt; \u0026lt;LocalizedName description=\u0026#34;PL-400 Demo Solution\u0026#34; languagecode=\u0026#34;1033\u0026#34; /\u0026gt; \u0026lt;/LocalizedNames\u0026gt; \u0026lt;Descriptions /\u0026gt; \u0026lt;Version\u0026gt;1.0.0.2\u0026lt;/Version\u0026gt; \u0026lt;Managed\u0026gt;0\u0026lt;/Managed\u0026gt; \u0026lt;Publisher\u0026gt; \u0026lt;UniqueName\u0026gt;contosomanufacturingltd\u0026lt;/UniqueName\u0026gt; \u0026lt;LocalizedNames\u0026gt; \u0026lt;LocalizedName description=\u0026#34;Contoso Manufacturing Ltd.\u0026#34; languagecode=\u0026#34;1033\u0026#34; /\u0026gt; \u0026lt;/LocalizedNames\u0026gt; \u0026lt;Descriptions\u0026gt; \u0026lt;Description description=\u0026#34;Publisher for Contoso\u0026#34; languagecode=\u0026#34;1033\u0026#34; /\u0026gt; \u0026lt;/Descriptions\u0026gt; \u0026lt;EMailAddress xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/EMailAddress\u0026gt; \u0026lt;SupportingWebsiteUrl xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/SupportingWebsiteUrl\u0026gt; \u0026lt;CustomizationPrefix\u0026gt;con\u0026lt;/CustomizationPrefix\u0026gt; \u0026lt;CustomizationOptionValuePrefix\u0026gt;18138\u0026lt;/CustomizationOptionValuePrefix\u0026gt; \u0026lt;Addresses\u0026gt; \u0026lt;Address\u0026gt; \u0026lt;AddressNumber\u0026gt;1\u0026lt;/AddressNumber\u0026gt; 
\u0026lt;AddressTypeCode\u0026gt;1\u0026lt;/AddressTypeCode\u0026gt; \u0026lt;City xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/City\u0026gt; \u0026lt;County xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/County\u0026gt; \u0026lt;Country xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Country\u0026gt; \u0026lt;Fax xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Fax\u0026gt; \u0026lt;FreightTermsCode xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/FreightTermsCode\u0026gt; \u0026lt;ImportSequenceNumber xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/ImportSequenceNumber\u0026gt; \u0026lt;Latitude xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Latitude\u0026gt; \u0026lt;Line1 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Line1\u0026gt; \u0026lt;Line2 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Line2\u0026gt; \u0026lt;Line3 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Line3\u0026gt; \u0026lt;Longitude xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Longitude\u0026gt; \u0026lt;Name xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Name\u0026gt; \u0026lt;PostalCode xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/PostalCode\u0026gt; \u0026lt;PostOfficeBox xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/PostOfficeBox\u0026gt; \u0026lt;PrimaryContactName xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/PrimaryContactName\u0026gt; \u0026lt;ShippingMethodCode\u0026gt;1\u0026lt;/ShippingMethodCode\u0026gt; \u0026lt;StateOrProvince xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/StateOrProvince\u0026gt; \u0026lt;Telephone1 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Telephone1\u0026gt; \u0026lt;Telephone2 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Telephone2\u0026gt; \u0026lt;Telephone3 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Telephone3\u0026gt; \u0026lt;TimeZoneRuleVersionNumber xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/TimeZoneRuleVersionNumber\u0026gt; \u0026lt;UPSZone xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/UPSZone\u0026gt; \u0026lt;UTCOffset xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/UTCOffset\u0026gt; \u0026lt;UTCConversionTimeZoneCode xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/UTCConversionTimeZoneCode\u0026gt; \u0026lt;/Address\u0026gt; \u0026lt;Address\u0026gt; \u0026lt;AddressNumber\u0026gt;2\u0026lt;/AddressNumber\u0026gt; \u0026lt;AddressTypeCode\u0026gt;1\u0026lt;/AddressTypeCode\u0026gt; \u0026lt;City xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/City\u0026gt; \u0026lt;County xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/County\u0026gt; \u0026lt;Country xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Country\u0026gt; \u0026lt;Fax xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Fax\u0026gt; \u0026lt;FreightTermsCode xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/FreightTermsCode\u0026gt; \u0026lt;ImportSequenceNumber xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/ImportSequenceNumber\u0026gt; \u0026lt;Latitude xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Latitude\u0026gt; \u0026lt;Line1 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Line1\u0026gt; \u0026lt;Line2 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Line2\u0026gt; \u0026lt;Line3 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Line3\u0026gt; \u0026lt;Longitude xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Longitude\u0026gt; \u0026lt;Name xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Name\u0026gt; \u0026lt;PostalCode 
xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/PostalCode\u0026gt; \u0026lt;PostOfficeBox xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/PostOfficeBox\u0026gt; \u0026lt;PrimaryContactName xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/PrimaryContactName\u0026gt; \u0026lt;ShippingMethodCode\u0026gt;1\u0026lt;/ShippingMethodCode\u0026gt; \u0026lt;StateOrProvince xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/StateOrProvince\u0026gt; \u0026lt;Telephone1 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Telephone1\u0026gt; \u0026lt;Telephone2 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Telephone2\u0026gt; \u0026lt;Telephone3 xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Telephone3\u0026gt; \u0026lt;TimeZoneRuleVersionNumber xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/TimeZoneRuleVersionNumber\u0026gt; \u0026lt;UPSZone xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/UPSZone\u0026gt; \u0026lt;UTCOffset xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/UTCOffset\u0026gt; \u0026lt;UTCConversionTimeZoneCode xsi:nil=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/UTCConversionTimeZoneCode\u0026gt; \u0026lt;/Address\u0026gt; \u0026lt;/Addresses\u0026gt; \u0026lt;/Publisher\u0026gt; \u0026lt;RootComponents\u0026gt; \u0026lt;RootComponent type=\u0026#34;1\u0026#34; schemaName=\u0026#34;account\u0026#34; behavior=\u0026#34;2\u0026#34; /\u0026gt; \u0026lt;RootComponent type=\u0026#34;1\u0026#34; schemaName=\u0026#34;contact\u0026#34; behavior=\u0026#34;1\u0026#34; /\u0026gt; \u0026lt;RootComponent type=\u0026#34;1\u0026#34; schemaName=\u0026#34;task\u0026#34; behavior=\u0026#34;1\u0026#34; /\u0026gt; \u0026lt;RootComponent type=\u0026#34;60\u0026#34; id=\u0026#34;{8448b78f-8f42-454e-8e2a-f8196b0419af}\u0026#34; /\u0026gt; \u0026lt;/RootComponents\u0026gt; \u0026lt;MissingDependencies\u0026gt; \u0026lt;MissingDependency\u0026gt; \u0026lt;Required key=\u0026#34;0\u0026#34; type=\u0026#34;60\u0026#34; displayName=\u0026#34;Account\u0026#34; parentDisplayName=\u0026#34;Account\u0026#34; solution=\u0026#34;msdynce_AppCommon (9.0.4.0066)\u0026#34; id=\u0026#34;{8448b78f-8f42-454e-8e2a-f8196b0419af}\u0026#34; /\u0026gt; \u0026lt;Dependent key=\u0026#34;0\u0026#34; type=\u0026#34;60\u0026#34; displayName=\u0026#34;Account\u0026#34; parentDisplayName=\u0026#34;Account\u0026#34; id=\u0026#34;{8448b78f-8f42-454e-8e2a-f8196b0419af}\u0026#34; /\u0026gt; \u0026lt;/MissingDependency\u0026gt; \u0026lt;/MissingDependencies\u0026gt; \u0026lt;/SolutionManifest\u0026gt; \u0026lt;/ImportExportXml\u0026gt; To help us extract these files, we can turn to the Solution Packager tool, included as part of the Microsoft.CrmSdk.CoreTools NuGet package. This command-line utility allows us to manually or programmatically automate the unpacking and repackage of solutions and is straightforward to work with. 
For example, the script below would unpack the solution file PL400DemoSolution_1_0_0_2.zip into a custom folder path we specify:\nSolutionPackager.exe /action:Extract /zipfile:\"D:\\PL-400\\Solution Packager Demo\\PL400DemoSolution_1_0_0_2.zip\" /folder:\"D:\\PL-400\\Solution Packager Demo\\PL400DemoSolution\"\nHere\u0026rsquo;s an example of how this would look within an Azure DevOps Git repository:\nThe Solution Packager is a tool that, as developers, we\u0026rsquo;ll invariably end up using at some stage, so having a good grasp of what it can do and the various options it supports will hold you in good stead for the exam.\nDemo: Working with the Solution Packager Tool To see how to install and use the Solution Packager tool, check out the video below, where I take you through each step:\nAutomating Deployments Using Azure DevOps Running the Solution Packager tool on each occasion you make a change to your solution(s) will, over time, become a tedious affair. That\u0026rsquo;s why your primary objective should always be to achieve a degree of automation if and when you use it. Fortunately, using the capabilities built-in as part of Azure Pipelines makes this easy to do. And it\u0026rsquo;s one of the scenarios that Microsoft anticipates we will deal with as part of our daily work, meaning that we have tooling at our disposal to negate the need to author custom scripts (for the most part).\nTo begin automating the extraction and deployment of your solutions to other environments, you should, first of all, check out the Microsoft Power Platform Build Tools for Azure DevOps, a set of handy pipeline tasks available from the Visual Studio Marketplace. Within here, there are numerous different tasks exposed that allow us to perform operations such as:\nModify the version of a solution Extract, and then unpack, the contents of a solution file à la the example above. Validate the quality of your solution using the Solution Checker. Create, remove, back up or copy an existing environment. When using any of the tasks within these build tools, it is vital to call the Power Platform Tool Installer task first. Doing so will allow for all required dependencies to be downloaded onto the build agent as your pipeline runs. This tool\u0026rsquo;s actions are callable in any manner you see fit, from either a build or a release pipeline. My general recommendation is to ensure you author your pipelines using YAML wherever possible. Doing so will allow you to track changes most efficiently, and unlock additional capability, at the expense of having to author out your build definitions manually. With powerful scheduling capabilities and the ability to trigger actions based on approvals, both the Microsoft Power Platform Build Tools and Azure DevOps can help you work smarter, and take out the stress and - sometimes - uncertainty when working with the Power Platform at scale.\nAlthough Microsoft expects us to have a good awareness of what we\u0026rsquo;ve spoken about above for PL-400, attempting to cover complicated subjects such as authoring build and release pipelines is impossible as part of a single blog post, and not relevant for the exam.
If you\u0026rsquo;re interested in learning more about these subjects, then check out the following learning paths from the Microsoft Learn site:\nDeploy applications with Azure DevOps Automate your deployments with Azure DevOps Demo: Extracting and Deploying Solutions using Azure DevOps To help you better understand how to work with Azure DevOps to achieve ALM within the Power Platform, check out the video below. In it, I\u0026rsquo;ll show you how to build and release a solution using Azure DevOps and the Microsoft Power Platform Build Tools for Azure DevOps:\nALM within the Power Platform is a central component that could throw the wheels off your adoption in the long-term if not considered and implemented carefully. I hope this post has provided a good introduction and justifications behind its usage. Next time around, we\u0026rsquo;re going to see how we can customise tables within Microsoft Dataverse. See you then!\n","date":"2021-01-17T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-using-solutions-to-implement-application-lifecycle-management-alm-capabilities/","title":"Exam PL-400 Revision Notes: Using Solutions to implement Application Lifecycle Management (ALM) Capabilities"},{"content":"Welcome to the third post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. In last week\u0026rsquo;s post, we provided an overview of some of the core features and functionality to consider when designing solution components within the Power Platform. Today, we\u0026rsquo;re going to finish up our review of the first area of the exam - Create a technical design - by looking at the final set of topics in this section, namely:\nDescribe Power Platform extensibility points\ndescribe Power Virtual Agents extensibility points including Bot Framework skills and Power Automate flows describe Power BI extensibility points including Power BI APIs, custom visuals, and embedding Power BI apps in websites and other applications describe Power Apps portal extensibility points including CRUD APIs and custom styling As we\u0026rsquo;ve alluded to so far in previous posts, the great advantage of the Power Platform is that you have a collection of applications that have powerful native capabilities, buttressed by strong extensibility points - both in themselves and into other tools. The purpose of today\u0026rsquo;s post will be to introduce these concepts at a high-level, without necessarily diving into too much detail on each. This is primarily due to the comparatively low weighting of these topics in the exam overall. Let\u0026rsquo;s get started! 😀\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam.\nBot Framework Skills With the proliferation of AI chatbots across the web, there are likely situations where someone has constructed an existing scenario to meet your needs. Specifically, it could be that your organisation has previously worked with the Bot Framework and Azure Bot Service, and you have several such bots that you would like to leverage from within the Power Platform. 
Or it could be that you want to experiment with one of the several available Bot Framework Solution examples that Microsoft has developed, which can help address existing scenarios or inspire you to build out one of your own. For all of these situations, Bot Framework Skills allows you to blend in a bespoke-developed solution so that you can leverage it from within Power Virtual Agents, much in the same way you would with any other action step. There are a couple of things you need to do within Power Virtual Agents to get started with Bot Framework Skills:\nFirst, set up your bot within Power Virtual Agents. If leveraging a custom-authored Bot, ensure you\u0026rsquo;ve deployed this out onto Microsoft Azure, either via the web interface or using CLI tools. Register the Bot Framework Skills within Power Virtual Agents. Navigate to your relevant Topic within Power Virtual Agents and then add a Call an action step that references your newly deployed Bot Framework Skill. It\u0026rsquo;s as simple as that, from a configuration standpoint at least. Typically, the process for authoring a Bot will be far more involved and require detailed study to understand fully. But we, thankfully, don\u0026rsquo;t need to worry about this for the PL-400 exam. 😅\nUsing Power Automate Flows with Power Virtual Agents At the start of this series, we referred to a specific type of integration between Power Virtual Agents and Power Automate:\nCall a Power Automate flow from a Power Virtual Agent, to return information from an on-premise Oracle database. This type of action is straightforward to incorporate within Power Virtual Agents, by merely calling a new action and creating a Flow that is then linked back to your topic. What you then do with this Flow is entirely up to you. To help you get the most out of this type of extension, you can configure various input/output parameters to communicate data between Power Automate and Power Virtual Agents. You will often need to ensure that you create your Flow, from scratch, within Power Virtual Agents; at the time of writing this post, adding an existing Flow into Power Virtual Agents can cause communication issues between both systems. Also, keep in mind that any Flow you create will reside within the Default Solution once made. Be sure to follow the correct steps to get this added into a Solution, to ensure portability and best practice for your work.\nPower BI API Overview Time for a gear change as we jump across to Power BI now! Generally, you don\u0026rsquo;t need to concern yourself too much with detailed information regarding this fantastic business intelligence tool for the exam. However, one of the core concepts that you need to grasp is the set of capabilities on offer as part of the Power BI REST API. To summarise, the key capabilities on offer here are all designed to assist in managing, configuring and, in particular, working with your Power BI solution when embedding it within external applications; more on this shortly. Much like any Microsoft Software as a Service (SaaS) product, to start working with the API, you\u0026rsquo;ll need to register an Azure Active Directory application and then use this to generate an access token into the service. From there, you can start to perform operations such as:\nAdding rows to an existing Power BI dataset Adding, removing or modifying any existing data sources bound to an on-premise gateway.
Listing details about various elements within your Power BI tenant, such as dashboards, dataflows or apps. Further discussion of the API is beyond the scope of PL-400. Familiarise yourself with the high-level capabilities offered and don\u0026rsquo;t spend too much time brushing up on each operation available.\nCustom Visuals in Power BI If you\u0026rsquo;ve worked with Power BI before, you will be familiar with the various standard visualisation types on offer, such as gauges, area and funnel charts. However, there will be certain situations where you need to express your data differently, and the out of the box visuals just won\u0026rsquo;t cut it. For when this happens, developers can turn to one of two solutions to build custom visualisations. The first is using Node.js and the pbiviz tool. This route offers the most flexibility, but does involve several pre-requisites and requires you to have good knowledge of TypeScript to build out. Secondly, you can use the R programming language, via the R script visual type. Using R simplifies some of the setup involved to build out your custom visual but still requires R installed on your local machine. For situations where you would like external users or organisations to install your visualisation, you can look to get it certified and made available as part of AppSource. Microsoft will only accommodate Node.js visuals as part of this. Custom visuals can help us in our general objective towards ensuring re-usability for our solutions and, much like everything else, should only be explored if no existing visuals, either from Microsoft or AppSource, can meet your requirements.\nEmbedding Options in Power BI As alluded to earlier, the Power BI API becomes particularly relevant in the context of embedding your Power BI solution within an external system. There are typically two approaches to achieving this:\nAnyone with a Power BI Pro subscription can look to embed a Power BI Report within various internal business systems. Users will typically need to authenticate into your Power BI tenant as part of this and require permissions to any underlying report or dataset to interact with the content. There are also options available to embed a report within a public website, but this is generally discouraged unless the underlying data is suitable for public consumption. In short, this route provides the most straightforward and cost-efficient path open when embedding Power BI into other systems. For situations where performance considerations are paramount, and you need fine-tuned control over the embedding experience, Power BI Embedded and its related components - including the Power BI API - come into the equation. Using this Azure-based service, developers can almost invisibly integrate Power BI visualisations within their core application, thereby allowing them to focus attention towards their app\u0026rsquo;s core functionality instead of building a reporting suite from scratch. Power BI Embedded is a powerful tool, but requires careful consideration from an architecture standpoint and can also introduce significantly more cost into your solution than the previous option. Before we move on, let\u0026rsquo;s circle back now to why the API and Power BI Embedded are so closely intertwined. One of the core requirements for a Power BI Embedded based solution is that your content must reside within a Power BI workspace. The developer building the report must then have an assigned license to deploy this out.
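To give a flavour of what these REST API calls look like in practice, here is a minimal, hedged sketch (not part of the official tutorial) that lists the reports within a workspace; the workspace ID is a placeholder, and the call assumes you have already acquired an Azure Active Directory access token for the Power BI service:

// Sketch only: list the reports within a Power BI workspace via the REST API.
// The workspace (group) ID and access token below are placeholders.
const workspaceId = "00000000-0000-0000-0000-000000000001";
const accessToken = "<token acquired via Azure AD>";

async function listReports(): Promise<void> {
  const response = await fetch(
    `https://api.powerbi.com/v1.0/myorg/groups/${workspaceId}/reports`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  const body = await response.json();
  // Each report exposes an id, name and embedUrl - the embedUrl being one of the
  // values you would pass across when embedding the report in another application.
  for (const report of body.value) {
    console.log(report.id, report.name, report.embedUrl);
  }
}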
As such, we must route any operations we perform to retrieve Power BI Embedded content through the Power BI REST API. You can work through the available tutorial from Microsoft to understand how this works in practice and the types of core API operations that you\u0026rsquo;d need to perform as part of this. Having a general awareness of how you can utilise Power BI in this fashion is incredibly useful as part of your daily work with the Power Platform. Still, it should only be of minimal focus as part of your revision for PL-400.\nPower Apps Portal API As we round things off in today\u0026rsquo;s post, it\u0026rsquo;s time to take a deep-dive look at Power Apps portals, the third type of Power App, which allows us to surface our Microsoft Dataverse data so that external users can work with our core business data. This product has been around for several years, under different names. A traditional limitation that developers have found with them is the inability to straightforwardly access common API operations that are available as part of the Microsoft Dataverse Web API. Recognising this need, Microsoft is in the process of releasing a dedicated API that we can access through a Power App portal. The functionality currently on offer allows developers to perform CRUD operations targeting Microsoft Dataverse, alongside the ability to perform entity-relationship associations and disassociations. Developers would construct these as standard HTTP requests, which we can then fire off at various points within a portal. Developers must ensure that the portal\u0026rsquo;s site settings are modified to enable the table and the list of columns that will ultimately be consumed by the API. A portal administrator can also enforce permissions via table permission configurations that already exist. To summarise, the new functionality on offer opens the door for developers to perform complex Microsoft Dataverse operations via code, for situations where other out of the box solutions will not suffice.\nNow, similar to what we highlighted with Canvas Power App Components in last week\u0026rsquo;s post, all of this functionality is currently in public preview. Microsoft will not typically assess candidates on unreleased functionality, so it is unlikely you will be evaluated on this feature now. Nevertheless, you should continue to check the relevant Microsoft Docs documentation to confirm whether Microsoft has released this into general availability. At that point, it will become prudent to spend some more time analysing the capabilities on offer here.\nCustom Styling within a Power App Portal Styling within a Power App portal occurs via the use of a theme - either an out of the box one provided by Microsoft or one you\u0026rsquo;ve developed yourself using custom CSS files and Bootstrap. The latter option will typically give you the most control, thereby ensuring that an organisation\u0026rsquo;s branding and styling requirements can be enforced within a portal. It\u0026rsquo;s worth noting that a single site on a portal is limited to only a single theme. For situations where you may need to fine-tune a specific page\u0026rsquo;s styling, developers can instead turn to Web Templates and the Liquid templating language to meet this requirement. As well as rendering data from the application, web templates also allow you to apply HTML, CSS or JavaScript snippets to control the look and feel of individual pages.
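As a purely illustrative, hedged example of the kind of page-level snippet a web template can carry - the data attribute and CSS class below are hypothetical, not part of the product - a few lines of script could adjust presentation on a single page without touching the site-wide theme:

// Hypothetical sketch: highlight overdue items rendered on one portal page.
// The data attribute and CSS class are illustrative only and would be defined by you.
document.querySelectorAll<HTMLElement>("[data-due-date]").forEach((element) => {
  const dueDate = new Date(element.dataset.dueDate ?? "");
  if (!Number.isNaN(dueDate.getTime()) && dueDate < new Date()) {
    element.classList.add("overdue"); // styled via a page-specific CSS rule
  }
});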
Using a mixture of these two features - themes for site-wide styling and web templates for page-level control - you can start to customise a portal to suit multiple scenarios with ease. In fact, we\u0026rsquo;ve only really scratched the surface of what Portals are ultimately capable of. Feel free to explore them further at your leisure but, for PL-400, you can be satisfied that these are the main concepts to be aware of concerning portal styling.\nAnd that brings us to the end of today\u0026rsquo;s post. So far, we\u0026rsquo;ve covered a lot of detailed theory, which is what this area of the exam expects. I hope that the content covered so far has been easy to grasp and contextualised accordingly. In next week\u0026rsquo;s post, you may be pleased to hear that we\u0026rsquo;ll start diving into some \u0026ldquo;hands-on\u0026rdquo; content, as we look at how to work with solutions in Microsoft Dataverse.\n","date":"2021-01-10T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-reviewing-power-platform-extensibility-points/","title":"Exam PL-400 Revision Notes: Reviewing Power Platform Extensibility Points"},{"content":"Welcome to the second post in my series focused on providing a set of revision notes for the PL-400: Microsoft Power Platform Developer exam. In last week\u0026rsquo;s post, we discussed the high-level concepts you need to successfully grasp to build a technical architecture within the Power Platform. This topic sits within the Create a specialised design area of the exam, which has a 10-15% weighting and also comprises the following, second topic, which will be the focus for today\u0026rsquo;s post:\nDesign solution components\ndesign a data model design Power Apps reusable components design custom connectors design server-side components Here, we start to look at some of the more technical aspects of the Power Platform, which can often form the foundation of a more complex solution that can limit the amount of actual code we need to write. However, it can be tricky to determine what particular technical features Microsoft refers to as part of the blurb above; so let\u0026rsquo;s dive in and try and make some sense of it all. 😀\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform if you want to do well in this exam.\nData Model Design Fundamentals A common pitfall for any developer working with applications, like the Power Platform, is to miss the bleeding obvious; namely, by ignoring features, such as Business Rules or Power Automate flows, that reduce the need for custom code. Related to this, we can often overlook many of the capabilities built into Microsoft Dataverse that are well-suited towards modelling out business data in an effective way. When designing and implementing any data model using the Dataverse, you should:\nCarefully review the entire list of tables within the Common Data Model. Determine, as part of this, whether an existing table is available that captures all of the information types you need, can be customised to include additional, minor details or whether a brand new table will be necessary instead. Consider the different types of table ownership options and how this relates to your security model.
For example, if you need to restrict rows to specific users or business units, ensure that you configure the table for User or team-owned ownership. You can review further details on these options here. Review the differences between a standard and an Activity table, and choose the correct option, based on your requirements. For example, when setting up a table recording individual WhatsApp messages to customers, use the Activity table type. Digest and fully understand the list of different field types available for creation. Ensure as part of this that you select the most appropriate data types for any new fields and factor in any potential reporting requirements as part of this. Understand the fundamental concepts around table relationships. For the exam, Microsoft expects you to tell the difference between 1:N and N:N relationships, including the differences between native and manual N:N relationships. Be familiar with using Microsoft Visio tools and, particularly, crow\u0026rsquo;s foot notation diagrams, to help plan and visualise your proposed data model. These tips provide just a flavour of some of the things to consider when designing your Dataverse data model. Future posts will dive into technical topics relating to this, which should ultimately factor back into your thinking when architecting a solution.\nDesigning Reusable Components Arguably, one of the benefits of leveraging a solution like the Power Platform is its ability to pull together solutions that can be adapted quickly to multiple scenarios. This introduces several advantages for developers and ultimately allows for our labour\u0026rsquo;s fruits to become instantly adaptable to suit different organisations, industries or business units. Specifically, from a developers standpoint, we can look to leverage the following features in support of this objective:\nCanvas Power App Components: As we build out our canvas apps, we often fuse various individual controls to form common, repeatable groups, that we then wish to use across our app multiple times. For example, we could generate a custom ribbon for our application, comprised of a shape, label and picture control. However, merely grouping it doesn\u0026rsquo;t make it possible to export this out and use it within different apps. In this situation, components come to the fore, by facilitating this capability and allowing us to build low-code, extensible components that we can create once, and deploy multiple times. Components also support a particular formula - OnReset - that does what it says on the tin; namely, returns it to its default state. OnReset is particularly useful if you wish to perform a calculation, based on input from the main app itself. Components should be your first port of call when you are developing multiple apps, and you want to streamline your development process. However, they are limited in scope - they don\u0026rsquo;t, for example, support model-driven Power Apps. Note also that Microsoft still (at the time of writing this post) lists this feature as being in public preview, meaning it\u0026rsquo;s unlikely for you to receive a potential question or scenario relating to it. However, this will likely change, as Microsoft continually refreshes the exam content and as this feature moves into general availability. 
Power Apps Component Framework (PCF) Controls: For when you need to go beyond components or have a need to develop a reusable control that supports both types of Power Apps, PCF Controls represent the next logical solution for you to consider. Written using TypeScript and Node.Js / NPM, they allow developers to implement highly customisable controls, built for and optimised for the modern web. If you\u0026rsquo;ve come from a Dynamics CRM/365 Customer Engagement background, we can best think of them as the \u0026ldquo;new\u0026rdquo; way of developing solutions traditionally suited for HTML / CSS / JavaScript Web Resources. Boasting a streamlined development experience and support for both types of Power Apps (canvas apps are in preview, as of 3rd January 2021), they allow programmers to completely alter how a field, control or table behaves within an app. We will focus on PCF Controls as part of an entire post later on in this blog series. Getting into the mindset of reducing the amount of time it takes to deploy a solution, by first ensuring that your solution is ultimately reusable, is an essential concept for any Power Platform developer to grasp and should always be the key objective of your daily work. Use the above tools wisely to support this objective and don\u0026rsquo;t always resort to PCF controls, if components or another solution will do the job.\nCustom Connectors Overview As we touched upon in last week\u0026rsquo;s post, an arguable benefit of adopting canvas Power Apps is that they are ultimately agnostic when it comes to the data sources you wish to connect to. Indeed, as we saw, it\u0026rsquo;s possible to connect up various cloud or on-premise applications, covering both Microsoft and third-party vendors. However, consider the following two scenarios:\nYour organisation has a legacy, on-premise API, that you need to communicate with from the Power Platform. This API is complex, using technology such as SOAP. You need to find a way of securely exposing this out for access into the Power Platform and allow others in the organisation to authenticate and access the API\u0026rsquo;s core functionality straightforwardly. You are an ISV Developer with a bespoke API you\u0026rsquo;ve built out. You want to allow your customers to interact quickly with your solution without needing detailed knowledge of how to construct Web API queries or an understanding of concepts such as OData. In both of these situations, we may struggle to identify a suitable, default connector on offer that allows us to meet the core objectives - namely, providing a straightforward and familiar way for Power Platform users to work with the API within their apps and flows. Enter stage right custom connectors. Using these, developers can import the definition of their APIs from either a custom wizard, a Postman collection or an OpenAPI definition, define the various authentication and capabilities of the API, and then share it out for users to start working with. ISV developers can then go a step further by getting their connector certified by Microsoft, allowing them to publish this onto AppSource for anyone worldwide to start using. Custom connectors are ideal if you anticipate multiple users needing to interact with a single API across the Power Platform, as they reduce the complexity involved in interacting with API\u0026rsquo;s. When used appropriately, custom connectors can speed up delivering multiple solutions and ensure that we are making our work infinitely reusable within an organisation. 
Again, custom connectors will be a crucial focus later on in the blog series, so let\u0026rsquo;s not get too lost in the woods with them right now.\nWhat are Server-Side Components? As a developer, it may be desirable for us to ensure we are building components that reside and execute server-side, as opposed to client-side. This desire is particularly true in the context of Microsoft Dataverse, as there will be situations where we need to validate, reject, override or approve specific user actions automatically, to ensure that we can always follow our desired business processes. Typically, components of this nature will execute without the user necessarily being aware of what\u0026rsquo;s going on, but this is not always the case (e.g. we may want to return a custom error message when a user violates a business condition). When building our solutions on top of the Power Platform and, specifically, Microsoft Dataverse, it\u0026rsquo;s prudent to make ourselves familiar with the following components:\nBusiness Rules: Although more traditionally targeted towards client-side validation, we can also configure Business Rules to execute at the table (i.e. server) level. This will allow us to, for example, set the value of a field if a condition is met, without needing to refer to a Power Automate flow, classic workflow or another tool. Our logic will always be obeyed, regardless of whether we\u0026rsquo;re using a model-driven or canvas app. Business rules are most useful when you have to model this very simplistic kind of logic, but may start to fall over if you need to, for example, perform calculations or complex logic evaluation targeting multiple tables. Plug-ins: When we find our business logic impossible to map without resorting to code of some kind, plug-ins come to the rescue in allowing us to implement C# or VB.NET based class assemblies, that can interact directly with the platform, either synchronously (i.e. as part of the database transaction) or asynchronously (i.e. as a separate process, after the core database transaction has completed). As we can incorporate any bespoke logic we want, plug-ins offer a high degree of flexibility, within the confines of specific limitations and with the natural expectation of needing experience in the appropriate programming language to implement. Business Process Flows (BPFs): Similar to Business Rules, BPFs will most often be utilised within the context of the user interface and, specifically, a model-driven app. However, it is worth highlighting that the core information relating to a BPF is managed server-side. Details regarding the current process stage, its duration within a stage, and even custom attributes are ultimately stored within the Dataverse. Developers are free to interact with this at any time. They can even alter the progress of a BPF, based on any pre-requisite conditions, thereby ensuring that rows proceed to the appropriate resolution, business area or individual. Real-Time Classic Workflows: As a final consideration (for reasons I will highlight shortly), developers can also implement simple or complex workflow automation steps, using a guided interface. This experience is virtually identical to the workflow creation experience within Dynamics CRM / 365 Customer Engagement. It also affords the same benefits - namely, in allowing us to trigger synchronous business logic without using a plug-in.
We can also have the best of both the plug-in and real-time workflow world, by implementing a custom workflow assembly to perform more complex operations and - again - achieve a high degree of reusability within a solution. So why do I say we should consider real-time workflows as a last resort? Microsoft has very clearly indicated that we should avoid creating classic, background workflows and use Power Automate flows instead to achieve the same functionality. There has been no official word (yet) regarding the status of real-time workflows. Still, my recommendation is to avoid using them unless they are necessary, and you do not have sufficient C#/VB.NET knowledge to create a plug-in instead. Once more, please don\u0026rsquo;t concern yourself with these complex topics for now, as we\u0026rsquo;ll be returning to them later on. Instead, focus your attention on the potential usage cases, benefits and disadvantages of each component.\nToday, we\u0026rsquo;ve summarised several concepts at a high level, all of which are useful to consider at the design stage of a Power Platform solution. We\u0026rsquo;ll be deep-diving into many of these in the weeks and months ahead. In the next post in this series, we\u0026rsquo;ll review the various extensibility points within the different Power Platform applications, rounding off our discussion concerning the first exam area in the process.\n","date":"2021-01-03T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-designing-solution-components-within-the-power-platform/","title":"Exam PL-400 Revision Notes: Designing Solution Components within the Power Platform"},{"content":"This one crept under the radar for me, but the PL-400: Microsoft Power Platform Developer exam recently came out of beta and is now - one would hope - in a good state for a broader audience to sit and, all being well, attain the brand new certification aligned towards this. I sat the exam while it was in beta and was rather chuffed and surprised when I got news of my result a few days ago\u0026hellip;\nBeta results for the PL-400 exam have landed, and pleased to say that I\u0026#39;ve passed! This certification is an absolute must for any developer working heavily with the #PowerPlatform. https://t.co/unvCu5WUZP\n\u0026mdash; Joe Griffin | #ProCodeNoCodeUnite (@joejgriffin) December 24, 2020 With a passing grade secured for this, I now feel (somewhat) more confident to start another revision notes series, to assist those taking the exam in future. Therefore, I\u0026rsquo;m pleased to welcome you to the first post in this series! As always, we focus our attention on the Skills Measured area of each exam, which is freely available for study by all and sundry. Top of the agenda is the Create a technical design area of the exam, which has a total weighting of 10-15% and the first section of which concerns the following:\nValidate requirements and design technical architecture\ndesign and validate the technical architecture for a solution design authentication and authorization strategy determine whether you can meet requirements with out-of-the-box functionality determine when to use Logic Apps versus Power Automate flows determine when to use serverless computing, plug-ins, or Power Automate determine when to build a virtual entity data source provider and when to use connectors Before we start, some of the more eagle-eyed readers of the blog may start getting a feeling of Déjà Vu as you read through this post and the ones that follow on.
That\u0026rsquo;s because I\u0026rsquo;ve adapted content from my previous series on the now legacy MB-400 exam. In actual fact, a lot of the core content is broadly similar between both exams. There are some crucial differences, particularly around terminology, core focus areas and the introduction of new functionality that wasn\u0026rsquo;t present in late 2019. Rather than fixing the previous posts, I thought it would be better to keep things separate, re-utilise existing content, and refresh it accordingly. I should also highlight that this post and the series aims to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Ideally, your revision should involve a high degree of hands-on testing and familiarity working with the platform. But that\u0026rsquo;s enough jabbering - let\u0026rsquo;s dive into the first topic!\nPower Platform Technical Architecture The Power Platform is Microsoft\u0026rsquo;s low-code, rapid business application development platform, which can help inspire organisations to do more with less, and often forego the need to develop a new software solution from scratch. Within the Power Platform, we have several different independent, yet closely related products, that we can leverage:\nPower Apps: These come in two flavours. Model-driven apps are designed for desktop-based scenarios, where your application needs to sit within the confines of a strict data model. In this respect, you may often hear these types of apps referred to as data-driven applications. If you\u0026rsquo;ve come from a Dynamics CRM / Dynamics 365 background, then you may recognise a lot of the functionality available within model-driven apps. In comparison, Canvas apps are geared towards mobile-first scenarios, providing app developers with a high degree of freedom in designing their apps and deploying them to a wide variety of different devices or alongside other applications within the Power Platform. Canvas apps also have the benefit of being interoperable with a wide variety of data sources. Whether you wish to connect to an on-premise SQL Server instance, other Microsoft solutions such as SharePoint or third-party apps, such as Twitter, connectors are available to perform common Create, Read, Update and Delete (CRUD) operations and more. Power BI: A next-generation Business Intelligence (BI) tool, Power BI provides impressive data modelling, visualisation and deployment capabilities, that enable organisations to understand data from their various business systems better. Despite having its own set of tools and languages, traditional Excel power users should have little difficulty getting to grips with Power BI, thereby allowing them to migrate existing Excel-based reports across with ease. Power Automate: As a tool designed to automate various business processes, Power Automate flows can trigger specific activities based on events from almost any application system. It is a modern and flexible tool that you can use to address various integration requirements. Power Virtual Agents: Many of us will be familiar with the various live chat solutions that we see across different websites that are often operated by one or multiple individuals and help answer commonly asked queries. Power Virtual Agents takes this a step further, by allowing for an automated, always-on bot to reside within your website or Microsoft Teams site, that individuals can then engage with. 
Developers construct a chatbot using an interactive editor, and can straightforwardly incorporate external integrations without writing any code. Microsoft Dataverse (formerly known as the Common Data Service): The Dataverse provides a \u0026ldquo;no-code\u0026rdquo; environment to create tables, relationships and business logic, to name but a few of its capabilities. Within the Dataverse, Microsoft has standardised the various tables to align with The Common Data Model. This open-source initiative seeks to provide a standard definition of commonly used business data constructs, such as Account or Contact. The diagram below - lazily stolen, er, lovingly recycled from Microsoft - illustrates all of these various applications and how they work together with other Microsoft services you may be familiar with:\nYou may often hear questions about how Dynamics 365 fits within the Power Platform, the answer to which sometimes raises more questions than it answers. For this exam, you don\u0026rsquo;t need to worry too much about this dimension. However, to briefly summarise, solutions such as Dynamics 365 Sales or Dynamics 365 Service leverage aspects of the Power Platform underneath the hood. For example, both of the previously mentioned solutions use Microsoft Dataverse and model-driven Power Apps.\nUnderstanding how each separate Power Platform application can work in tandem is critical when building an all-encompassing business application. The examples below provide a flavour of how these applications can work together, but the full list would likely cover several pages:\nIncluding a Power Automate flow as part of a Dataverse solution, allowing you to then deploy this out to multiple environments with ease. Being able to embed a Power BI tile or Dashboard within a personal dashboard set up in a model-driven Power App. Embedding a canvas Power App into Power BI, allowing users to update data in real-time. Calling a Power Automate flow from a Power Virtual Agent, to return information from an on-premise Oracle database. As developers of the platform, Microsoft expects us to know the detailed scenarios that the Power Platform can unlock for organisations and, where possible, identify the most efficient solution to adopt, which may often negate the need for writing custom code.\nHandling Security \u0026amp; Authentication Ensuring that critical business data is subject to reasonable and, where appropriate, elevated access privileges is typically an essential requirement as part of any central business system. The key benefit that the Power Platform brings to the table here is that it uses one of the best identity management platforms available today - Azure Active Directory (AAD). Some of the benefits that AAD can bring to the table include:\nProviding a true single sign-on (SSO) experience across multiple 1st/3rd party applications, backed up by robust administrator controls and auditing capabilities. Allowing full support for user principal or security group level controls, via role-based access controls (RBAC). Access to a wide range of security-minded features, such as Multi-Factor Authentication (MFA), risky sign-in controls and self-service password reset capabilities, should a user account or its associated password be detected as a potential risk.
When it comes to managing security or access within the Power Platform, this will differ, based on which application you are working with:\nFor model-driven Power Apps and the Dataverse, you can leverage capabilities such as Business Units or Security Roles, to provide a structured, hierarchical security model. This functionality can be extended further, via features such as field security profiles, thereby allowing you to secure specific table columns in a variety of different ways. Data security within a canvas Power Apps is typically managed by the data source you are connecting with - for example, users connecting to the Dataverse will have any security role privileges applied automatically. For other applications, you may need to consult the relevant documentation and ensure, where possible, you are using SSO to simplify this process. Consider also that developers can share canvas Power Apps to any other user on the tenant, which could inadvertently share privileged access to a particular system. In this scenario, consider what app resources are associated with each canvas Power App and the impact that sharing the app will have. Power Automate flows follow similar sharing principles to canvas apps, allowing you to create team flows that others in the organisation can interact with. Security for 3rd party applications is dictated mainly in the same manner as canvas apps. Finally, Power BI includes several features to help you manage access, such as Workspaces, Apps, or by simply sharing your report/dashboard to another user. Most of these features are only available as part of a paid subscription. Again, the security/privileges of any underlying data source depends upon the account used to authenticate with the underlying data. You may need to resort to Row-level security (RLS) for stringent scenarios to ensure data is restricted accordingly. Typically, a developer will want to design any application to use the Dataverse as the underlying data source for the solution. The security and record restriction features afforded here will more than likely be suitable for most situations.\nComparing Logic Apps to Microsoft Power Automate Flows Confusion can arise when figuring out what Azure Logic Apps are and how they relate to Power Automate. That\u0026rsquo;s because they are almost precisely the same; Power Automate uses Azure Logic Apps underneath the hood and, as such, contains most of the same functionality. Determining the best situation to use one over the other can be a bit of a challenge. The list below summarises the pro/cons of each tool:\nAzure Logic Apps Enterprise-grade authoring, integration and development capabilities. Full support for Azure DevOps or Git source control integration. \u0026ldquo;Pay-as-you-go\u0026rdquo; - only pay for if and when your Logic App executes. Cannot be included in solutions. Must be managed separately in Microsoft Azure. Does not support Office 365 data loss prevention (DLP) policies Target Audience: Developers who are familiar with dissecting structured JSON definitions Power Automate Easy-to-use development experience Can be included within solutions and trigger based on specific events within the Dataverse Supports the same connectors provided within Azure Logic Apps Difficult to configure alongside complex Application Lifecycle Management (ALM) processes. Fixed monthly subscription, with quotas/limits - may be more expensive compared to Logic Apps. 
Must be developed using the browser/mobile app, with no option to modify the underlying code definition. Target Audience: Office 365 power users or low/no-code developers In short, you should always start with Power Automate flows in the first instance. Consider migrating across to Logic Apps if your solution grows in complexity, your flow executes hundreds of times per hour, or you need to look at implementing more stringent ALM processes as part of your development cycles. Fortunately, Microsoft makes it easy to migrate your Power Automate flows to a Logic App.\nComparing Serverless Computing to Plug-ins Serverless is one of those buzzwords that gets thrown around a lot these days. 😀 But it is something worth considering, particularly in the context of the Power Platform. The recent changes around API limits also make serverless computing - via the Azure Service Bus, for example - a potentially desirable option to reduce the number of API calls made into the Dataverse. The list below summarises the pros/cons of each route:\nServerless Compute Allows developers to build solutions using familiar tools, but leveraging the benefits of Azure. Not subject to any sandbox limitations for code execution. Not ideal when working with non-Azure based services/endpoints. Additional setup and code modifications required to implement. No guarantee of the order of execution for asynchronous plug-ins. Plug-ins Traditional, well-tested functionality, with excellent samples available. Reduces technical complexity of any solution, by ensuring it remains solely within the Dataverse. Full exposure to the underlying database transaction. Impractical for long-running transactions/code. Not scalable and subject to any platform performance/API restrictions. Restricts your ability to integrate with separate, line-of-business (LOB) applications. Comparing Virtual Tables to Connectors The core idea of adopting the Power Platform is to ultimately reduce the number of separate systems within an organisation and, therefore, any complex integration requirements. Unfortunately, this endeavour usually fails in practice and, as system developers, we must, therefore, contemplate two routes to bringing data into the Dataverse:\nVirtual Tables: Available now for several years, this feature allows developers to \u0026ldquo;load\u0026rdquo; external data sources in their entirety and work with them as standard tables. Provided that this external data source is accessible via an OData v4 endpoint, it can be hooked up to straightforwardly; for more complex needs, developers can build a custom data provider, thereby allowing operability with any possible data source. Historically, the critical restriction around virtual tables was that all data would be in a read-only state once retrieved, and it was not possible to write or create new records. However, Virtual Tables now support full CRUD operations, as a consequence of recent updates from Microsoft. You can refer to the following article for details on how to get started on this. Thanks to WB below in the comments for providing the link. Connectors (AKA Data Flows): A newer experience, available from within the Power Apps maker portal, this feature leverages the full capabilities provided by Power Query (M) to allow you to bring in data from a multitude of different sources. As part of this, developers can choose to create tables automatically, map data to an existing table and specify whether to import the data once or continually refresh it in the background.
Because any rows loaded are stored within a proper table, there are no restrictions on working with the data. However, this route does require additional setup and familiarity with Power Query. It\u0026rsquo;s also not bi-directional (i.e. any changes to records imported from SQL Server will not synchronise back across). Ultimately, the question you should ask yourself when determining which option to use is, Do I need the ability to create, update or delete records from my external system? If the answer is No, then consider using Virtual Tables.\nHopefully, this first post has familiarised yourself with some of the core concepts around extending the Power Platform. In the next post, we\u0026rsquo;ll be looking at how we can design various components within our Power Platform solution, using the tools on offer from Microsoft.\n","date":"2020-12-27T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/exam-pl-400-revision-notes-designing-a-technical-architecture-for-the-power-platform/","title":"Exam PL-400 Revision Notes: Designing a Technical Architecture for the Power Platform"},{"content":"Some more news relating to Dynamics 365 certification was announced by Microsoft earlier this week, in a move that will (hopefully) address concerns over the current entry-level exam being too broad in function. In this, I\u0026rsquo;m of course referring to the MB-901 Fundamentals exam for Dynamics 365. Within this, candidates must demonstrate general knowledge regarding a whopping 15+ different Dynamics 365 applications that may, in some cases, share the same base functionality, but are vastly different in other respects. The great benefit of a business application suite like Dynamics 365 is its ability to address multiple, interlinked business areas, reducing the friction involved when interacting between organisational units. However, expecting candidates to understand all these applications is frankly ridiculous and not at all close to reality, based on the professionals involved with these products day-to-day. Case in point - despite being a seasoned Dynamics 365 professional, I couldn\u0026rsquo;t tell you the first thing about how the purchase cycle within Dynamics 365 Business Central works. 😅\nSo it\u0026rsquo;s with a large degree of thanks that we can say goodbye to the MB-901 exam, which will be retired on June 30th 2021. In its place, we can now say hello to two, brand new exams, that are, on the face of it, better suited towards introducing people to the Microsoft business application landscape:\nExam MB-910: Microsoft Dynamics 365 Fundamentals Customer Engagement Apps (CRM) Exam MB-920: Microsoft Dynamics 365 Fundamentals Finance and Operations Apps (ERP) Let\u0026rsquo;s now dive into the content of each of the new exams\u0026hellip;\nMB-910: CRM is Back, Baby! Perhaps reflecting the continued temptation to use the older terminology associated with specific products in the Dynamics 365 family, it is gratifying to see a return to the Customer Relationship Management (CRM) acronym. The simple reason? It perhaps most neatly explains to most business users the purposes of the applications referred to within the exam specification. 
Specifically, Microsoft expects candidates of this exam to demonstrate knowledge of how to use and operate with:\nDynamics 365 Sales (available in both a Professional and Enterprise variant) Dynamics 365 Marketing Dynamics 365 Field Service Dynamics 365 Customer Service Dynamics 365 Project Operations (AKA Project Service Automation) Also, knowledge of some of the core configuration aspects related to all these applications, such as setting up price lists or resource management, are covered as well, alongside working with reporting tools such as Power BI and integrating the application alongside Microsoft Teams, SharePoint Online and Exchange. The various Insight applications (Dynamics 365 Sales Insight, Dynamics 365 Customer Insights etc.) are also covered but would appear to have reduced emphasis compared with MB-901. In summary, therefore, the exam provides a focus towards the Dynamics 365 applications most interwoven with each other. Specifically, all of the above applications sit on top of Microsoft Dataverse (AKA the Common Data Service) and are very tightly interwoven alongside the Power Platform. It, therefore, provides an excellent opportunity for established Power Platform professionals to learn about the range of out of the box and quick to deploy components within the \u0026ldquo;premium\u0026rdquo; Dynamics 365 apps, that can often save you a lot of time and hassle when addressing common business scenarios.\nMB-920: The First Pure ERP Fundamental Exams One of the difficulties behind creating an introduction exam to any Enterprise Resource Planning (ERP) system is that these applications are typically very complex and challenging to understand and implement effectively. This new exam, therefore, will be an intriguing experiment to see whether you can expect candidates to show sufficient knowledge of the following systems:\nDynamics 365 Supply Chain Management Dynamics 365 Finance Dynamics 365 Human Resources Dynamics 365 Commerce Dynamics 365 Fraud Protection Dynamics 365 Business Central It\u0026rsquo;s also worth highlighting that Dynamics 365 Project Operations is also covered on this exam well. This makes sense, given the application leverages a model-driven application sitting on top of Microsoft Dataverse and also capabilities within the Dynamics 365 Finance product as well. For this exam, it appears you only need to worry about learning the Dynamics 365 Finance aspects of the solution, which is a small relief. Alongside this, Microsoft expects us to know about \u0026ldquo;shared features\u0026rdquo; with other Microsoft products, such as Power BI and Microsoft Teams. This section of the exam appears to be directly copied and pasted from the MB-910 spec and has such a low weighting that I\u0026rsquo;m unsure what detailed knowledge (if any) you will need to know regarding specific integration points. My high-level thoughts on this exam are a little bit uncertain. I suspect we could be witnessing a real-life experiment to see whether candidates can engage and grasp the various ERP solutions from Microsoft as part of a single learning cycle and exam sitting.\nConclusions or Wot I Think A change to the MB-901 exam, either in terms of contents or in replacing it entirely, was always inevitable. The audience for the exam was almost non-existent in real-life terms, given that people naturally drift to either the CRM or ERP side of Dynamics 365 and do not generally venture much into \u0026ldquo;the other side\u0026rdquo;. 
Attempting to cram all of this within a single exam also made it impossible for professionals to get a comprehensive look over the garden fence and, most importantly, fully understand the usage case and standard features within each Dynamics 365 application. I\u0026rsquo;m hoping that this change will help to address these concerns and, consequently, reduce the barrier of entry for those interested in focusing their career towards delivering these solutions. Both exams will be out in beta on or around February 2021, and I\u0026rsquo;ll almost certainly be checking out the MB-920 exam when it lands - so I can ensure I know what each of the various ERP Dynamics 365 solutions do.\nWhat do you think about these latest exams? Will you be taking them yourself next year? Let me know in the comments below!\n","date":"2020-12-20T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/new-dynamics-365-fundamentals-certifications-overview-welcome-back-crm-erp/","title":"New Dynamics 365 Fundamentals Certifications Overview: Welcome Back CRM \u0026 ERP!"},{"content":"Both the advantage and disadvantage of working in the Microsoft Business Applications space is the massive variety of different, available tools at our disposal that we can leverage to solve common business issues. However, unless you\u0026rsquo;ve spent some time researching or getting shown new subjects from someone \u0026ldquo;in the know\u0026rdquo;, you often avoid stepping outside of your comfort zone. It was great, therefore, to recently join a session hosted by the excellent people over at the Virtual Power Group all about Adaptive Cards, a modern tool for allowing open exchanges of information between a variety of different systems. And, as luck would have it, I\u0026rsquo;ve been involved in a project recently where there was an excellent opportunity to put them into action. So I rolled up my sleeves, took them for a test drive and wanted to share how I overcame what turned out to be a less than straightforward requirement\u0026hellip;\nThe overall need for the solution was to display a list of variable data generated from a system, within a table, that a user could then consume from within Outlook. The underlying data was essentially a set of key/value pairs of documents and their corresponding URLs, derived from SharePoint Online. This information needed to be sent out via a Power Automate flow and resemble the format indicated below:\nSounds easy, right? 😅 Well, it is sort of, but not as simple as I might have hoped when starting. To begin with, we need to initialise a couple of array variables, which are used to store the relevant JSON snippets to store the data:\nNext, because of how ColumnSets work in Adaptive Cards, we need to populate the first row in each of our columns with the header values for our dataset. In this case, as well, we want to make it extra clear that this is the header row, by ensuring that the column values we supply render in bold. Therefore, we need to call two Append to array variable actions to get this set up correctly:\nHere\u0026rsquo;s a copy/pastable version of this snippet below - change the text value to suit your specific scenario:\n{ \u0026#34;type\u0026#34;: \u0026#34;TextBlock\u0026#34;, \u0026#34;text\u0026#34;: \u0026#34;Document Type\u0026#34;, \u0026#34;weight\u0026#34;: \u0026#34;bolder\u0026#34; } Next, we need to actually get the data that we want to display.
In this situation, the data is being pulled from Microsoft Dataverse (AKA the Common Data Service), and the Document Type value is an Option Set. We also want to make sure that the URL\u0026rsquo;s are embedded as hyperlinks so that people can easily navigate to the document in question. So this will get interesting. 😁 Once our data has been retrieved, we then need to implement an Apply to each and, to begin with, the two compose steps we need that will extract the correct values from each row returned:\nAgain, here\u0026rsquo;s the raw JSON snippets. We are using a bit of Markdown magic to ensure the hyperlink generates correctly; you can modify the text in the square brackets to change the text of the resulting link:\n{ \u0026#34;type\u0026#34;: \u0026#34;TextBlock\u0026#34;, \u0026#34;text\u0026#34;: \u0026#34;@{items(\u0026#39;Apply_to_each\u0026#39;)?[\u0026#39;jjg_doctype@OData.Community.Display.V1.FormattedValue\u0026#39;]}\u0026#34; } { \u0026#34;type\u0026#34;: \u0026#34;TextBlock\u0026#34;, \u0026#34;text\u0026#34;: \u0026#34;[@{items(\u0026#39;Apply_to_each\u0026#39;)?[\u0026#39;jjg_url\u0026#39;]}](@{items(\u0026#39;Apply_to_each\u0026#39;)?[\u0026#39;jjg_url\u0026#39;]})\u0026#34; } Then we do another append step again, which will get each row from our dataset added on as we\u0026rsquo;d expect, by referencing the output of the previous two compose actions:\nNow we can bring it all together for a final compose action, which will inject in the dynamic values we\u0026rsquo;ve specified above. Here\u0026rsquo;s the final snippet to use to get this built out:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;http://adaptivecards.io/schemas/adaptive-card.json\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;AdaptiveCard\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;1.3\u0026#34;, \u0026#34;body\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Container\u0026#34;, \u0026#34;items\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;TextBlock\u0026#34;, \u0026#34;text\u0026#34;: \u0026#34;Sample Adaptive Card\u0026#34;, \u0026#34;weight\u0026#34;: \u0026#34;Bolder\u0026#34;, \u0026#34;size\u0026#34;: \u0026#34;Medium\u0026#34; } ] }, { \u0026#34;type\u0026#34;: \u0026#34;Container\u0026#34;, \u0026#34;items\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;TextBlock\u0026#34;, \u0026#34;text\u0026#34;: \u0026#34;This is an example adaptive card displaying dynamic tabular data.\u0026#34;, \u0026#34;wrap\u0026#34;: true } ] }, { \u0026#34;type\u0026#34;: \u0026#34;ColumnSet\u0026#34;, \u0026#34;columns\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Column\u0026#34;, \u0026#34;width\u0026#34;: \u0026#34;200px\u0026#34;, \u0026#34;items\u0026#34;: @{variables(\u0026#39;DocumentTypeJSON\u0026#39;)} }, { \u0026#34;type\u0026#34;: \u0026#34;Column\u0026#34;, \u0026#34;width\u0026#34;: \u0026#34;stretch\u0026#34;, \u0026#34;items\u0026#34;: @{variables(\u0026#39;URLJSON\u0026#39;)} } ] } ], \u0026#34;actions\u0026#34;: [] } From there, you can then implement the appropriate step to distribute your Adaptive Card - whether through Microsoft Teams, Outlook or elsewhere.\nAdaptive Cards provide some impressive capability that can sit neatly alongside the Power Platform, and I\u0026rsquo;ve been kicking myself for not taking a proper look at them until now. Although to be fair, you do need to identify a particular type of business requirement to leverage them to their fullest capability. 
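As an aside, if you would like to sanity-check the assembled payload before wiring it up to Outlook or Teams, one option (purely a suggestion on my part, and not something the flow itself requires) is the adaptivecards npm package, which can parse and render a card definition locally:

```typescript
// Hedged sketch only: previewing an Adaptive Card payload in a browser page
// using the adaptivecards npm package. Swap in the JSON generated by your flow.
import * as AdaptiveCards from "adaptivecards";

const payload = {
  $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
  type: "AdaptiveCard",
  version: "1.3",
  body: [{ type: "TextBlock", text: "Sample Adaptive Card", weight: "Bolder" }],
};

const card = new AdaptiveCards.AdaptiveCard();
card.parse(payload); // Load and validate the JSON definition.
const rendered = card.render(); // Returns an HTMLElement (or undefined on failure).
if (rendered) {
  document.body.appendChild(rendered);
}
```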
I recommend you check out the recordings of all the Virtual Power Group\u0026rsquo;s online sessions on Adaptive Cards, so you can hopefully discover their benefits and how easy it is to get started with them. I\u0026rsquo;ve included the links below to all the sessions:\nAdaptive Cards: An Introduction \u0026amp; MS Teams Adaptive Cards: Actionable Messages in Outlook Adaptive Cards: A Developer\u0026rsquo;s Perspective ","date":"2020-12-13T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/using-power-automate-to-generate-dynamic-tabular-data-within-adaptive-cards/","title":"Using Power Automate to Generate Dynamic Tabular Data Within Adaptive Cards"},{"content":"We all like a good deal at the end of the day, regardless of what we are purchasing. So it was great to hear about a new, limited-time pricing offer from Microsoft concerning Power Apps this week, which provides some significant discounts to the standard, Microsoft direct list prices. Here\u0026rsquo;s a rundown of what\u0026rsquo;s on offer:\nPer app plan: Purpose: Allows organisations to license individual apps to groups of users, while also having access to capabilities such as Microsoft Dataverse and Power Automate. Normal Price: $10/user/app/month Discounted Price: $3/user/app/month Per user plan: Purpose: For situations where organisations are running many different apps, and need a streamlined license that provides for unlimited app capacity on a tenant and enhanced storage options within Microsoft Dataverse Normal Price: $40/user/month Discounted Price: $12/user/month That represents a whopping seventy percent (70%) discount - Christmas has come early it seems! Now, before you get too excited, be sure to keep in mind the following terms \u0026amp; conditions with the offer:\nThe offer will remain valid between December 1st through to June 2021. Only customers buying licenses via a volume licensing or Cloud Solutions Provider (CSP) partner will be able to take advantage of the reduced price. Both offers are subject to a minimum purchase. For the per-app plan, customers must purchase 200 licenses; for the per-user plan, 5,000 licenses must be ordered. For volume licensing customers on Enterprise Agreements, this represents an excellent opportunity for you to secure licenses for one of today\u0026rsquo;s premier low-code, business application development platforms and to hold these prices over several years. Unfortunately, CSP agreements typically last for a maximum of 1 year, so it won\u0026rsquo;t be possible to secure this discounted rate for longer than this if you\u0026rsquo;re transacting directly with a partner. Regardless, there is still a great opportunity here for your organisation to save money and take advantage of some of the various capabilities within the Power Platform.\nHave any questions about this offer? Please let me know in the comments below. 😀\n","date":"2020-12-06T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/announcing-a-new-limited-time-power-apps-licensing-offer-for-2020-2021/","title":"Announcing a New Limited Time Power Apps Licensing Offer for 2020/2021!"},{"content":"I\u0026rsquo;d like to consider myself as being somewhat knowledgeable when it comes to Microsoft SQL Server. However, when recently studying for the DP-200: Implementing an Azure Data Solution exam, I did find myself doubting my abilities. The cause of this? The topic of Azure Synapse Analytics (formerly known as Azure SQL Data Warehouse). 
This product introduces lots of new concepts, features and capabilities that you feel you should know already. It\u0026rsquo;s very similar to SQL Server and uses the same T-SQL query language; therefore, it can\u0026rsquo;t be that different, right? 😅 Well, it turns out, the underlying architecture of the platform, alongside several core capabilities, differs vastly from what you would expect traditional SQL Server to support, and even its corresponding cloud offering, Azure SQL Database. The main challenge I found was understanding how data distribution works on the platform, and also determining the most suitable index type to configure for a table, depending on its usage scenario. So what I thought I\u0026rsquo;d do in today\u0026rsquo;s blog is discuss these topics in detail, and if, like me, you found yourself struggling to understand them, hopefully, this post will help you to gain a better understanding of these core concepts.\nUnderstanding Table Distribution When working with a traditional SQL Server database, we anticipate our database will reside within a file and, in most cases, be stored on a single disk drive. Azure Synapse Analytics adopts a far more complex, but arguably scalable and cost-efficient, means of storing your data. As noted in this Microsoft Docs article, Azure Storage is utilised in a distributed manner, to hold your data securely. When you connect to an Azure Synapse Analytics service via SSMS, it\u0026rsquo;s not a SQL Server database engine in the traditional sense. Instead, you\u0026rsquo;re hooking yourself up to a control node, which then leverages the compute nodes and Data Movement Service (DMS) to process your queries, find your data and return this in the format you\u0026rsquo;ve requested. How the platform distributes your data is dictated by you when you build your table out for the first time. So the best way to understand this is to take a look at an example. The query below resembles a traditional CREATE TABLE script, with an additional modification in the second portion of the code snippet. Can you spot what it is?\nCREATE TABLE MySchema.MySynapseAnalyticsTable ( MyColumn1 NVARCHAR(250) NOT NULL, MyColumn2 MONEY NULL, MyColumn3 INT NULL ) WITH ( DISTRIBUTION = HASH(MyColumn1), CLUSTERED COLUMNSTORE INDEX ); GO The first line within the WITH options section is what controls table distribution, and we have three options to choose from:\nHash: In this option, you nominate a column as the distribution column. As you add new rows to the table, Synapse Analytics applies a deterministic hash function to the value within the distribution column and uses the result to assign the row to one of the fixed set of underlying distributions, so rows with the same value always end up in the same distribution. Distributions of this type will typically be most effective when you need to perform joins or aggregate queries targeting the distribution column, and for situations where you\u0026rsquo;re working with large fact/dimension tables that are over 2GB in size. Round Robin: Data within tables distributed in this manner is spread evenly across the underlying distributions, with no control over how rows are assigned. Queries involving joins that target tables of this type will typically suffer from poor performance. You should only really use this distribution type for temporary tables or staging tables. Replicated: Designed for very small, (ideally) dimension tables, that are less than 2GB in size, rows in tables of these types are copied out to every compute node.
As such, you can expect your storage space and costs to balloon if you\u0026rsquo;re not too careful with this distribution type. The performance of your queries also suffers over time, given that writes to the table will need to occur across multiple compute nodes simultaneously. So in the example above, we are specifying that the table should use the Hash distribution method, with MyColumn1 set as the distribution column. We would anticipate the value of this column to be used as part of joins further down the road.\nIndexes: What\u0026rsquo;s available and what\u0026rsquo;s best for your particular needs Returning to the example query above yet again, you will also notice that we are supplying a second option as part of our WITH options; in this case, CLUSTERED COLUMNSTORE INDEX. Here we are instructing the table on which index type to leverage and, similar to our distribution types, we have three options available:\nClustered/Nonclustered: For those coming from a SQL Server background, these are perhaps the most self-explanatory index types on offer. They are designed for situations where you may need to return a single or very few rows in the table. Often, these will be the types of queries that your front-end reporting application will be running most regularly. Given their precise nature, they will not be suitable when you anticipate all manner of different queries hitting your tables. Clustered Columnstore: Synapse Analytics creates this as the default index when you don\u0026rsquo;t specify an option for your table, and it is generally the first option you should consider when you are unsure what\u0026rsquo;s needed. As well as providing a high level of compression and the overall best query performance, they are also well suited to larger tables. However, they don\u0026rsquo;t support any tables that contain maximum length string/binary columns (NVARCHAR(MAX) etc.). Also, for tables with fewer than 60 million rows, you may notice that you are not getting the best overall compression for your data. In this situation\u0026hellip; Heap: \u0026hellip;consider using this index type. Tables configured with this will typically benefit from faster load times. As such, they are the best type of index to use for temporary or staging tables. So, at this stage, it should start to become apparent how you can best mix \u0026amp; match distribution and index types, to achieve the best overall performance for your applications. Round robin tables with a heap index will be your best choice for any table that is used to load or hold data temporarily. For tables smaller than 2GB that are targeting precise, laser-focused queries, a replicated table with the required clustered/nonclustered indexes will be best. Flipping this on its head, finally, consider a hash table with a clustered columnstore when you have a huge table, and you want to ensure Synapse Analytics compresses your data most effectively.\nAlthough your particular needs may vary when it comes to Azure Synapse Analytics, this should hopefully cover most scenarios and give you a flavour of what will work best for your needs. I could spend many hours and weeks with Azure Synapse Analytics, as it looks to be a competent product, that combines the old and new in some pretty exciting ways. If you have any questions about Azure Synapse Analytics at all, let me know in the comments below.
😀\n","date":"2020-11-29T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/understanding-table-distribution-index-types-in-azure-synapse-analytics/","title":"Understanding Table Distribution \u0026 Index Types in Azure Synapse Analytics"},{"content":"I had an enjoyable time at the D365UG UK Cambridge event earlier this week. During this, we saw some great sessions from the likes of Ana Demeny, Andrew Bibby and Feridun Kadir, covering a variety of useful Dynamics 365 and Power Platform related topics. I was also fortunate to present a session myself, all about Environment Variables. The talk linked closely to a blog post I did a couple of weeks back, and I had an excellent question come up regarding the approach I used to retrieve the Environment Variable value within a Power Automate flow. If you recall from the post, I was using a somewhat convoluted method to retrieve the actual Environment Variable value I wanted via multiple List records and Apply to each steps, which I\u0026rsquo;ve illustrated in the screenshot below:\nFirst, let\u0026rsquo;s highlight why taking this approach may be less than ideal:\nAn Apply to each step does introduce a performance overhead into your flows. Your mileage may vary, but in the case of the above flow, there was an additional second\u0026rsquo;s worth of execution time introduced into the equation. For much larger flows, this approach could extend this delay even further. The above Flow does not make it immediately clear that all we are interested in is retrieving the first value from our List records step. You would have to add on a specific comment to make this apparent. Thankfully, there is a better way - and for new readers or for those who have jumped across from the previous post (hello, by the way 🙂 ), we\u0026rsquo;ll now show you how to do this.\nTo begin with, we want to expand out our List ____ Environment Variable Current Value action step and press the cross icon over the Environment Variable Definition field that\u0026rsquo;s referenced in there:\nWith that removed, we can now drag this action step and place it directly after the existing List ____ Environment Variable action:\nNow, we can look to replace the dynamic content field with a custom formula to return the attribute of the first record from the previous step. The syntax to use for this is as follows:\nbody()?[\u0026#39;value\u0026#39;]?[0]?.environmentvariabledefinitionid\nSo if we\u0026rsquo;ve called our List ____ Environment Variable action step List MyVariable Environment Variable, the formula would need to look like this:\nbody(\u0026#39;List_MyVariable_Environment_Variable\u0026#39;)?[\u0026#39;value\u0026#39;]?[0]?.environmentvariabledefinitionid\nDon\u0026rsquo;t forget the single quotes! If done correctly, your step should now resemble the below:\nNext, we need to perform the same steps for our Set Variable action step; namely, remove the dynamic content, drag it so that the action succeeds from our List ____ Environment Variable Current Value and add in a new formula.
This should do the same as above, but referencing the field we require from the Environment Variable Value table:\nbody()?[\u0026lsquo;value\u0026rsquo;]?[0]?.value\ne.g.\nbody(\u0026lsquo;List_MyVariable_Environment_Variable_Current_Value\u0026rsquo;)?[\u0026lsquo;value\u0026rsquo;]?[0]?.value\nIt should look something like this if done correctly:\nWith all that done and dusted, you can then remove the redundant Apply to each steps from the flow and save it, to confirm that all changes are applied successfully.\nAll in all, with this alternative route, we\u0026rsquo;ve been able to reduce the number of steps in the flow, simplifying its structure in the process, and improve its performance too; the 1-second delay spoken about earlier vanishes entirely as a consequence of making these changes. Many thanks to the Cambridge Dynamics 365 community for raising this as a question, and I hope this post provides enough detail on how to adapt it yourself within your environment. 😃\n","date":"2020-11-22T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/environment-variables-in-microsoft-power-automate-revisited/","title":"Environment Variables in Microsoft Power Automate Revisited"},{"content":"This week, we saw what I feel is the most innocuous, yet far-reaching change that Microsoft has made to Dynamics 365 Online and the Common Data Service (CDS). And I have to say, I don\u0026rsquo;t think I\u0026rsquo;m that conflicted over the changes.\nFirst, let\u0026rsquo;s backtrack and be clear about exactly what we\u0026rsquo;re talking about; namely, a buried-away Microsoft Docs article change, relating to the Common Data Service, that informs us of some significant terminology changes:\nResponding to customer feedback and data from user research, effective November 2020 we\u0026rsquo;re updating some terminology in Common Data Service to be more intuitive and make its usage more productive. The terminology updates are listed below, and we\u0026rsquo;re in the process of rolling them out across Microsoft Power Platform.\nLegacy term -\u0026gt; Current term\nEntity, entities -\u0026gt; Table, tables\nField, fields / Attribute, attributes -\u0026gt; Column, columns\nRecord, records -\u0026gt; Row, rows\nOption set, multi select option sets / Picklist, picklists -\u0026gt; Choice, choices\nTwo Options -\u0026gt; Yes/No\nThe changes made here break with the everyday vocabulary of a system that many have worked with for well over ten years. You can\u0026rsquo;t teach an old dog new tricks, they say, and indeed, for those long in the tooth with Dynamics CRM, Dynamics 365 and the Power Platform, this change may become a bitter pill to swallow. There\u0026rsquo;s already been discussion regarding this change within the community, such as by Daniel Cai and Alex Shlega. Outside of this, the general feel I\u0026rsquo;ve gotten from social media is that most people are receiving these changes somewhat negatively. Before I start wading into this debate myself, let\u0026rsquo;s take some time to review the background, to help understand the mindset and potential direction of travel regarding this change.\nDon\u0026rsquo;t Forget what the Common Data Service is: A SQL Database Those who are relatively new to working with the Common Data Service may not be aware of this. Still, it\u0026rsquo;s worth highlighting that the Common Data Service, in its most basic form, is just a managed Azure SQL Server database, with a graphical interface that allows us to create the appropriate tables, columns and views in our backend database. 
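In fact, you can see this for yourself. Assuming you have the SQL/TDS endpoint that Microsoft has been previewing enabled against your environment, you can point SQL Server Management Studio at it and run a perfectly ordinary T-SQL query - something like the purely illustrative example below:
SELECT TOP 5 accountid, name, createdon FROM account;
What comes back is, quite literally, a set of rows and columns from a table.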
In this context, the alignment of these terms to what we\u0026rsquo;d refer to them as in the underlying database system is pretty logical. It also allows us to refer to objects in a way that both Power Platform and SQL developers will readily understand.\nPoor implementation of a concept does not make the concept itself a bad idea These changes took me by surprise - I think I saw them applied within the Power Apps maker portal first before reading any of the social media posts on the subject. It also all feels very sudden. Instead of being applied on X date moving forward, the change happens now; no ifs, buts and - most importantly - an opportunity to discuss the change. And this is where I think most of the negative feedback stems from. If you are taking people on a journey with a new concept, you have to ensure they have an active role and that you give sufficient notice for people to react and respond accordingly. In the manner through which Microsoft has rolled out this change, I suspect a lot of people may be finding themselves scrabbling around to update documentation, refresh screenshots and explain to perturbed colleagues why something has suddenly changed, with no notice. Hence, frustration at having to do a lot of unplanned work manifests itself accordingly.\nUser Feedback is King We must take Microsoft at their word on this one, but it would appear that the vast majority of actual users of these systems (i.e. not just people on their soapboxes, with blogs - like me 😀) seem to have a strong opinion on the legacy terms. It\u0026rsquo;s unlikely that I, or anyone else outside of Microsoft, will have sight of this feedback. But there must have been a majority of opinion for such impactful changes to have been authorised.\nTempting Microsoft Access Users to the Power Platform Daniel Cai makes the following, great analysis within his blog post, which I feel is worth focusing on in more detail:\nI have a suspicion, that the Microsoft Power Platform team is putting their every effort in making the CDS platform to work like the Access application. I have to admit, Access was really popular at its own time, and it had its great time, more importantly it covers pretty much all above new terminologies. I wonder why don\u0026rsquo;t the Power Platform team simply revive the Access application and polish it with some modern look and add some cloud fantasy, and announce it as a huge product renovation and call it a success. I might have gone a little wild on this, but I definitely see some coherence or at least some connections there.\nThe continued existence of Microsoft Access, to run your core business applications, feels very much like heresy these days. I have long seen the Common Data Service and Power Apps as the logical evolution of Microsoft Access and as a potential mechanism for people to migrate across their \u0026ldquo;legacy\u0026rdquo; applications into the cloud. I think Microsoft Access should have been deprecated years ago; however, I suspect that it still forms a core part of many peoples businesses today. When we start thinking about the terminology changes in this context, we could argue that Microsoft is attempting to align the Common Data Service better so that existing Access developers can more easily embrace the Power Platform moving forward. 
If this turns out to be one of the end goals - an eventual goodbye to Microsoft Access and alignment of the Power Platform as the successor product to migrate to - then a terminology change is a small price to pay to help achieve this.\nThe Joys of Being a Secret SQL Fanboy I\u0026rsquo;ve worked far longer with SQL Server than I have with Dynamics CRM, 365 or the Common Data Service. And I will confess - I\u0026rsquo;m a true SQL geek, through and through. Nothing pleases me more than cuddling up with a hot drink and a detailed work item that allows me to go off and build all sorts of lovely SQL Server tables, views and stored procedures. 🤓 So while I do get that the name change feels unnatural, having been used to calling an Entity an Entity for so long myself, my conflicted feelings on this subject do have some logic behind them.\nSo time to pin my colours to the mast\u0026hellip; \u0026hellip;and say that I have no significant issues with any of these changes, bar one. Here are my reasons why:\nTable: As stated already, this is what we call the object in our backend SQL database. It accurately describes, for me and many others, what the object is and how the platform utilises it.\nColumns: See above - this is a common term in the database developer\u0026rsquo;s lexicon.\nRows: I\u0026rsquo;m 50/50 on this one. I would refer to a single SQL Server Table and CDS Entity row as a record, interchangeably. I can\u0026rsquo;t see myself using this in common parlance either (e.g. \u0026ldquo;I\u0026rsquo;m just going to bring up that customer row\u0026rdquo;). However, it is an acceptable, technical term to use in the context of working with database systems. So a tough call either way and impossible to use a single word to please everyone.\nChoice: There are no related technical terms from a SQL standpoint that we can reference here. The closest we have is perhaps table constraints, but using this to describe option sets sounds wrong. I think there may have been some compromise, therefore, when agreeing on this, but the new name does accurately reflect what the field does.\nThe change I do not like (and I\u0026rsquo;m not just saying this to prove I\u0026rsquo;m not a total fanboy 😛) is the renaming of Two Option fields to just Yes/No. Fields of this type can have any display value, so the new name potentially presents a misleading impression of what this feature can do. A more sensible choice would have been Boolean or perhaps True/False instead. People working in IT will readily understand both of these naming conventions, and they more accurately reflect how SQL Server handles this type of data.\nSo there we go - I hope you\u0026rsquo;ve found this viewpoint interesting. I\u0026rsquo;ve been through my fair share of changes when it comes to the Power Platform and Dynamics 365 over the years, so perhaps I\u0026rsquo;m now used to things getting upended every year or so. 🤣 I\u0026rsquo;d be interested in hearing others\u0026rsquo; thoughts on the naming change, and whether you think it was a good or a bad idea. Answers on a postcard below!\n","date":"2020-11-15T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/my-thoughts-on-the-common-data-service-terminology-changes/","title":"My Thoughts on the Common Data Service Terminology Changes"},{"content":"All things must come to an end - for good or for ill. What we term as the legacy web client interface within Dynamics 365 Online is no exception to this rule. 
As announced by Microsoft around this time last year, this experience is now deprecated and superseded by what we refer to as the Unified Interface (UI). When we consider the core features on offer as part of the UI - a modern experience, tailored for use across multiple device types and enhanced for performance on modern browsers - we really must see this as a positive change. However, it does come with its own set of challenges, particularly around user training and in verifying that your existing customizations to the system continue to work in the new state of play.\nMany organisations should already have succeeded in getting their systems transitioned across to the UI. However, if you\u0026rsquo;ve not transitioned across yet, then you may have recently received a communication resembling the below:\nWithout wishing to generate any panic, I must stress that the time is NOW to finish your transition across to the UI, should you find yourself in the same boat. As noted in the above, Microsoft will automatically disable the legacy interface on or around December 4th this year, thereby leaving with little or no control over this transition.\nSo how can I get moved across to the new experience? The above email indicates the three options at your disposal, namely:\nGetting Microsoft to enable the new experience for you at a date of your choosing, via the Unified Interface Scheduling Portal. As noted, you will need to have the required administrative privileges over your targeted environment(s) to get this scheduled in. Manually transition across yourself, by enabling the appropriate option in each of your environment(s). Let Microsoft automatically transition your environment on a date of their choosing, on or after December 4th 2020. For perhaps obvious reasons, I would discourage any organisation from choosing Option 3. You have no practical way of determining the date/time on which Microsoft will transition you across, making it virtually impossible to manage this change effectively. So that leaves us with Options 1 and 2 as the sensible routes to consider. I\u0026rsquo;ve had trouble working with the first option and find it to be like using a sledgehammer to crack a walnut, especially in the context of Option 2 being made available to us. As outlined in this article, we can very easily use the Power Platform Admin Center to \u0026ldquo;flick a switch\u0026rdquo; and get the new experience enabled across the board within the Product -\u0026gt; Behaviour area of the settings page:\nThen, as if by magic, the platform will automatically move everything over. Nice and easy, right? 🙂 What you will notice though is that the default app remains in the system - the application will attempt to render this using the UI, but you will receive an error similar to the below when working with it:\nMy advice is to look to recreating your existing customisations as a brand new model-driven app (as the error indicates) or customise another, relevant app to expose the functionality you need. You should also look to disable the Show legacy app to everyone, not just admins setting, to make sure people are not using it by accident.\nTransition Advice If you are at the late stage of moving across to the UI and are struggling to identify what components (if any) need reviewing as part of your migration, then here\u0026rsquo;s my list of things to consider:\nAnything involving custom code: In particular, ensure you have tested all JavaScript form functions. 
Be aware also that you may need to refactor some or all aspects of your code to start utilising things such as the form context as opposed to the Xrm.Page object.\nComponents not yet available in the Unified Interface: As with any transition, there are still certain features that are not available within the UI. Microsoft has published a full list of the affected components, and you have the option of enabling what\u0026rsquo;s called the hybrid experience to ensure you can continue working with these for the immediate future.\nEntities that are read-only in the Unified Interface: Certain entities will, for the foreseeable future, remain read-only in the UI, according to this article from Microsoft. To work with these components, in most cases, you will have to navigate into the classic interface by using the Advanced Settings button from within a model-driven app. Thankfully, a lot of these entities are deprecated, so the impact for most deployments should be minimal; however, if you still find yourself using any deprecated functionality, take steps now to migrate away from this.\nDialogs: If you are still working through the backlog of Dialogs your organisation is using, as a result of this separate deprecation notice, then it\u0026rsquo;s vital to note that you will be unable to access these by default following the transition across to the UI. Thankfully there is a work-around - the lovely people over at TKDialogs / Data8 have released a free solution that will let you continue to use your dialogs as normal until you\u0026rsquo;ve migrated away from them. A great work-around that will hopefully buy you the time you might need at this late juncture. 😅\nThis list provides just a flavour of the types of things that I\u0026rsquo;ve seen and had experience working around as part of this transition. Microsoft has published a full checklist article that you can use as a template to work through for your organisation - it\u0026rsquo;s worth thorough digestion. 😉\nChange is never easy and - as it may sometimes feel with Dynamics 365 online - it can be challenging to keep on top of all the latest features, changes and feature deprecation notices that are published. Thankfully, in the case of the UI transition, Microsoft has done a pretty good job of providing notice and allowing customers to pull together their appropriate plans to transition across. If, however, you do find yourself caught out by this blog post, then I\u0026rsquo;d still say you have plenty of time to conduct any required testing and development work this month to ensure you have moved across smoothly. Do reach out if you have any questions about this transition or need a helping hand to get you moved across successfully.\n","date":"2020-11-08T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dynamics-365-online-unified-interface-transition-take-action-now/","title":"Dynamics 365 Online Unified Interface Transition: Take Action NOW!"},{"content":"Continuous integration and build automation remain the core tenets of a successful DevOps and Application Lifecycle Management (ALM) process. Regardless of the type of software system you are working with, you should always make reasonable endeavours within both of these areas, so that you can meet the following objectives:\nStore all software artefacts within a shared code repository that provides a full history of all changes and the ability to determine a last known good configuration or version. 
Provide the mechanism to store and then quickly deploy compiled code artefacts into any target environment. Automate all aspects of the previous steps, and more besides, to reduce the amount of human intervention required. Thankfully, although a lot of this may sound tricky to implement on the face of it, we have a plethora of tools at our disposal to help speed us along. Azure Pipelines is a competent tool in this regard and, via the use of YAML build definitions, we can achieve the above objectives and more. For example, using YAML, we can very quickly put together a library of deployment templates that are suitable for use across multiple projects, and subject to versioning/change control too. In today\u0026rsquo;s post, we\u0026rsquo;ll see how we can use YAML to automate the extraction of Dynamics 365 Online / Power App Solution files.\nFor those with longstanding experience working in the Microsoft Business Applications space, Solutions have been the mainstay mechanism for defining, controlling and migrating bespoke customisations to our Common Data Service environments. Consisting of a .ZIP file, containing a multitude of different components (such as XML, DLL\u0026rsquo;s, JavaScript, image files and more), we can best think of them of as a complete \u0026ldquo;bundle\u0026rdquo; of all the changes we wish to apply into a given environment. However, in most cases, the extraction and deployment of these solution files have traditionally required manual intervention to achieve, unless you were an experienced coder with a few hours to spare. Also, the solution .ZIP file in its base form is impractical to store from a repository standpoint. Although we\u0026rsquo;ll be able to track whenever our pipeline generates a new solution file, we have no visibility over what has changed underneath the hood. Again, to do this, we\u0026rsquo;d need to look at some bespoke mechanism to extract out all of the raw components into a logical, accessible folder/file structure. Altogether, then, it has previously been rather tricky to set up the kind of DevOps/ALM solution that I indicated at the start of this post.\nWe can be thankful, therefore, that we live in more enlightened times these days; and, in particular, that we have some effective tools at our disposal to help us build something quickly, without having to resort to writing much custom code. 
Using the example YAML definition below and, courtesy of the Power Platform Build Tools, we can perform a daily extract of a solution file at 8 PM each day, expand out its contents and then push all changes into a chosen Git branch:\nname: $(TeamProject)_$(BuildDefinitionName)_$(SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r) trigger: none schedules: - cron: \u0026#34;0 20 * * *\u0026#34; displayName: Daily Build branches: include: - MyArea/MyBranch always: true jobs: - job: ExtractMySolution pool: vmImage: \u0026#39;windows-latest\u0026#39; steps: - task: PowerPlatformToolInstaller@0 inputs: DefaultVersion: true - task: PowerPlatformSetSolutionVersion@0 inputs: authenticationType: \u0026#39;PowerPlatformEnvironment\u0026#39; PowerPlatformEnvironment: \u0026#39;My Environment\u0026#39; SolutionName: \u0026#39;MySolution\u0026#39; SolutionVersionNumber: \u0026#39;1.0.0.$(Build.BuildID)\u0026#39; - task: PowerPlatformExportSolution@0 inputs: authenticationType: \u0026#39;PowerPlatformEnvironment\u0026#39; PowerPlatformEnvironment: \u0026#39;My Environment\u0026#39; SolutionName: \u0026#39;MySolution\u0026#39; SolutionOutputFile: \u0026#39;$(Build.ArtifactStagingDirectory)\\MySolution.zip\u0026#39; AsyncOperation: true MaxAsyncWaitTime: \u0026#39;60\u0026#39; - task: PowerPlatformUnpackSolution@0 inputs: SolutionInputFile: \u0026#39;$(Build.ArtifactStagingDirectory)\\MySolution.zip\u0026#39; SolutionTargetFolder: \u0026#39;$(Build.SourcesDirectory)\\JJG.MyProject\\MySolution\u0026#39; - task: CmdLine@2 inputs: script: | echo commit all changes git config user.email \u0026#34;devops@mydomain.com\u0026#34; git config user.name \u0026#34;Automatic Build\u0026#34; git checkout MyArea/MyBranch git add --all git commit -m \u0026#34;Latest solution changes.\u0026#34; echo push code to new repo git -c http.extraheader=\u0026#34;AUTHORIZATION: bearer $(System.AccessToken)\u0026#34; push origin MyArea/MyBranch Let\u0026rsquo;s talk through exactly what this YAML file is doing:\nTo begin, we must install all of the pre-requisite components needed by the Power Platform Build Tools. This is a mandatory step and will prevent any nasty errors further down the line. We then update the version of our solution, using a combination of a fixed version number and the unique build ID from Azure DevOps. Next, we perform an export of the unmanaged solution from the tenant, using an Asynchronous operation to process this. The pipeline then stores the resulting .zip file within a local directory on the build agent. Then, the pipeline unpacks the entire contents of the solution file into a new directory within the sources directory; which, in this instance, will be a direct copy of the contents of our MyArea/MyBranch branch. Finally, we run a series of Git commands via a command prompt to push all of the extracted solution contents back into our remote repository. As part of this, we define a custom user name and commit message as part of these changes. So in less than six steps, we\u0026rsquo;ve been able to extract out every single component of our solution, and automate the entire process of getting these changes back into our repository. 
And all without requiring a single manual step - nice!\nWhile this YAML file is relatively self-contained in terms of its functionality, there\u0026rsquo;s also a couple of things you\u0026rsquo;ll need to set up around this, to get it working as intended.\nThe branch in question (in this case, MyArea/MyBranch) will need to exist in your target repository before running the pipeline for the first time. Make sure the Project Collection Build Service account has been granted Contribute privilege over the repository you are working with. You can verify this by navigating to Project Settings -\u0026gt; Repositories, selecting the repository you are working with and ensuring that the Contribute privilege is to set to Allow**.** This account will typically contain the name of your DevOps organisation in the title. Ditto above, but this time for the Build Service account for your project. This account will typically have the naming format of Build Service (). Create a generic service connection, which will store the details of the Dynamics 365 / Common Data Service environment hosting your solution file. For this, you will need the URL of your instance and the username/password value of an account with sufficient privileges to extract solutions from the environment. Altogether then, by using the solution outlined in this post, or a variant thereof, developers no longer need to worry about manually extracting and checking in their solution changes each day. The MyArea/MyBranch branch can then remain open for all incoming changes, which we can then push where they need to go as part of a Pull Request further down the line. And, finally, we can assure the business that we are meeting the three objectives outlined at the start of this post\u0026hellip;and, hopefully, preventing some poor individual from completing repetitive, manual tasks at the end of each day too. 🙂\n","date":"2020-11-01T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/extracting-dynamics-365-power-apps-solutions-using-yaml-azure-devops/","title":"Extracting Dynamics 365 / Power Apps Solutions Using YAML (Azure DevOps)"},{"content":"We\u0026rsquo;ve long had the capability of tagging images onto either our Common Data Service or Dynamics 365 system or custom entity records, via a dedicated attribute type. They provide a quick and easy mechanism to associate an image with either an individual or company our organisation is working with, to give that immediate context for users in the application. The functionality can also be extended further through custom entities as well. For example, a recent requirement I dealt with involved storing the image of physical equipment items alongside a record in the application. Entity images came to the rescue here, and allowed us to satisfy the requirement\u0026hellip;but we did have some issues along the way when attempting to display these from within a canvas Power App. For example, let\u0026rsquo;s assume you are working with a form control in an app, and you want to display the image stored within a field called jjg_myentityimage via an image control type. Using the following formula will return and display this image:\nThisItem.jjg_myentityimage\nThe problem, though? Only a partial thumbnail will display, and you have no control over which portion of the image the application uses. How frustrating! 
Fortunately, there is a way to get around this, but we first need to delve a little deeper into how the image field type operates.\nAs noted in this Microsoft Docs article, there is some additional metadata associated with image fields, that you don\u0026rsquo;t usually see with standard attribute types in the Common Data Service. Microsoft exposes out information points such as the date/time stamp of when it was last updated, the full URL path of the image and it\u0026rsquo;s maximum file size. Also, and perhaps most useful for our current purposes, we have the option of indicating whether the attribute can store the full, original image or not. We can update this property via the SDK or Web API, and also through the new maker portal too:\nWith this property enabled and - provided that our image does not exceed the maximum size specified - the platform will be able to store it, in full. This is the first step we must complete to render our images in whole via any mechanism, including from within a canvas Power App.\nSecondly, we must adjust how our app is returning our image. As noted by the article mentioned earlier, developers can quickly grab the full image by altering their Web API call accordingly. The real question is, do we have a mechanism of accessing the same property from within Power Apps or, most crucially, without having to resort to a tool such as Power Automate instead? The answer is a resounding YES, and the solution is one that was only recently made available to us if you dig in deep to this blog post discussing multiple image attribute types within the Common Data Service. All we need to do is extend out our previous formula like so:\nThisItem.jjg_myentityimage.Full\nAnd, as if by magic, our images should start to render in full, without any annoying thumbnails appearing. In retrospect, this was such a simple fix when you think about it, but one that wasn\u0026rsquo;t immediately obvious when I was first reviewing this particular issue. Hopefully, this post will help you if you\u0026rsquo;ve found yourself in the same boat. 🙂\n","date":"2020-10-25T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/displaying-a-common-data-service-full-entity-image-in-a-canvas-power-app/","title":"Displaying a Common Data Service Full Entity Image in a Canvas Power App"},{"content":"I had such a fun time taking part in the Dynamics 365 October Wave 2 virtual event yesterday. It was great to get together with other community members and to \u0026ldquo;ride the wave\u0026rdquo; of all the new features Microsoft are releasing to the Business Application suite of products this month. In particular, I enjoyed attending the following sessions:\nMarc Trotman\u0026rsquo;s crash course presentation on the Power Platform, which showed attendees how to construct various elements of the Power Platform, including a Power Virtual Agent, a model-driven Power App and Power Automate flows. His blog is worth a follow. Antti Pajunen\u0026rsquo;s Introduction to Dynamics 365 Project Operations. Antti showcased his true fall to the ERP dark side in this talk. 🤣 Nevertheless, it was beneficial to see the complete, end-to-end story with Dynamics 365 Project Operations played out and find out more about the capabilities of the solution for new customers and existing Project Service Automation users. Antti produces amazing content in both of these areas, in both his blog and YouTube channel. 
Guro Faller gave us a very detailed overview of the Dynamics 365 Marketing product, highlighting some of the areas where it is now meeting or exceeding available competitor products. Guro is yet another Marketing whizz from Norway, and it was great to see some of the new capability of this product in action. Beth Burrell provided an in-depth session around what to expect when upgrading your on-premise/online Dynamics 365 instances, sharing some great pointers from her experience across multiple projects. As highlighted during the session, end-user training and early testing are essential to ensure your upgrades go as smooth as possible. And finally, Éric \u0026ldquo;Ze Power Diver\u0026rdquo; Sauvé talked us through some of the new capabilities as part of AI Builder within the Wave 2 release. Of particular interest is the latest preview capability around automated receipt scanning, which I\u0026rsquo;m sure will please those submitting expense claims regularly. Éric also has a great blog that is worth a follow, especially if you are interested in AI Builder features. A huge thank you to Tricia Sinclair, Dian Taylor, Victor Dantas and everyone involved as part of the Power Community for organising and moderating all sessions during the day. If you\u0026rsquo;ve missed out any of this content, then be sure to subscribe to the Powerthon YouTube channel, as all videos will soon be uploaded to there.\nI was fortunate to also present a session myself on the day, where I talked through and showcased some of the new capabilities coming into Power BI as part of the Wave 2 release. The whole talk will soon be viewable on the above YouTube channel, but I have also uploaded the presentation to my GitHub page as well if you are interested. If you are in a hurry, then some of the key takeaways from me as part of this release include:\nThe introduction of the TDS/SQL endpoint and DirectQuery capability when working with the Common Data Service. This will not only allow us to generate real-time reports using Power BI but also enforce the same security privileges inherited from the Common Data Service. Regrettably, the public preview for this feature was pulled by Microsoft (for more details on this, check out this excellent post from Mark Carrington), but we should hopefully see this restored shortly. The ability to call Power Automate flows from directly within a Power BI Report. With this capability added, we will be able to utilise both our canvas Power Apps and flows within the same report, thereby creating an end-to-end Power Platform solution in a single report page - nice! Although it may seem like a minor change, it is interesting to note that the on-premise gateway is now being referred to as the Power Platform Gateway as part of this release. I think this now makes the importance of this tool unquestionable if you are looking to leverage all aspects of the Power Platform alongside any on-premise resources you have. As blogged about previously, I am giddy with excitement about the new Power BI Premium Per User license offering, and the hope that this will unlock some great features at a reduced price point. You can find out more about this offer in my post covering my highlights from Microsoft Ignite 2020. If you have any questions about the Wave 2 release, then let me know in the comments below. 
Hope to see you all again at the next virtual event!\n","date":"2020-10-18T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dynamics-365-october-wave-2-event-17th-october-2020/","title":"Dynamics 365 October Wave 2 Event - 17th October 2020"},{"content":"No two software deployments are ever the same. As IT professionals, we will often need to tailor our solutions, to ensure they operate within separate environments, such as development, test, production and anything else in between. From a Power Platform / Dynamics 365 standpoint, there has been some long-standing capability here to help address some of these needs, via features such as secure/unsecure plugin configuration. However, the main drawback of these features has been their limited applicability; more correctly, we can only use them if we are leveraging plug-ins as part of our solution. If, for example, we wanted to tailor our deployments and utilise different values for our canvas Power Apps, Power Automate flows or any other solution, we would have to rely on a workaround, using a new custom entity or similar. Having to factor in the development time and effort to support this can be debilitating to the progress of a project.\nIt was pleasing, therefore, to see Microsoft introduce Environment variables into public preview late last year. This announcement addressed a long-standing community ask, as well as helping to expand out the Application Lifecycle Management (ALM) capabilities of the platform. And, now that the feature is generally available, we can happily utilise them as part of our live, production deployments. But how do you go about setting them up? And how do you then utilise them using canvas Power Apps and Power Automate flows? Well, it\u0026rsquo;s funny that you should ask. 😀 This just so happens to be the focus of today\u0026rsquo;s blog post. Scroll on to find out more!\nCreating Environment Variables Rather than repeat what\u0026rsquo;s available out there already, I\u0026rsquo;d advise taking a look at this great Microsoft Docs article, which provides full instructions on how to create them. Suffice it to say, environment variables can:\nBe included as part of our solutions. Support a variety of different data types, including JSON objects. Be assigned a default value, which the platform will use if we specify no current value.\nSetting the Current Value Each Environment Variable must have a current value, which we can set in one of two ways:\nVia the Maker Portal: If you navigate to the Environment variable within your solution, you will have the option at the bottom of the pane to supply this value. I would recommend that, if you are exporting out to another environment, you remove this from the solution first; this will retain the value in your current system but prevent it from being automatically filled in when you import into your target environment.\nOn Solution Import: Using the new solution import experience (which I touched upon in last week\u0026rsquo;s blog post), we get the option to set the current values when importing the solution for the very first time. In the example below, three Environment variables need to be specified as part of the solution import:\nFrom there, you can look to leverage them from within your canvas apps and flows. 
Let\u0026rsquo;s take a look first at how to use them for your apps.\nUsing in Canvas Power Apps First, make sure that you have added the following two entities into your canvas app:\nNow, you can then use the following code snippet below to grab the current value of your chosen Environment variable. In this example, we are retrieving the current value of an environment variable with the schema name jjg_myvariable:\nSet(myVariable, LookUp( \u0026#39;Environment Variable Values\u0026#39;, \u0026#39;Environment Variable Definition\u0026#39;.\u0026#39;Schema Name\u0026#39; = \u0026#34;jjg_myvariable\u0026#34;, Value) ); A suggested way of calling this, if your value needs to remain static across your app, is as part of the OnStart event handler. But with the above snippet, it is conceivable to call this anywhere in your app.\nUsing in Power Automate Flows Update 22/11/2020: There is a better approach towards achieving the below, which improves performance and makes your flows more readable. Please check out this follow-on post, which outlines the alternative approach in detail.\nThis time, there\u0026rsquo;s a bit more setup involved. First, we create a string variable which will store our environment variable value:\nNext, we then need to retrieve the details of the Environment Definition record, using a list records step similar to the below:\nList records will always return multiple results but, in this circumstance, there should only ever be one Environment variable definition record that comes back. Also, we have no easy way of knowing what the unique ID of this is, so running a query based on its schema name is the most streamlined solution on offer. Taken this into account, therefore, we must then wrap around our next step - getting the current Environment variable value - within an Apply to each action step:\nIn this case, within our filter query, we are supplying the ID of the Environment variable definition record from our previous step.\nNext, and finally, we use another Apply to each to iterate through our result set and assign the Environment variable value to the variable declared earlier. Once again, you should only ever have one record return here, so there\u0026rsquo;s little risk of us setting an incorrect value here:\nFrom there, you can then build out your flow and utilise your environment variable value in any way you see fit.\nEnvironment variables provide a useful and, perhaps, long-absent capability to help us tailor our Power Apps / Dynamics 365 online deployments. They are also sufficiently \u0026ldquo;modern\u0026rdquo;, in the sense that we can use them straightforwardly across a variety of tools within the Power Platform. I hope this post has been useful in explaining what they can do and, more importantly, how you can start using them as part of your next project.\n","date":"2020-10-11T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/working-with-environment-variables-in-canvas-power-apps-and-power-automate-flows/","title":"Working with Environment Variables in Canvas Power Apps and Power Automate Flows"},{"content":"When working with Power Automate flows or canvas Power Apps within the confines of solutions, we have typically been faced with a few issues whenever we wanted to start factoring in more complex scenarios. For example, suppose our flows/apps leverage the Common Data Service (current environment) connector only. 
In this scenario, we\u0026rsquo;d have little difficulty moving these components between environments via solutions and, thus, have a proper Application Lifecycle Management (ALM) process in place. However, where things begin to fall over is when we start to introduce alternate connectors into the equation - such as, for example, SQL Server, Office 365 Outlook or Send Mail. Everything would appear to export a-OK, but we\u0026rsquo;d then start to see nasty errors like this one crop up:\nIssues like this made it almost impossible for developers to neatly move specific Power Platform components across environments in a simple and, more crucially, automated manner. Instead, we had to fall back on an alternate import/export package option, which achieves our basic needs, but with some major degradation in functionality.\nFortunately, Microsoft has been listening to many of the concerns highlighted by the above and, as announced in a recent blog post, they\u0026rsquo;ve made some new capability available (in preview) to help work around the above errors. Specifically, we now have the option of being able to add-in the missing Connection Reference components indicated in the screenshot above, by selecting the appropriate option when working with our solution in the maker portal:\nHowever, the key thing to remember with this new feature is that we must then import the resulting solution files using the new maker portal. If we attempt to import our updated solution using the classic experience, you may then encounter issues where, for example, your Power Automate flows are switched off after import. You can resolve this by navigating to the Flow in question, editing it and specifying the correct connection profiles to use. But, frankly, this is a hassle we could all do without. 🙂 So instead, we can use the new solution import experience within the maker portal, which now has a dedicated screen to allow us to specify our connection sources on import:\nUsing the dropdown boxes, we can then select an existing connection or create a new one to use during the import. By doing this, you will then ensure the canvas apps and flows require no additional intervention post-import; everything should \u0026ldquo;just work\u0026rdquo;. As a side-note, the new experience also has a screen for entering environment variables - an incredibly handy feature you can use to alter the behaviour of your flow and apps, depending on your targeted environment.\nIn short, the new updates to the maker portal make it even more of a necessity to use this experience over the classic one. This is particularly true if you are looking to leverage the latest innovations Microsoft continually roll out, as we see the classic experience phased out gradually. Keep in mind though that the new Connections Reference functionality is still in preview and, therefore, may not be suitable for production use yet. There is also no current capability to automate this step if, for example, you are importing your solutions via an Azure DevOps Build pipeline. 
Hopefully, this functionality will be added in future, to ensure that the ALM process for delivering Power Platform solutions is as top-notch as possible.\n","date":"2020-10-04T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/how-to-resolve-connection-reference-missing-from-solution-errors-dynamics-365-online-power-apps/","title":"How to Resolve Connection Reference Missing From Solution Errors (Dynamics 365 Online/Power Apps)"},{"content":"It was great to have an opportunity to attend (and to get involved with) the virtual Microsoft Ignite 2020 event. Typically, large-scale Microsoft events such as these would necessitate stateside travel to get involved in. But, thanks in no small part to the ongoing COVID-19 crisis, Microsoft delivered Ignite as an entirely virtual event for the very first time. With a variety of sessions and announcements covering Microsoft 365, Azure and Power Platform, the event provides a great diving board into some of the new and ongoing capabilities that Microsoft are releasing for their cloud products. Having watched many of the sessions and trawled through the key announcements, I wanted to summarise my top 5 favourite things announced at the event. This list may be biased somewhat towards the Dynamics 365 Online and Power Platform side of things, to please excuse me if I\u0026rsquo;ve ignored a particularly great new Microsoft 365 feature. 🙂 Anyway, enough chitter chatter - let\u0026rsquo;s dive in!\nLogic Apps Having spent a lot of time recently working with them, I was keen to see what changes (if any) the Azure product team announced relating to Logic Apps. Microsoft has not disappointed in this regard and, via this detailed post from Jon Fancey, there are two particular changes worth mentioning:\nNew Workflow Designer: As indicated in this image, Microsoft will soon launch a new visual design experience for building out Logic Apps. The key touted benefits of this include providing a more modern experience and greater ease when modifying complex Logic Apps, that contain multiple steps, conditions and branches. Anything that can help to address the latter is incredibly welcome, so I\u0026rsquo;m looking forward to playing around with this as Microsoft starts to roll out the new experience. Stateless Workflows: In public preview as of now, these new types of Logic Apps will help to address scenarios where you require faster execution time for your particular workflows, but not necessarily all of the logging capability available via a traditional, stateful Logic App. The development experience for these new types of Logic Apps has been integrated rather nicely alongside Visual Studio Code, and they also leverage the updated designer experience mentioned earlier. Another essential thing to highlight with them is that they are not intended for situations where you wish to trigger your workflow, based on an action in an external system (e.g. on Create of a new record in the Common Data Service). Instead, developers must use one of the currently available built-in triggers to execute a stateless workflow. This limitation may, as a consequence, prevent them from being suitable for specific scenarios. You can find out more about them over on the Microsoft Docs site. Paying attention to what happens with Logic Apps is not just important if you are solely Azure minded. Power Platform and, in particular, Power Automate fans should be taking notes, given that Logic Apps is one of the underlying tools powering them. 
These two changes, in particular, could have a significant bearing on the functionality within Power Automate as we move into 2021. Almost certainly, I would anticipate the Power Automate team to gradually replace the design experience to mirror the new experience presented in Logic Apps. We may also see some additional functionality exposed to support stateless Power Automate flows, with the same capabilities listed above. All I can say for now is, watch this space. 😉\nPower Automate Desktop Speaking of Power Automate flows, it would be remiss not to mention the launch of a new solution to assist in the development of robotic process automation (RPA) flows. In this blog post from Stephen Siciliano, we learn about the new Microsoft Power Automate Desktop application, which will allow us to build out RPA flows from within Windows 10, as opposed to in-cloud. This route may be more beneficial from a design experience, depending on the type of workflow you are trying to automate. I haven\u0026rsquo;t had much opportunity to work with the RPA capabilities within Power Automate, but the stuff on offer here sounds impressive. For example, they can allow for fully attended or unattended automation of tasks such as filling out a web form, copying files from one directory to another, querying a database or even sending an email. You can get started with this functionality by downloading the public preview of the desktop application.\nPower BI Premium Per User Plans This next one, I am REALLY looking forward to. For the longest time, some of the most appealing characteristics available within Power BI online have been locked away behind the prohibitively expensive Power BI Premium offering. The most attractive feature on offer within this version of the product has (for me at least) been paginated reports. For those coming from a SQL Server Reporting Services (SSRS) background, they provide a streamlined mechanism to migrate across RDL files built using this solution and to utilise them alongside some of the more modern capabilities within Power BI. Having such a high price entry point to unlock paginated reports has been a longstanding bugbear and frustration for many of the smaller clients I work with.\nThe announcement on Tuesday, therefore, of some the new capabilities Microsoft will be releasing relating to Power BI was rounded off with a lovely cherry on the cake; namely, the introduction of a new Power Bi Premium Per User license. As outlined in the announcement blog post:\nWe are also very excited to announce Premium Per User – which provides capabilities of Power BI Premium, now on a per-user license model as a new option for customers. This addresses a key customer and community ask – to provide a lower cost entry price point to get access to Premium capabilities. Premium Per User will be available at no cost during public preview. Premium per user will be uniquely affordable and highly competitive among individual user offerings in the industry.\nIt is early days yet on all of this. We don\u0026rsquo;t even know, pricing-wise, how it will compare - all I know is that it will be \u0026ldquo;very competitive\u0026rdquo; compared with a Professional license. But, in terms of the key things we know for now, from this post and a follow-up Q\u0026amp;A article:\nPremium per user plans will include paginated reports as standard. 😁 Total size capacity for models will be 100GB. There is broad feature parity between the per user and existing capacity model. 
The prominent absences within the new offering include multi-geo support, access to the on-premise version of Power BI, unlimited distribution and \u0026ldquo;Bring Your Own Key\u0026rdquo; (BYOK) encryption key capability. In my book, none of these would appear to be significant limitations or degradations in functionality. So all-in-all, assuming the price point is as stated, adopting Power BI Premium becomes a far easier conversation to have. The public preview for paginated reports will launch in November so, if you are interested in playing around with it yourself, sign up to the preview using this form.\nPower BI Deployment Pipelines Anyone who has spent time trying to integrate a Power BI solution alongside a Continuous Deployment / Continuous Integration (CD/CI) process will share my frustration at how complicated this process can be. Microsoft, recognising this, launched some new preview functionality a few months back that allows developers to manage the deployment of their solutions across multiple environments efficiently. These capabilities have now gone into general availability, as announced by Nimrod Shalit on this blog post. This directly addresses a longstanding ask within the Power BI Community and sets the stage for further innovations in the months ahead. In particular, as Nimrod himself notes, integration with tools such as Azure DevOps is currently missing; this is a feature gap Microsoft will address in future. For now, I would urge you to familiarise yourself with Power BI deployment pipelines, particularly if you are struggling to set up a reliable deployment process for your Power BI solution.\nMicrosoft Teams and Project Oakdale This time around, it was evident from the outset how vital Microsoft Teams would be when it came to Ignite. As the COVID-19 crisis continues unabated, tools such as this become increasingly important for remote teams to work together in a connected way. Ensuring, therefore, that applications like Teams integrate neatly alongside the other tools we use each day and support the ability for small and large teams to collaborate is vital. Thankfully, Microsoft recognised this reality early on and has been investing heavily in Teams since the pandemic began, culminating in a raft of new feature announcements at Ignite. Some of the Teams-specific functionality announced, which I\u0026rsquo;m looking forward to, includes:\nNew Together Mode capabilities that may help to address some of the video-call fatigue that a lot of us may be feeling at the moment.\nBreakout rooms for meetings, allowing organisers to set up separate \u0026ldquo;rooms\u0026rdquo; for individuals to work within, under the banner of a unified event. I can foresee this being particularly useful for conferences or training courses that will be delivered virtually for the next six months and more. This new feature should start rolling out into Teams in the next month or so.\nNew meeting recap capabilities, which will consolidate all content from a meeting - its recording, transcript and any associated notes - into a single view, for easy access in the future. So no more trying to claim something was said at a meeting that wasn\u0026rsquo;t, as you\u0026rsquo;ll be found out very quickly. 🤣\nAside from these major announcements, it would be remiss of me not to mention the importance of the Power Platform alongside this. 
Regular followers of this blog should already be aware of Project Oakdale, a new low-code application development tool, integrated alongside Microsoft Teams and utilising the core capabilities within the Common Data Service. At Microsoft Ignite, we were not only treated to a full demonstration of this capability from Charles Lamanna but also given access to its public preview. To summarise, the functionality on offer is very much a \u0026ldquo;lite\u0026rdquo; version of the Common Data Service, with full support to upgrade into the \u0026ldquo;full\u0026rdquo; version, if you need to. So, for example, capabilities such as API support or the ability to use Project Oakdale apps outside of Teams will not be available. To get started with Project Oakdale, there are a few steps that you need to follow:\nFirst, make sure that you have a qualifying Microsoft 365 Subscription and be aware of some of the general restrictions of the service.\nWithin Microsoft Teams, install the Power Apps app.\nOnce installed, create your first app. At this point, Microsoft Teams will provision a Project Oakdale database for you, if one does not already exist. Administrators can view and manage this from within the Power Platform Admin Center.\nTeams will then greet you with a designer experience similar to the online Power Apps Studio. From within here, you can model out your various Project Oakdale entities, by creating new entities, modifying them, adding new fields and creating relationships, all from within the Teams application. The experience here is very similar to how you would customise a Common Data Service entity, and you should have little difficulty getting to grips with it. All in all, there is some nicely integrated functionality on offer here. However, take care to avoid any potential governance issues and fully understand the impact that deploying such functionality will have when it sits alongside your existing Common Data Service deployments.\nWhat new capabilities announced at Microsoft Ignite 2020 are you most excited to start playing with? Let me know in the comments below!\n","date":"2020-09-27T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/microsoft-ignite-2020-review-my-top-5-announcements/","title":"Microsoft Ignite 2020 Review: My Top 5 Announcements"},{"content":"The Power Apps Solution Checker provides developers with a great mechanism to verify the quality of custom code targeting either Dynamics 365 Online or the Common Data Service. We can use the recommendations it produces to help write better, more optimised code that will better benefit our deployments and the individuals working with the apps we\u0026rsquo;ve built. Typically, the recommendations it produces will be informative enough to get you on your way; Microsoft will provide a summary of the problem and a link to the relevant Docs article that contains links/code samples for you to adapt accordingly. However, this is not always the case, and you may need to scratch your head a little harder to get to the root of the problem. This happened to me recently when I was dealing with the following recommendation:\nThe Docs article link for this recommendation doesn\u0026rsquo;t give us much to go on when figuring out what to do to get our code fixed. But we must still, somehow, address the problem it raises - the idea of potentially conflicting transactions and generic SQL errors is enough, I\u0026rsquo;d warrant, to scare even the most hardened Dynamics 365 / Power Platform developer. 
🙂 So let\u0026rsquo;s take a look at how I resolved the issue within this specific scenario. Hopefully, it might help you if you find yourself facing the same problem.\nAs this is a code related issues, let\u0026rsquo;s, first of all, take a look at the offending blocks of code in question. First of all, we had the following method that would call an external OData endpoint. This formed part of a Custom Data Provider for a virtual entity that would require OAuth 2.0 authentication; something that, regrettably, is not supported by the default OData provider Microsoft provides us with:\nprivate async Task\u0026lt;T\u0026gt; GetRecordById\u0026lt;T\u0026gt;(ITracingService tracer, string id, string entity, string token) { //Generate the OData endpoint request string oDataURL = $\u0026#34;{this.aadConfig.ODataURL}/{entity}({id})\u0026#34;; tracer.Trace(oDataURL); HttpRequestMessage oDataReq = new HttpRequestMessage(HttpMethod.Get, new Uri(oDataURL)); oDataReq.Headers.Authorization = new AuthenticationHeaderValue(\u0026#34;Bearer\u0026#34;, token); using (HttpClient client = HttpHelper.GetHttpClient()) { try { HttpResponseMessage oDataResponse = await client.SendAsync(oDataReq); if (!oDataResponse.IsSuccessStatusCode) { tracer.Trace(oDataResponse.StatusCode.ToString()); throw new GenericDataAccessException($\u0026#34;A problem occurred when retrieving the {entity} data.\u0026#34;); } //Retrieve data from the endpoint using the bearer token. using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(oDataResponse.Content.ReadAsStringAsync().Result))) { DataContractJsonSerializerSettings settings = new DataContractJsonSerializerSettings() { UseSimpleDictionaryFormat = true }; DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(T), settings); return (T)ser.ReadObject(stream); } } catch (Exception e) { tracer.Trace(e.Message); throw new GenericDataAccessException($\u0026#34;A problem occurred when retrieving the {entity} data.\u0026#34;); } } } Note the following with this method:\nIt has been set to execute asynchronously as a task, as indicated by the async and Task keyword. The core HTTP call is wrapped within a using statement and executed as a HttpClient request. The method is generic in terms of its return type, therefore allowing us to retrieve and then cast out to a variety of different classes if required. This method was then called at relevant points elsewhere in the code using the snippet below:\nvar getEntityByIdTask = Task.Run(async () =\u0026gt; await GetRecordById\u0026lt;Entity\u0026gt;(tracer, target.Id.ToString(), \u0026#34;Entity\u0026#34;, getAccessToken.Result.access_token)); Task.WaitAll(getEntityByIdTask); var result = getEntityByIdTask.Result; tracer.Trace($\u0026#34;Entity found: {result.ID}\u0026#34;); Here, and as mentioned earlier, we are calling the asynchronous task method to return us a result set of type Entity, which represents our result from the OData endpoint.\nGoing back then to the recommendation raised earlier, we can best summarise the main issues with the \u0026ldquo;as-is\u0026rdquo; code as follows:\nThe method will always run as a separate, asynchronous call within the current database transaction. As well as being the cause of a solution checker recommendation, this could also lead to longer-running transactions and us unintentionally hitting the two-minute execution limit for sandbox, if our outbound call takes far longer to execute than expected. 
The HttpClient class supports asynchronous execution only; there is no way of forcing it to execute synchronously, meaning that we must instead resort to using other class types, such as HttpWebRequest and HttpWebResponse The code snippet itself is expecting the method to run asynchronously; this needs to be adjusted accordingly to match any changes to the GetRecordById method. With all this in mind, we can look to tweak our method\u0026hellip;\nprivate T GetRecordById\u0026lt;T\u0026gt;(ITracingService tracer, string id, string entity, string token) { //Generate the OData endpoint request try { string oDataURL = $\u0026#34;{this.aadConfig.ODataURL}/{entity}({id})\u0026#34;; tracer.Trace(oDataURL); HttpWebRequest req = (HttpWebRequest)WebRequest.Create(new Uri(oDataURL)); req.Method = \u0026#34;GET\u0026#34;; req.KeepAlive = false; req.Headers.Add(\u0026#34;Authorization\u0026#34;, \u0026#34;Bearer \u0026#34; + token); var res = (HttpWebResponse)req.GetResponse(); if (res.StatusCode != HttpStatusCode.OK) { tracer.Trace(res.StatusCode.ToString()); throw new GenericDataAccessException($\u0026#34;A problem occurred when retrieving the {entity} data.\u0026#34;); } using (var stream = new StreamReader(res.GetResponseStream(), Encoding.UTF8)) { DataContractJsonSerializerSettings settings = new DataContractJsonSerializerSettings { UseSimpleDictionaryFormat = true }; DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(T), settings); return (T)ser.ReadObject(stream.BaseStream); } } catch (Exception e) { tracer.Trace(e.Message); throw new GenericDataAccessException($\u0026#34;A problem occurred when retrieving the {entity} data.\u0026#34;); } } \u0026hellip;and the related snippet that calls it\u0026hellip;\nvar entity = GetRecordById\u0026lt;Entity\u0026gt;(tracer, target.Id.ToString(), \u0026#34;Entity\u0026#34;, getAccessToken.Result.access_token); tracer.Trace($\u0026#34;Entity found: {entity.ID}\u0026#34;); Now, it is worth highlighting that Microsoft advises not to use the HttpWebRequest class for new development work. However, I have struggled to find an alternative solution that fixes this problem. Answers on a postcode if you think there\u0026rsquo;s a better way of doing this, and I will happily update this post and provide full credit for any solution that gets around this; but otherwise, looks like we are stuck with using this!\nThe Solution Checker is a fantastic tool to have in your arsenal to provide you with an automated, bespoke way of flagging common issues with your Dynamics 365 / Power Apps solution. Granted, it will never take the place of a formal code review alongside a senior developer. Still, it can be a potential lifesaver if you find yourself performing solo development work targeting the application. Give it a go if you haven\u0026rsquo;t already and, hopefully, if you see yourself getting the same recommendation outlined in this post, you now know what to do to get your code sorted.\n","date":"2020-09-20T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/avoiding-parallel-execution-within-power-apps-common-data-service-plug-ins/","title":"Avoiding Parallel Execution within Power Apps / Common Data Service Plug-ins"},{"content":"Welcome to the final post in my series focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. In today\u0026rsquo;s post, I wanted to consolidate all of the content from the series into a single, concise post for ease of access. 
I\u0026rsquo;ll also provide some general advice and tips that I hope will come in useful for when you sit the exam.\nThis series has aimed to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nThe MB-400 exam is split into several different areas, based on the specification found here. For each applicable subject, I have linked below to the relevant blog/video content from the series.\nCreate a Technical Design (10-15%) Validate requirements and design technical architecture Skills Measured design and validate technical architecture design authentication and authorization strategy determine whether requirements can be met with out-of-the-box functionality determine when to use Logic Apps versus Power Automate flows determine when to use serverless computing vs. plug-ins determine when to build a virtual entity data source provider vs. when to use connectors Create a data model Skills Measured design a data model Blog Posts Exam MB-400 Revision Notes: Creating a Technical Design with the Power Platform\nUseful Resources Common Data Model Overview What is Azure Logic Apps? Get Started with Power Automate Use plug-ins to extend business processes Get started with virtual entities Custom connectors for canvas apps Configure Common Data Service (15-20%) Configure security to support development Skills Measured troubleshoot operational security issues create or update security roles and field-level security profiles Implement entities and fields Skills Measured configure entities configure fields configure relationships Create and maintain solutions Skills Measured configure solutions import and export solutions manage solution dependencies Blog Posts Exam MB-400 Revision Notes: Configuring the Common Data Service\nVideos MB-400 Exam Prep: Creating Entities, Fields and Relationships MB-400 Exam Prep: Working with Solutions MB-400 Exam Prep: Security Roles \u0026amp; Field Security Profiles Useful Resources Entity overview Fields overview Entity relationships overview Solutions overview Create and Configure Power Apps (10-15%) Create model-driven apps Skills Measured configure a model-driven app configure forms configure views configure visualizations Create Canvas Apps Skills Measured configure a Canvas App develop complex expressions Blog Posts Exam MB-400 Revision Notes: Working with Model-Driven Apps\nExam MB-400 Revision Notes: Working with Canvas Apps\nVideos MB-400 Exam Prep: Building a Model-Driven App https://youtu.be/EEBTENhcQ-0\nMB-400 Exam Prep: Creating a Canvas App Useful Resources What are model-driven apps in Power Apps? Create and design model-driven app forms Understand model-driven app views Track your progress with dashboards and charts What are canvas apps in Power Apps? 
Configure business process automation (10-15%) Configure Power Automate Skills Measured configure a Flow configure actions to use Common Data Service connectors develop complex expressions Implement processes Skills Measured create and configure business process flows create and configure business rule Blog Posts Exam MB-400 Revision Notes: Using Power Automate Flows\nExam MB-400 Revision Notes: Mapping a Business Process with Business Process Flows\nExam MB-400 Revision Notes: Implementing Business Rules\nVideos MB-400 Exam Prep: Introduction to Power Automate Flows MB-400 Exam Prep: Creating a Business Process Flow MB-400 Exam Prep: Working with Business Rules Useful Resources Create a flow that uses the Common Data Service Use expressions in conditions to check multiple values Business process flows overview Apply business logic in Common Data Service Extend the user experience (15-20%) Apply business logic using client scripting Skills Measured configure supporting components create JavaScript or Typescript code register an event handler use the Web API from client scripting Create a Power Apps Component Framework (PCF) component Skills Measured initialize a new PCF component configure a PCF component manifest implement the component interfaces package, deploy, and consume the component use Web API device capabilities and other component framework services Create a command button function Skills Measured create the command function design command button triggers, rules, and actions edit the command bar using the Ribbon Workbench modify the form JavaScript library dependencies Blog Posts Exam MB-400 Revision Notes: Implementing Client-Side Scripting on Model Driven Power Apps\nExam MB-400 Revision Notes: Introduction to Power Apps Component Framework (PCF) Controls\nExam MB-400 Revision Notes: Working with Command Buttons\nVideos MB-400 Exam Prep: Deploying a Basic JavaScript Form Function MB-400 Exam Prep: Setting up your environment for Power Apps Component Framework Control Development MB-400 Exam Prep: Developing, Testing and Deploying a Power Apps Component Framework (PCF) Control MB-400 Exam Prep: Creating Command Buttons Using the Ribbon Workbench Useful Resources Apply business logic using client scripting in model-driven apps using JavaScript Client-side JavaScript using Web API in model-driven apps Power Apps component framework overview Customize commands and the ribbon Extend the platform (15-20%) Create a plug-in Skills Measured debug and troubleshoot a plug-in develop a plug-in use the global Discovery Service endpoint optimize plug-ins for performance register custom assemblies by using the Plug-in Registration Tool create custom actions Configure custom connectors for Power Apps and Flow Skills Measured create a definition for the API configure API security use policy templates Use platform APIs Skills Measured interact with data and processes using the Web API optimize for performance, concurrency, transactions, and batching perform discovery using the Web API perform entity metadata operations with the Web API use OAuth with the platform APIs Blog Posts Exam MB-400 Revision Notes: Building, Deploying \u0026amp; Debugging Plug-ins using C#\nExam MB-400 Revision Notes: Building a Custom Connector for Power Apps \u0026amp; Power Automate\nExam MB-400 Revision Notes: Working with the Dynamics 365 Web API\nVideos MB-400 Exam Prep: Building a C# Plug-in Using Visual Studio 2019 MB-400 Exam Prep: Deploying a C# Plug-in Using the Plug-in Registration Tool MB-400 Exam Prep: 
Debugging a C# Plug-in Using Trace Logging MB-400 Exam Prep: Debugging a C# Plug-in Using the Plug-in Registration Tool MB-400 Exam Prep: Building a Custom Connector Using Azure Cognitive Services MB-400 Exam Prep: Authenticating to the Dynamics 365 Web API MB-400 Exam Prep: Working with the Dynamics 365 Discovery URL MB-400 Exam Prep: Retrieving Entity Metadata using the Web API MB-400 Exam Prep: Performing Batch Operations using the Web API Useful Resources Use plug-ins to extend business processes Tutorial: Write and register a plug-in Create your own actions Custom Connectors Use the Common Data Service Web API Develop Integrations (10-15%) Publish and consume events Skills Measured publish an event by using the API publish an event by using the Plug-in Registration Tool register a webhook create an Azure event listener application Implement data synchronization Skills Measured configure and use entity change tracking configure the data export service to integrate with Azure SQL Database create and use alternate keys Blog Posts Exam MB-400 Revision Notes: Publishing and Consuming Events\nExam MB-400 Revision Notes: Implementing Data Synchronisation with the Data Export Service\nVideos MB-400 Exam Prep: Posting Dynamics 365 Events to Azure Service Bus MB-400 Exam Prep: Registering \u0026amp; Consuming a Dynamics 365 Webhook MB-400 Exam Prep: Deploying the Dynamics 365 Data Export Service Useful Content Azure Integration What is Azure Service Bus? Configure Azure Integration with Common Data Service Data Synchronization Data export service General Exam Preparation Tips Hands-on preparation is essential if you wish to do well in the exam. You should ideally set up a trial environment that you can use to experiment with the core functionality within Dynamics 365 / the Power Platform. You can watch this great video from Microsoft\u0026rsquo;s Chris Huntingford, where he will show you how to deploy an extended trial of Dynamics 365 Online and Microsoft 365 E5. These subscriptions will give you everything you need. Keep abreast of the latest Microsoft Docs and Learning Path materials that are related to this exam, as the platform is continually changing all of the time. Due to the ongoing COVID-19 situation, you will more than likely have to sit your exam using the online proctored experience. Take some time to familiarise yourself with the process involved here, and perform a system test well in advance of your exam date - the last thing you need on the day of the exam is to stress out due to a system or access issue. At the time of writing this post, Microsoft has announced the deprecation of the MB-400 exam in January 2021. I\u0026rsquo;ve discussed these changes in-depth previously on the blog and what it means if you\u0026rsquo;ve got an exam booked over the next few months. I would still advise that you sit this exam while it is available, as it will count towards any certification pathway that you are working towards. Additionally, there will be more content (and I don\u0026rsquo;t necessarily mean my own here 🙂 ) available covering this exam, which will provide better aid as part of your revision. I hope that you\u0026rsquo;ve found this series useful. 
Good luck when sitting the exam and let me know how you got on!\n","date":"2020-09-13T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-series-roundup/","title":"Exam MB-400 Revision Notes: Series Roundup"},{"content":"Welcome to the fifteenth and penultimate post in my series focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. Previously, we examined what events are and how they can provide a streamlined mechanism for executing complex business logic outside of Dynamics 365 online / the Common Data Service. In today\u0026rsquo;s post, we conclude our review of the Develop Integrations area of the exam, by taking a look at our final exam topic - Implement data synchronisation. For this, Microsoft expects candidates to demonstrate knowledge of the following:\nImplement data synchronization\nconfigure and use entity change tracking configure the data export service to integrate with Azure SQL Database create and use alternate keys For a long time now, Microsoft has provided tools that can perform simple or complex integrations involving data that resides within the Common Data Service database. In some situations, the out of the box capabilities provided to us can negate the need to implement solutions to track data changes, lookup entity records based on alternative identifiers or create replica versions of our data within a new environment. In discussing the topics listed above today, we will see how these features can achieve these objectives and more.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam. As we will also be covering topics such as Microsoft Azure, Azure SQL and Azure Key Vault as well, having some awareness of how to work with these tools is recommended as well.\nKeeping Cloud Systems Synchronised: Key Challenges and How the Data Export Service Can Help Although the great benefit of adopting a business applications platform like Dynamics 365 or the Power Platform is that you can significantly simplify the types of technical integrations you need to perform, there will always be situations that necessitate some form of a bespoke integration. No organisation is in the \u0026ldquo;perfect\u0026rdquo; position in having all of their core applications hosted with the same vendor or even within the cloud. As a consequence, we then have to build highly specific integrations, that may involve standing up a holding/staging database. This option can introduce several benefits, by allowing us to run complex Structured Query Language (SQL) Data Manipulation Language (DML) statements targeting our data or allow us to audit changes as they are processed. However, the main drawback of this approach is that we must not only spend time setting up this database, but also write secure and scalable code to pull data from our Dynamics 365 / Common Data Service environment.\nTo help overcome these challenges, Microsoft makes available the Data Export Service Managed Solution, which allows us to straightforwardly synchronise our Dynamics 365 Online / Common Data Service database with a self-hosted SQL database on Azure. 
All of this is configurable through the interface of the application, reducing the need to leverage a heap of custom code to accomplish what may be a relatively simple integration. Using the tool, developers can:\nSynchronise a variety of entities, using custom schema/table prefixes (if needed). Export out data for native many-to-many (N:N) relationships. Provide near-real-time synchronisation capabilities that process delta changes within 5-10 minute intervals. Scale and control the performance of the integration directly from within Azure. Access numerous options for inspecting synchronisation errors and scheduling records for re-synchronisation. Incorporate any bespoke application alongside the Data Export Service by integrating with the exposed API endpoint. The Data Export Service can also have other uses outside of integrations. It could be that you need to assure the business that you are backing up core business data regularly to a separate geographic region. Or, it could be that you wish to leverage complex Transact-SQL (T-SQL) queries as part of your reporting solution that are impossible to replicate via FetchXML. In these contexts, the Data Export Service becomes an attractive option to consider.\nTo get started with using the Data Export Service, you need to ensure you have a few things set up already within Microsoft Azure. You can consult the relevant Microsoft Docs article to find out more. To summarise though, you will need to have a valid Azure SQL / SQL Server instance on a Virtual Machine that is ready to connect to, as well as appropriate access at the subscription level to create new resources. You will also need to ensure you have enabled change tracking capability for each entity you wish to synchronise out with the service, by navigating into the classic interface and ticking the appropriate option on the Entity configuration page:\nChange Tracking is most relevant in the context of the Data Export Service and, for the exam itself, the above is likely all you need to cover as part of your revision. However, it may be useful to review how developers can use the Web API to retrieve changes for an entity enabled for Change Tracking and also leverage it using the SDK and, specifically, via the RetrieveEntityChangesRequest class.
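To make that last point a little more concrete, here is a minimal sketch of how a change tracking query might look via the SDK. Treat it as illustrative only - the entity, column names and token handling are assumptions, and you would substitute in your own IOrganizationService instance ("service") and your own storage for the data token:

//A minimal sketch, assuming an IOrganizationService instance ("service") and a previously
//stored data token ("lastDataToken"); entity and column names are illustrative only.
var request = new RetrieveEntityChangesRequest
{
    EntityName = "account",
    Columns = new ColumnSet("name", "accountnumber"),
    PageInfo = new PagingInfo { Count = 5000, PageNumber = 1 },
    DataVersion = lastDataToken //Null or empty on the very first synchronisation run
};

var response = (RetrieveEntityChangesResponse)service.Execute(request);

foreach (IChangedItem change in response.EntityChanges.Changes)
{
    if (change.Type == ChangeType.NewOrUpdated)
    {
        Entity changed = ((NewOrUpdatedItem)change).NewOrUpdatedEntity;
        //Create or update the corresponding row in the external store...
    }
    else if (change.Type == ChangeType.RemoveOrDeleted)
    {
        EntityReference removed = ((RemovedOrDeletedItem)change).RemovedItem;
        //Remove the corresponding row from the external store...
    }
}

//Persist this token and supply it on the next run to receive only the delta of changes.
lastDataToken = response.EntityChanges.DataToken;

If the token you supply has expired or is invalid, the platform will simply return the full record set again, so any consuming code should be written to tolerate this.

Demo: Deploying \u0026amp; Configuring the Dynamics 365 Data Export Service Working through all of the required steps to successfully configure the Data Export Service can be a little daunting. With this in mind, check out the video below, where I take you through each step:\nYou can find a copy of the PowerShell script I use on the Microsoft Docs site.\nAlternate Keys: What are they, and why should I use them? Typically, when you are integrating alongside an external system, it will contain unique record identifiers that differ from the globally unique identifier (GUID) for each Common Data Service record. It could be that this record identifier is an integer or string value of some kind, or it could be a mixture of different attributes (e.g. using both the address and name of the customer, we can tell whether the record is unique or not). Due to the myriad of different systems available not just through Microsoft, but other vendors too, it can be a challenge to find a straightforward solution to handle this.\nEnter stage left alternate keys, a useful feature we can enable on one or several attributes on an entity.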
By creating them, you are in effect informing Dynamics 365 Online / the Common Data Service that the contents of this field must always be unique within the system. After an alternate key is defined, the application will create a unique index in the database, covering the attributes you specify. Therefore, the hidden benefit of this is that queries targeting alternate key values will always run faster. At the time of writing this post, it is possible to use both the Power Apps maker portal and classic interface to create alternate keys; where possible, I would advise you to use the maker portal. An additional benefit with alternate keys is that the SDK supports out several capabilities that developers can easily leverage. For example, it\u0026rsquo;s possible to create new records by specifying its alternate key value and execute Web API Retrieve requests using one or multiple Alternate Key values.\nKeep in mind some of the limitations concerning alternate keys:\nThe platform enforces some limitations not only on the size of the alternate key (in bytes) but also on whether certain illegal Unicode characters exist in the field value. If any attribute value breaks these rules, errors will likely occur. You can only have a maximum of five per entity. The application relies on a system job to create the appropriate index for each newly defined alternate key. The processing time for this job can take a considerable amount of time to complete, depending on the size of your database. You can navigate into the classic interface at any time to track progress and also evaluate whether any errors have occurred. You are limited to the following data types when defining an attribute as an alternate key: Date and Time Decimal Lookup Option Set Single Line of Text Whole Number And that\u0026rsquo;s a wrap\u0026hellip; \u0026hellip;you might expect me to say at this stage. True enough, we\u0026rsquo;ve now covered all of the content within the MB-400 exam. What I wanted to do in next week\u0026rsquo;s post though is perform a round-up post that collects together all of the posts/videos published during the series, as well as sharing some general advice relating to the exam. Catch you again next week! 🙂\n","date":"2020-09-06T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-implementing-data-synchronisation-with-the-data-export-service/","title":"Exam MB-400 Revision Notes: Implementing Data Synchronisation with the Data Export Service"},{"content":"Welcome to the fourteenth post in my series focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. Last time around, we explored some of the capabilities offered by the Dynamics 365 Web API, that can be particularly useful when you are building integrations targeting Dynamics 365 / the Common Data Service. This post finished up our discussion on the Extend the platform area of the exam, which has a 15-20% total weighting. We now move into the final exam area, Develop Integrations and our first subject area concerning events:\nPublish and consume events\npublish an event by using the API publish an event by using the Plug-in Registration Tool register a webhook create an Azure event listener application Although this area of the exam has a somewhat low weighting (10-15%) when compared with the other subjects we\u0026rsquo;ve looked at, I would urge longstanding Dynamics CRM on-premise developers to pay particular attention here. 
Events are just one way in which you can integrate your Dynamics 365 Online / Common Data Service instance alongside Microsoft Azure. Developing modern, cloud applications involving core Microsoft products invariably means you must have a general awareness of the capabilities within Azure, so this exam area provides an excellent opportunity to increase your knowledge in this area as well. Let\u0026rsquo;s dive in now to see what events are all about and how they relate to Azure!\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam. It would help if you also have some understanding working with plug-ins and the Plug-in Registration Tool as well, which we\u0026rsquo;ve covered already in the series.\nWhat are Events? You may be worried at this stage that events are a whole new concept that will take considerable time to understand. Fortunately, that\u0026rsquo;s not the case at all and, if you have good familiarity working with the IExecutionContext Interface from within a plug-in, you will find yourself right at home. Events encapsulate all of the data points that we would typically work with in our execution context - whether that be shared variables, output parameters, Business Unit ID or more. An example of how an event can look, when passed out as a JSON object, can be seen below:\n{ \u0026#34;BusinessUnitId\u0026#34;: \u0026#34;f64c9d1f-d076-ea11-a812-000d3a86a586\u0026#34;, \u0026#34;CorrelationId\u0026#34;: \u0026#34;dfe79039-5a08-4d0a-b4ee-9534625c8654\u0026#34;, \u0026#34;Depth\u0026#34;: 1, \u0026#34;InitiatingUserAzureActiveDirectoryObjectId\u0026#34;: \u0026#34;00000000-0000-0000-0000-000000000000\u0026#34;, \u0026#34;InitiatingUserId\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;InputParameters\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;Target\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;Entity:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Attributes\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;territorycode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;address2_freighttermscode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;address2_shippingmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;isprivate\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;followemail\u0026#34;, \u0026#34;value\u0026#34;: true }, { \u0026#34;key\u0026#34;: \u0026#34;donotbulkemail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotsendmm\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;emailaddress1\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;john@domain.com\u0026#34; }, { 
\u0026#34;key\u0026#34;: \u0026#34;jobtitle\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Manager\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;customertypecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;fullname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;John Doe\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;isautocreate\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;ownerid\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;systemuser\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;isbackofficecustomer\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotbulkpostalmail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotpostalmail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotemail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;statecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 0 } }, { \u0026#34;key\u0026#34;: \u0026#34;address2_addresstypecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;donotphone\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;createdon\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;/Date(1598771723000)/\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;transactioncurrencyid\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;785af3f0-d976-ea11-a812-000d3a86a586\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;transactioncurrency\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;contactid\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;haschildrencode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;modifiedby\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;systemuser\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;leadsourcecode\u0026#34;, \u0026#34;value\u0026#34;: { 
\u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;statuscode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;modifiedonbehalfby\u0026#34;, \u0026#34;value\u0026#34;: null }, { \u0026#34;key\u0026#34;: \u0026#34;preferredcontactmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;lastname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Doe\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;firstname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;John\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;createdby\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;systemuser\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;educationcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;yomifullname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;John Doe\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotfax\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;merged\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;customersizecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;marketingonly\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;owningbusinessunit\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;f64c9d1f-d076-ea11-a812-000d3a86a586\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;businessunit\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;shippingmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;creditonhold\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;modifiedon\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;/Date(1598771723000)/\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;participatesinworkflow\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;preferredappointmenttimecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: 
\u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;exchangerate\u0026#34;, \u0026#34;value\u0026#34;: 1.0 } ], \u0026#34;EntityState\u0026#34;: null, \u0026#34;FormattedValues\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;territorycode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;address2_freighttermscode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;address2_shippingmethodcode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;isprivate\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;followemail\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotbulkemail\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotsendmm\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Send\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;customertypecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;isautocreate\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;isbackofficecustomer\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotbulkpostalmail\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotpostalmail\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotemail\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;statecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Active\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;address2_addresstypecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotphone\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;createdon\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;2020-08-30T07:15:23-00:00\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;haschildrencode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;leadsourcecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;statuscode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Active\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;preferredcontactmethodcode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Any\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;educationcode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;donotfax\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Allow\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;merged\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;customersizecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;marketingonly\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;shippingmethodcode\u0026#34;, 
\u0026#34;value\u0026#34;: \u0026#34;Default Value\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;creditonhold\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;modifiedon\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;2020-08-30T07:15:23-00:00\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;participatesinworkflow\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;No\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;preferredappointmenttimecode\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Morning\u0026#34; } ], \u0026#34;Id\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;contact\u0026#34;, \u0026#34;RelatedEntities\u0026#34;: [], \u0026#34;RowVersion\u0026#34;: \u0026#34;4198516\u0026#34; } } ], \u0026#34;IsExecutingOffline\u0026#34;: false, \u0026#34;IsInTransaction\u0026#34;: false, \u0026#34;IsOfflinePlayback\u0026#34;: false, \u0026#34;IsolationMode\u0026#34;: 1, \u0026#34;MessageName\u0026#34;: \u0026#34;Create\u0026#34;, \u0026#34;Mode\u0026#34;: 1, \u0026#34;OperationCreatedOn\u0026#34;: \u0026#34;/Date(1598771723000+0000)/\u0026#34;, \u0026#34;OperationId\u0026#34;: \u0026#34;b1b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;OrganizationId\u0026#34;: \u0026#34;9c5d5db0-47c7-4741-aa25-08eb4cdf59a3\u0026#34;, \u0026#34;OrganizationName\u0026#34;: \u0026#34;orgad623a9e\u0026#34;, \u0026#34;OutputParameters\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;id\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34; } ], \u0026#34;OwningExtension\u0026#34;: { \u0026#34;Id\u0026#34;: \u0026#34;aa45df79-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;sdkmessageprocessingstep\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null }, \u0026#34;ParentContext\u0026#34;: { \u0026#34;BusinessUnitId\u0026#34;: \u0026#34;f64c9d1f-d076-ea11-a812-000d3a86a586\u0026#34;, \u0026#34;CorrelationId\u0026#34;: \u0026#34;dfe79039-5a08-4d0a-b4ee-9534625c8654\u0026#34;, \u0026#34;Depth\u0026#34;: 1, \u0026#34;InitiatingUserAzureActiveDirectoryObjectId\u0026#34;: \u0026#34;00000000-0000-0000-0000-000000000000\u0026#34;, \u0026#34;InitiatingUserId\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;InputParameters\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;Target\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;Entity:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Attributes\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;emailaddress1\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;john@domain.com\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;jobtitle\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Manager\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;lastname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Doe\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;firstname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;John\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;creditonhold\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotpostalmail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotfax\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotphone\u0026#34;, \u0026#34;value\u0026#34;: false }, { 
\u0026#34;key\u0026#34;: \u0026#34;donotbulkemail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;followemail\u0026#34;, \u0026#34;value\u0026#34;: true }, { \u0026#34;key\u0026#34;: \u0026#34;donotemail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;preferredcontactmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;statuscode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;donotbulkpostalmail\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;transactioncurrencyid\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;785af3f0-d976-ea11-a812-000d3a86a586\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;transactioncurrency\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;ownerid\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;EntityReference:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Id\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;systemuser\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;preferredappointmenttimecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;customersizecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;address2_shippingmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;address2_freighttermscode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;educationcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;isautocreate\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;leadsourcecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;shippingmethodcode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: 
\u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;participatesinworkflow\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;marketingonly\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;territorycode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;isprivate\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;isbackofficecustomer\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;merged\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;donotsendmm\u0026#34;, \u0026#34;value\u0026#34;: false }, { \u0026#34;key\u0026#34;: \u0026#34;address2_addresstypecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;customertypecode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;haschildrencode\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;__type\u0026#34;: \u0026#34;OptionSetValue:http://schemas.microsoft.com/xrm/2011/Contracts\u0026#34;, \u0026#34;Value\u0026#34;: 1 } }, { \u0026#34;key\u0026#34;: \u0026#34;contactid\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34; } ], \u0026#34;EntityState\u0026#34;: null, \u0026#34;FormattedValues\u0026#34;: [], \u0026#34;Id\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;contact\u0026#34;, \u0026#34;RelatedEntities\u0026#34;: [], \u0026#34;RowVersion\u0026#34;: null } }, { \u0026#34;key\u0026#34;: \u0026#34;SuppressDuplicateDetection\u0026#34;, \u0026#34;value\u0026#34;: false } ], \u0026#34;IsExecutingOffline\u0026#34;: false, \u0026#34;IsInTransaction\u0026#34;: false, \u0026#34;IsOfflinePlayback\u0026#34;: false, \u0026#34;IsolationMode\u0026#34;: 1, \u0026#34;MessageName\u0026#34;: \u0026#34;Create\u0026#34;, \u0026#34;Mode\u0026#34;: 1, \u0026#34;OperationCreatedOn\u0026#34;: \u0026#34;/Date(1598771723000+0000)/\u0026#34;, \u0026#34;OperationId\u0026#34;: \u0026#34;b1b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;OrganizationId\u0026#34;: \u0026#34;9c5d5db0-47c7-4741-aa25-08eb4cdf59a3\u0026#34;, \u0026#34;OrganizationName\u0026#34;: \u0026#34;orgad623a9e\u0026#34;, \u0026#34;OutputParameters\u0026#34;: [], \u0026#34;OwningExtension\u0026#34;: { \u0026#34;Id\u0026#34;: \u0026#34;aa45df79-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;sdkmessageprocessingstep\u0026#34;, \u0026#34;Name\u0026#34;: null, \u0026#34;RowVersion\u0026#34;: null }, \u0026#34;ParentContext\u0026#34;: null, \u0026#34;PostEntityImages\u0026#34;: [], \u0026#34;PreEntityImages\u0026#34;: [], \u0026#34;PrimaryEntityId\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;PrimaryEntityName\u0026#34;: \u0026#34;contact\u0026#34;, 
\u0026#34;RequestId\u0026#34;: \u0026#34;bda24fe5-1f23-4c96-a4ab-34739a2f5628\u0026#34;, \u0026#34;SecondaryEntityName\u0026#34;: \u0026#34;none\u0026#34;, \u0026#34;SharedVariables\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;IsAutoTransact\u0026#34;, \u0026#34;value\u0026#34;: true }, { \u0026#34;key\u0026#34;: \u0026#34;DefaultsAddedFlag\u0026#34;, \u0026#34;value\u0026#34;: true }, { \u0026#34;key\u0026#34;: \u0026#34;ChangedEntityTypes\u0026#34;, \u0026#34;value\u0026#34;: [ { \u0026#34;__type\u0026#34;: \u0026#34;KeyValuePairOfstringstring:#System.Collections.Generic\u0026#34;, \u0026#34;key\u0026#34;: \u0026#34;contact\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Update\u0026#34; } ] } ], \u0026#34;Stage\u0026#34;: 30, \u0026#34;UserAzureActiveDirectoryObjectId\u0026#34;: \u0026#34;00000000-0000-0000-0000-000000000000\u0026#34;, \u0026#34;UserId\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34; }, \u0026#34;PostEntityImages\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;AsynchronousStepPrimaryName\u0026#34;, \u0026#34;value\u0026#34;: { \u0026#34;Attributes\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;fullname\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;John Doe\u0026#34; }, { \u0026#34;key\u0026#34;: \u0026#34;contactid\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34; } ], \u0026#34;EntityState\u0026#34;: null, \u0026#34;FormattedValues\u0026#34;: [], \u0026#34;Id\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;KeyAttributes\u0026#34;: [], \u0026#34;LogicalName\u0026#34;: \u0026#34;contact\u0026#34;, \u0026#34;RelatedEntities\u0026#34;: [], \u0026#34;RowVersion\u0026#34;: null } } ], \u0026#34;PreEntityImages\u0026#34;: [], \u0026#34;PrimaryEntityId\u0026#34;: \u0026#34;a2b1d88b-90ea-ea11-a815-000d3a86a3ce\u0026#34;, \u0026#34;PrimaryEntityName\u0026#34;: \u0026#34;contact\u0026#34;, \u0026#34;RequestId\u0026#34;: \u0026#34;bda24fe5-1f23-4c96-a4ab-34739a2f5628\u0026#34;, \u0026#34;SecondaryEntityName\u0026#34;: \u0026#34;none\u0026#34;, \u0026#34;SharedVariables\u0026#34;: [ { \u0026#34;key\u0026#34;: \u0026#34;IsAutoTransact\u0026#34;, \u0026#34;value\u0026#34;: true }, { \u0026#34;key\u0026#34;: \u0026#34;DefaultsAddedFlag\u0026#34;, \u0026#34;value\u0026#34;: true } ], \u0026#34;Stage\u0026#34;: 40, \u0026#34;UserAzureActiveDirectoryObjectId\u0026#34;: \u0026#34;00000000-0000-0000-0000-000000000000\u0026#34;, \u0026#34;UserId\u0026#34;: \u0026#34;c164a5aa-765e-4181-8771-537e8b1ebf3b\u0026#34; } The main difference with events is how we consume them; that is, outside of the application, as opposed to inside. There are a variety of reasons why it may be desirable to do this:\nAs we saw when discussing plug-ins, there are some particular limitations that sandbox processing can impose on your custom code - such as the restricted use of third party libraries and the 2-minute maximum execution time. By processing these operations external from Dynamics 365 / the Common Data Service, developers can circumvent these restrictions, while still benefitting from working with the same type of information typically available as part of a standard plug-in. For situations where your instance is processing hundreds or even thousands of record changes each hour, that then need to be processed asynchronously, events provide the most streamlined mechanism for achieving this. 
Also, it helps to reduce the reliance on the platform and the applications Asynchronous service in performing complex processing of these requests; instead, they can be straightforwardly \u0026ldquo;passed on\u0026rdquo; to a dedicated service responsible for this. It may be necessary to immediately trigger an external endpoint or API as soon as a user creates, updates or deletes a record. By using events alongside Webhooks (more on this shortly), developers have a streamlined mechanism to meet this requirement. With the recent changes announced around API request and service protection limits at the platform level, developers may start to struggle when using the traditional plug-in assembly route to process complex operations. As such, we can realise benefits by moving this processing outside of the application and, as part of this, avoid hitting any nasty error messages resulting from an overage in the number of processed platform requests. In most cases, you will typically use one of several different Microsoft Azure services when processing events received from the application. However, there are options available to integrate alongside other cloud platforms or systems. For the exam, focusing and having an awareness of the Azure options will be essential.\nUnderstanding Service Endpoints \u0026amp; the Azure Service Bus There are two core concepts to understand alongside events - service endpoints and the Azure Service Bus:\nService Endpoints: This defines the location where you wish to write your events out into, for further processing. In pretty much all cases, you will want to use the Plug-in Registration Tool to create your service endpoint, but you can also deploy one programmatically via the web API. You can also use a service endpoint to receive incoming requests back into Dynamics 365 for processing. Once defined, you must then register the appropriate steps that will trigger your endpoint request, much in the same way as defining a plug-in step (e.g. Update on Contact, Delete on Lead, etc.). An advantage with this is that we can include pre/post entity images as part of each event payload. The critical thing to remember, though, is that we must specify these as asynchronous operations; synchronous calls are not allowed. In most cases, your service endpoint will target an Azure Service Bus queue, but you can also configure an Azure Event Hub endpoint as well. Azure Service Bus: Depending on the type of integration you are attempting to perform, the synchronous (i.e. all at once) processing of information may not be needed or desirable. This could be down to the single fact that the number of potential requests to process will be too high. For when this is relevant, Azure Service Bus comes into the equation, by offering a decoupled, asynchronous mechanism to receive, analyse and route multiple requests through to an intended destination - whether this is a database, another endpoint or a storage location. The service bus will process all events it receives in order, holding onto each request for as long as is necessary before handing it off. As well as allowing for a predictable flow of information, it can also provide assurances from a failure standpoint; if, for example, the backend endpoint goes offline for whatever reason, messages will remain in the queue and resume processing as soon as the endpoint comes back online. 
Developers have flexibility over the type of Azure Service Bus listener service to implement, which Microsoft outlines in this article, but the most common scenario is to use a queue. We can also define the format of events that get written out to the Azure Service Bus listener but, in most cases, outputting this as JSON will be the recommended option.
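To give a flavour of what sits at the receiving end, below is a rough sketch of a queue-based listener - here assumed to be a simple console application using the Microsoft.Azure.ServiceBus NuGet package, reading events that have been configured to arrive as JSON; the connection string and queue name are placeholders:

//A minimal sketch of a Service Bus queue listener; the connection string and queue name are placeholders.
var queueClient = new QueueClient("<service bus connection string>", "<queue name>");

queueClient.RegisterMessageHandler(async (message, cancellationToken) =>
{
    //Each message body contains the serialised execution context for the triggering event.
    string json = Encoding.UTF8.GetString(message.Body);
    Console.WriteLine(json);

    //Process or route the event as required, then mark the message as complete.
    await queueClient.CompleteAsync(message.SystemProperties.LockToken);
},
new MessageHandlerOptions(args =>
{
    Console.WriteLine(args.Exception.Message);
    return Task.CompletedTask;
})
{ AutoComplete = false });

Console.ReadLine(); //Keep the console application alive while messages are processed.

Demo: Posting Dynamics 365 Events to Azure Service Bus In this video, we\u0026rsquo;ll walk through how to deploy out an Azure Service Bus resource and configure a service endpoint to receive requests from Dynamics 365:\nWebhooks Overview A webhook is a lightweight mechanism for integrating multiple Web APIs. It operates on a publish and subscribe model; namely, we publish an event out, and an external endpoint subscribes to each event, processing it as it sees fit. As semi-automated, standard HTTP requests in their simplest form, they conform broadly to this pattern and - if they are a new concept - you should experience little difficulty in understanding them if you\u0026rsquo;ve previously worked with RESTful Web APIs. Webhooks are commonplace these days, both in the Microsoft world and across other vendors as well. For example, Azure supports the ability to write out any log alert event as a webhook to an endpoint of your choosing.\nFrom a Dynamics 365 / Common Data Service perspective, developers can register custom webhooks to any endpoint of their choosing. We can handle authentication into these endpoints in one of four ways:\nHttpHeader: Here, we must declare the appropriate header key/value pairs that will allow us to authenticate into the endpoint. HttpHeader is the option you will need to go for if you are using Basic or OAuth 2.0 authentication via an authorization bearer token. WebhookKey: For this option, the platform will append a query string parameter called code onto the URL, whose value then allows you to authenticate into the API. If your endpoint is an Azure Function, then this is a possible candidate option for you to consider using, as all requests targeting your function will expect this by default. HttpQueryString: Best suited for endpoints that support shared access signature (SAS) authentication, developers specify the appropriate key/value pairs for this, in the same manner as the HttpHeader option. The main difference here is that the platform will instead add these values onto the endpoint URL as query parameters. Anonymous: Finally, it is possible to call any endpoint that does not enforce authentication. To do this, ensure you select the HttpHeader authentication type and supply a single key/value pair containing anything you like; otherwise, you may get errors when registering the Webhook. Then, similar to working with service endpoints, we define the appropriate steps and Pre/Post images that will trigger the webhook call. The resulting operation will then initiate an HTTP POST request to your endpoint, containing the event data illustrated in the example earlier.

To picture the receiving side, the sketch below shows one possible endpoint - an HTTP-triggered Azure Function that simply reads and logs the incoming payload. The function name and the properties being logged are illustrative assumptions, and the function-level authorisation shown is precisely what makes the WebhookKey option described above a natural fit:

//A minimal sketch of an HTTP-triggered Azure Function acting as a webhook endpoint;
//the function name and logged properties are illustrative only.
[FunctionName("D365WebhookReceiver")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    ILogger log)
{
    //The request body contains the execution context for the triggering event, serialised as JSON.
    string body = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic context = JsonConvert.DeserializeObject(body);

    log.LogInformation($"{context.MessageName} on {context.PrimaryEntityName} ({context.PrimaryEntityId})");

    return new OkResult();
}

An important question arises around the usage of Webhooks in comparison to the Azure Service Bus. Certainly, Webhooks are the most attractive option to consider if you are integrating alongside non-Azure based services, your expected volume of API calls is low, or you need your request to execute synchronously. However, they will ultimately only be as reliable as the endpoint that you are contacting.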
Also, for high volume requests where we can tolerate a delay in processing, Webhooks will likely fall over pretty quickly. When this occurs, the Service Endpoint and Azure Service Bus option represent your optimal choice instead.\nFor more information on how to work with Webhooks, consult this Microsoft Docs article, which also provides some examples for you to work through.\nDemo: Registering \u0026amp; Consuming a Dynamics 365 Webhook To see an example of how to configure and mock-up a basic webhook, check out the below video, which uses the convenient Request Bin tool:\nWe\u0026rsquo;re on the home stretch now, with only one more topic to look at before we wrap up the series. Next time around, we\u0026rsquo;ll look at how you can enable some specific capabilities on your Dynamics 365 / Common Data Service entities to support data synchronisation to an Azure SQL Database.\n","date":"2020-08-30T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-publishing-and-consuming-events/","title":"Exam MB-400 Revision Notes: Publishing and Consuming Events"},{"content":"You can discover all sorts of useful capabilities if you dive your nose deep enough into the Dynamics 365 Online / Common Data Service SDK. And, given that these products are evolving continually within the cloud, new features are being introduced all the time. With this in mind, it\u0026rsquo;s perhaps natural to expect that even the so-called \u0026ldquo;experts\u0026rdquo; of these platforms may sometimes overlook new or seemingly obvious changes\u0026hellip;😅 I\u0026rsquo;m unsure which of these categories the focus of today\u0026rsquo;s blog post fits into it but, given that Microsoft has not documented its capabilities very well, I thought I\u0026rsquo;d take some time to explore it further.\nWe\u0026rsquo;ve seen previously on the blog how to use the GetAttributeValue method, as being the \u0026ldquo;nice\u0026rdquo; way of accessing the various attributes you may need to work with as part of your C# code targeting Dynamics 365 / the Common Data Service. I won\u0026rsquo;t repeat too much old ground here but, suffice to say, I would highly recommend you always use this method as much as possible. As much as I like it though, one issue it does have is that you must always know the underlying data type that you are attempting to process. If, for example, you\u0026rsquo;re working with a defined list of attributes, but without their corresponding types declared, this method becomes pretty useless. Consider the following code snippet:\nList\u0026lt;string\u0026gt; myAttributes = new List\u0026lt;string\u0026gt;(); myAttributes.Add(\u0026#34;name\u0026#34;); myAttributes.Add(\u0026#34;description\u0026#34;); myAttributes.Add(\u0026#34;industrycode\u0026#34;); Entity e = Service.Retrieve(\u0026#34;account\u0026#34;, new Guid(\u0026#34;9e03afdc-5d37-e911-a97c-0022480749f0\u0026#34;), new ColumnSet(true)); foreach(string a in myAttributes) { string aVal = e.GetAttributeValue\u0026lt;string\u0026gt;(a); tracer.Trace(aVal); } This code works fine and dandy up until it hits the industrycode value in the myAttributes list, as this is the name of an Option Set, not a Single Line of Text (i.e. string) field. 
If a user has not populated this field with a value, then the code will silently skip over this and return a null value; if the field has been populated, however, you will get an error thrown with the following message:\nAn error has occurred: Unable to cast object of type \u0026lsquo;Microsoft.Xrm.Sdk.OptionSetValue\u0026rsquo; to type \u0026lsquo;System.String\u0026rsquo;.\nIn this situation, therefore, it becomes desirable to test our incoming attributes to confirm they are of the correct type before we try to access their values.\nEnter stage left the TryGetAttributeValue method. You can probably make a reasonable guess as to what this is doing from its name alone; namely, that it allows us to attempt to obtain an attribute\u0026rsquo;s value into our desired type, without necessarily throwing a nasty error if we screw up at all. What\u0026rsquo;s more, we can use the method to still obtain our attribute value, in much the same way as GetAttributeValue does, provided we have made the correct assumption regarding its underlying type. The method requires us to declare an object that will potentially store our attribute value when we call it. Apart from this main difference, it works pretty much as we\u0026rsquo;d hope it to - as the tweaked example below demonstrates:\nList\u0026lt;string\u0026gt; myAttributes = new List\u0026lt;string\u0026gt;(); myAttributes.Add(\u0026#34;name\u0026#34;); myAttributes.Add(\u0026#34;description\u0026#34;); myAttributes.Add(\u0026#34;industrycode\u0026#34;); Entity e = Service.Retrieve(\u0026#34;account\u0026#34;, new Guid(\u0026#34;9e03afdc-5d37-e911-a97c-0022480749f0\u0026#34;), new ColumnSet(true)); foreach(string a in myAttributes) { string strVal; OptionSetValue osVal; bool tryStr = e.TryGetAttributeValue\u0026lt;string\u0026gt;(a, out strVal); bool tryOS = e.TryGetAttributeValue\u0026lt;OptionSetValue\u0026gt;(a, out osVal); if (tryStr) tracer.Trace(strVal); if (tryOS) tracer.Trace(osVal.Value.ToString()); } As we can see, all we need to do is declare a Boolean type, through which our code stores the results of the TryGetAttributeValue method. Using this, we can then determine whether to output the value strVal or osVal to our trace log, which will store the value obtained by TryGetAttributeValue, provided the call did not return false. If needed, we can extend the solution out to support all possible data types within Dynamics 365 / the Common Data Service and, in the process, not worry about implementing a solution to figure out the correct data types or make subsequent API calls to determine this information. In short, the TryGetAttributeValue method provides a flexible and easy means of working with many different types of attributes simultaneously that may be subject to change at runtime or be difficult to infer easily. What\u0026rsquo;s more, the method shares the same benefit as GetAttributeValue by defaulting to a null value if the attribute, or its underlying value, does not exist, avoiding any needless errors from cropping up in your code.\nI guess you could say I derive a weird sense of pleasure whenever I discover a cool new feature as part of Dynamics 365 / the Common Data Service and, as you might have guessed, TryGetAttributeValue very much fits into this category. I\u0026rsquo;m unsure when Microsoft added this method to the SDK or, indeed, if it\u0026rsquo;s something that I have naively overlooked for many years now.
Regardless, I would urge you to consider using it within your code if the right circumstances arise, and I hope this blog post has been useful in showing you how to use it in practice. 🙂\n","date":"2020-08-23T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/working-with-the-trygetattributevalue-method-dynamics-365-common-data-service-sdk/","title":"Working with the TryGetAttributeValue Method (Dynamics 365 / Common Data Service SDK)"},{"content":"Welcome to post number thirteen in my series focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. Last week, we took a deep dive look into custom connectors, a powerful feature within Power Automate and canvas Power Apps that allows us to expose out our bespoke APIs for consumption within these services. Today, we finish off our discussion on the Extend the platform area of the exam, by discussing how we can Use platform APIs. For this subject, Microsoft expects candidates to demonstrate knowledge of the following:\nUse platform APIs\ninteract with data and processes using the Web API optimize for performance, concurrency, transactions, and batching perform discovery using the Web API perform entity metadata operations with the Web API use OAuth with the platform APIs When we refer to the Web API, we mean the one offered out by Dynamics 365 / the Common Data Service, which allows developers to execute a variety of different operations targeting entity records, organisational settings or various customisation settings. As a powerful tool in a developers arsenal when performing integrations, having a good understanding of the API is useful not just for exam MB-400, but also more generally as well. With this in mind, let\u0026rsquo;s take a look at what it is and the types of things we can do with it.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nWeb API Overview There will always be situations where you need to perform complex integration activities involving Dynamics 365 / the Common Data Service. Wherever possible, you should be considering how to use tools such as Power Automate flows to achieve these needs, as they provide a much simpler, streamlined mechanism for accommodating what may, at first look, appear to be a problematic integration between two systems. Despite the capabilities in Power Automate flows, they do have their limits. For example, if you\u0026rsquo;re developing a bespoke .NET application that needs to update data into the system, having to pass this through a flow can make your solution unnecessarily verbose. Also, this type of implementation has several security concerns, as you\u0026rsquo;d need to expose an HTTP endpoint with a simplified security layer. There may also be situations where you need to detect various metadata properties from your Dynamics 365 / Common Data Service entities so that you can replicate these into an external system that\u0026rsquo;s synchronising data. 
For this situation, and for where a Power Automate flow just won\u0026rsquo;t cut it, the Web API represents our most optimal route for interacting with the application.\nThe main benefit of the Web API is that it utilises open standards - namely, the Open Data Protocol (OData) version 4.0 standard - which a variety of different programming languages and tools support. Developers can use it to perform RESTful API requests, using standard HTTP operations that most developers will already understand. What\u0026rsquo;s more, working with the Web API does not require a complicated set of integrated development environments (IDEs); indeed, we can run many Web API requests using just a modern internet browser. The Web API supports pretty much every type of CRUD (Create, Read, Update, Delete) operation and, where necessary, requests conform to the established security model that Dynamics 365 / the Common Data Service provides us. For example, if the user calling the API does not have delete permissions on the Contact entity, then this operation will be blocked if attempted. In short, the Web API is a powerful tool at your disposal when performing deep integrations between external systems or in meeting requirements that cannot be met by other features available within the Power Platform.\nHandling Authentication As with any Software as a Service (SaaS) solution, developers will need to provide a valid set of authentication credentials if they wish to work with the Web API. Much like how we access the application via the user interface, Microsoft controls Web API access via Azure Active Directory (AAD) and, more specifically, through the OAuth 2.0 protocol. As such, developers have flexible means to authenticate into the Web API that can also satisfy any security concerns. All requests going into the Web API require a valid access token generated from AAD. To create this, developers must do a few things within their AAD tenant, including setting up an App Registration. From there, additional configuration may be required, depending on your scenario:\nIf you wish to use an interactive login prompt with your Microsoft 365 credentials, then ensure that you have enabled implicit flow on your App Registration, and set your grant_type to implicit. To authenticate using an Application User account, you will need to generate a secret for your App Registration and perform the appropriate setup steps within the classic Dynamics 365 interface. Then, use the grant_type value of client_credentials. I\u0026rsquo;ve discussed the process and advantages of using an Application User account previously on the blog. They should always be your first port of call over using non-interactive user accounts. Microsoft provides several different Azure Active Directory Authentication Library (ADAL) client libraries, which cover a variety of different platforms and provide a streamlined mechanism of generating the appropriate access tokens for authenticating. Use these to your advantage.\nDemo: Authenticating to the Dynamics 365 Web API To better understand the steps involved when authenticating into the Web API using the implicit grant flow, check out the video below, where I take you through each stage:\nUsing the Discovery URL For most Dynamics 365/Power Platform deployments, an organisation will typically have multiple different environments set up on the same tenant, each of which meets a specific purpose - perhaps one environment for testing, another for production and a backup of production for upgrade testing.
Developers, therefore, sometimes need to inspect and determine the correct environment details that they want to connect to. Also, it may be desirable for your bespoke application to automatically provide the list of all available environments to a user for selection, based on the user\u0026rsquo;s supplied credentials, as opposed to a valid URL that the user may not know.\nTo address both of these needs, Microsoft provides developers with two specific Discovery URLs that we can use to interrogate details about all instances that the authenticated user can access. The first of these is the global discovery URL. This URL is the recommended one that developers should be using from March 2020 onwards and, if you have a multi-region deployment of Dynamics 365 / the Common Data Service, details of these instances will also be returned. The URL for this is as follows:\nhttps://globaldisco.crm.dynamics.com/api/discovery/v2.0/Instances\nBy using a valid access token and accessing this endpoint via a GET request, you can list full details for all instances the user account has access to. And, because it is an OData endpoint, we can execute specific queries to return just the information we need.\nThe second URL available is known as the regional discovery URL. As its name indicates, it is scoped to a specific Dynamics 365/Common Data Service geographic location and, as such, will only return details of organisations within that particular region. For example, by using the URL below, you can get the details of all organisations the user account has access to within the UK region, crm11:\nhttps://disco.crm11.dynamics.com/api/discovery/v9.1/\nOther than that, the endpoint is similar to the global one but does return a reduced subset of data. In March this year, Microsoft announced that this URL is now deprecated and will be removed entirely in March 2021. As such, I would not recommend using it for new projects, and I would encourage you to make plans to migrate to the global URL.\nDemo: Working with the Dynamics 365 Discovery URL In this next video, I\u0026rsquo;ll show you how to work with the Discovery URL to return information relating to your Dynamics 365 / Common Data Service instances:\nNOTE: This video uses the regional, as opposed to global, discovery URL. As noted above, this endpoint is now deprecated. To switch across to use the global discovery URL, modify the following values within your Postman environment:\nurl: https://globaldisco.crm.dynamics.com version: 2.0 Then, you should be able to use the following URL to return information:\nhttps://globaldisco.crm.dynamics.com/api/discovery/v2.0/Instances\nWorking with the Web API After you have discovered the URL for your organisation, you can then determine the precise Web API endpoint that will accept your requests. The format of this URL will generally resemble the following:\nhttps://{organisation}.api.{region}.dynamics.com/api/data/v9.1/\nMicrosoft provides some further details that may assist you in constructing this for your specific scenario.\nMuch like the discovery URLs, the Web API is a fully compliant OData v4 endpoint, therefore providing us with a great degree of flexibility in terms of the types of requests it will accept.
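To tie the authentication and discovery pieces together, here is a rough sketch of requesting a token via the client_credentials grant and then calling the global discovery endpoint. It assumes an app registration (with a client secret) and a corresponding Application User already exist; the tenant ID, client ID and secret shown are placeholders:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

class DiscoveryExample
{
    static async Task Main()
    {
        // Placeholder values for the app registration created in AAD.
        const string tenantId = "00000000-0000-0000-0000-000000000000";
        const string clientId = "11111111-1111-1111-1111-111111111111";
        const string clientSecret = "<client secret>";

        using var http = new HttpClient();

        // 1. Request an access token for the global discovery service via client_credentials.
        var tokenResponse = await http.PostAsync(
            $"https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token",
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "client_credentials",
                ["client_id"] = clientId,
                ["client_secret"] = clientSecret,
                ["scope"] = "https://globaldisco.crm.dynamics.com/.default"
            }));

        using var tokenJson = JsonDocument.Parse(await tokenResponse.Content.ReadAsStringAsync());
        string accessToken = tokenJson.RootElement.GetProperty("access_token").GetString();

        // 2. Call the global discovery endpoint to list the instances the caller can access.
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        string instances = await http.GetStringAsync(
            "https://globaldisco.crm.dynamics.com/api/discovery/v2.0/Instances");

        Console.WriteLine(instances);
    }
}

The same token-acquisition pattern applies to the Web API itself - you would simply swap the scope for your environment's own URL (for example, https://yourorg.crm11.dynamics.com/.default) and point your requests at the endpoint format shown above.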
As an example of this flexibility, issuing the following URL as a GET request would return the top 25 Account records in the system, ordered by the Created On attribute:\nhttps://{organisation}.api.{region}.dynamics.com/api/data/v9.1/accounts?$top=25\u0026amp;$orderby=createdon\nHaving a good awareness of all possible types of operations supported by the Web API will be essential if you plan to take the MB-400 exam. So let\u0026rsquo;s explore some of the supported operation types:\nCreating Records: You can add new records of any entity type that the user calling the API has permission to create. We can also create related entity records as part of the same operation, enforce any relevant duplicate detection rules and return full details of any new entity record to the caller. Retrieving Records: As we have seen already, we can return multiple different entity records, but also specific ones as well. We can even enable some useful options, such as returning formatted values for option sets/lookups, extended properties from related records (such as a specific attribute) or return data based on an alternate key value. Update or Delete Records: These do pretty much as you\u0026rsquo;d expect them to, but you have some convenient options available that allow you to update or remove the value of a single attribute. Also, you can perform Upsert operations as well, i.e. insert the record if it does not exist, otherwise update it. Perform Association/Disassociation Actions: Rather than performing an Update operation, you can instead call specific methods on the API to create or remove entity relationships. Merge Records: This action allows us to merge Account, Contact, Case, and Lead records via the Web API, using the same mechanism that is available within the user interface. Other entity types are not supported. Call Functions: Dynamics 365 exposes a range of different bound and unbound functions, such as the CalculateRollupField function, most of which can be called directly from the Web API. Call Actions: We touched upon Actions briefly before in the series when looking at plug-ins. In much the same way as Functions, we can call bound and unbound actions through the Web API. These provide a summary of the most common types of operations you might need to perform. It is also possible to carry out user impersonation, detect duplicate data, retrieve data from predefined entity views and - as we will see shortly - perform batch operations too. Developers can also use the Web API to detect various metadata properties for entities, attributes and relationships, which we\u0026rsquo;ll touch upon in a second. All this detail might seem like a lot to take in for the exam itself but, considering that this portion of the exam will work out to be approximately 5% of what you are ultimately assessed on, don\u0026rsquo;t worry too much about studying the minute detail of each operation type.\nDemo: Retrieving Entity Metadata using the Web API To get a good flavour of how to work with the Web API to retrieve Entity Metadata, check out the video below:\nUsing the Web API for Batch Operations For those coming from a SQL database background, you will no doubt be familiar with the concept of atomicity when executing a set of SQL statements. For example, let\u0026rsquo;s assume we have 3 Transact-SQL (T-SQL) statements - 2 INSERT statements and 1 UPDATE. All three of these scripts must complete successfully when run; otherwise, we could leave our database in an undesirable state.
Now, we could just wrap some logic around this that, say, deletes the 2 INSERTed records if our UPDATE statement fails, but this is like using a sledgehammer to crack a walnut. Instead, we can wrap all of the queries into a single transaction. If one statement fails, then all potential changes are rolled back, and we return to the same state the database was in before our SQL statements even touched our database. This occurrence is, in effect, an atomic transaction; a set of operations that must all succeed in unison or fail together.\nBecause Dynamics 365 / the Common Data Service uses SQL Server as its underlying database, the concept of transactions applies in many situations. We saw this already when looking at the event pipeline and, in particular, how records return to their previous state if a plug-in hits an error for whatever reason. To help us in providing some control over how the platform performs multiple actions within a transaction, the Web API exposes the capability to run batch operations, often referred to as a changeset request. These are purely atomic in their nature and execution, thereby allowing developers to execute up to 1000 related requests as part of the same transaction. And, if anything goes wrong as part of this, the database reverts to its previous state. The construction of a batch request must conform to a specific multipart text format and be sent across to the following URL:\nhttps://{organisation}.api.{region}.dynamics.com/api/data/v9.1/$batch\nPretty much all of the operations discussed previously are supported as part of a batch request, and they also support some useful features such as parameters. For situations where you need to process a complex set of changes coming in from an external system, they are a useful feature to consider using.\nDemo: Performing Batch Operations using the Web API Given the complexity around constructing batch operations, the last video below goes through this subject in detail, explaining how to build a simple changeset request and execute this against the Web API:\nGeneral Performance Tips When working with any API, it is always essential to model your requests so that they perform optimally. The Dynamics 365 Web API is no different in this regard, so keep in mind the following as you start working with it:\nTake advantage of the various OData query parameters to limit the number of records returned when performing record retrievals. Where possible, use a $filter parameter and restrict the number of records returned via the $top or $skip options to page large result sets. You should also use $select query parameters to bring back only the attributes you are most interested in. When attempting to expand related entity records via a $expand request, keep in mind the performance impact this has and the hard limit of 10 imposed by the platform. API requests are subject to the throttling and request limits imposed by the platform more generally. Be sure to read and understand the relevant Docs articles, which explain both the request limits/allocations and service protection limits that Microsoft imposes at the tenant level. These will typically be dictated based on the number of licenses you have purchased on the Microsoft 365 portal. There are several different Header values that you should always include as part of your requests, which I\u0026rsquo;ve outlined below.
Doing this helps to prevent any ambiguity, should new versions of the endpoint be released in future: Accept: application/json OData-Version: 4.0 OData-MaxVersion: 4.0 As we\u0026rsquo;ve seen previously in the blog when running client-side JavaScript from model-driven app forms, a specific method is exposed out to allow us to execute Web API requests. You should always use this method in this context, and never perform the types of operations described in this article from client-side JavaScript/Typescript form functions. And with that, we come to the end of the Extend the platform area of the MB-400 exam. Next time, we will start diving into some of the integration options available within the platform, as we review how to publish and consume events from Dynamics 365 / the Common Data Service.\n","date":"2020-08-16T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-working-with-the-dynamics-365-web-api/","title":"Exam MB-400 Revision Notes: Working with the Dynamics 365 Web API"},{"content":"Welcome to the twelfth post in my series focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. In the last post, we saw how developers could leverage C# code as part of plug-ins to implement complex business logic within Dynamics 365. We now move on to examine another extensibility component within the Extend the platform section of the exam, as we see how to Configure custom connectors for Power Apps and Flow. For this, Microsoft expects candidates to demonstrate knowledge of the following subjects:\nConfigure custom connectors for Power Apps and Flow\ncreate a definition for the API configure API security use policy templates Depending on the type of system you wish to integrate alongside Dynamics 365 or the Power Platform, a custom connector may suit your needs, thereby allowing you to leverage an existing API you have constructed or an entirely new one. What\u0026rsquo;s more, developers can then take these custom connectors and make them available more generally to their customer base. In short, they can be an incredibly powerful feature to leverage. So let\u0026rsquo;s take a look at what they are all about and how you can get started with them.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nCustom Connectors Overview As we have seen when evaluating both canvas Power Apps and Power Automate flows, developers can use well over 325+ standard connectors, covering a variety of different Microsoft and other third-party systems, such as Salesforce, SAP and Oracle Database. For most situations, this should give you everything you need to get disparate systems talking to each other within the Power Platform. Custom connectors are there to provide additional headroom for when this is not possible to achieve. Powered by Azure API Management underneath the hood, Custom Connectors provide a guided, visual mechanism of exposing an entirely custom API, alongside its various operations, as an additional connector for the following services:\nCanvas Power Apps Power Automate Flows Logic Apps Developers can then utilise these in much the same manner as the other connectors discussed previously. 
For example, you can define trigger actions from your API, pull in entire datasets or perform set operations, such as updating an existing record. The capabilities within your current API ultimately dictate all of this so, in some situations, developers may need to author a new API themselves and use services such as Azure Functions or Azure Web Apps to host it. Further discussion on how to do this is beyond the scope for this blog post and, indeed, the exam itself; candidates should instead focus their attention towards the various supported options for connectors and - most crucially - identify the best scenario in which to use them.\nSupported Scenarios So this being the case, we should touch upon some of the situations where you may need to consider using a custom connector:\nIntegrating with On-Premise APIs: As custom connectors fully support the use of the on-premises data gateway, they provide the securest and most streamlined mechanism for making any internal APIs accessible for use within your cloud systems. Incorporating REST / SOAP APIs: It is worth noting that custom connectors only support APIs of these particular types. Specifically, canvas Power Apps and Power Automates can only work with HTTP REST APIs; Logic Apps support SOAP APIs as well. For situations where your existing API does not meet these requirements, it will be impossible for you to use them with custom connectors. Sharing an API within an Organisation: For situations where you envision only contacting an API a single time within, let\u0026rsquo;s say, a Power Automate flow, using a single HTTP action step may be the most prudent (and fastest) option to consider. Going beyond this, if you think you will need to share out or utilise the same action steps across multiple flows and, also, you wish to provide a simplified means of accessing the same set of actions, a custom connector will fit your needs. ISV Solution Development: Many of the custom connectors available today have their origin as an internally developed one, which has then been certified and published out by Microsoft for general use. By doing this, ISV developers can help to drive further usage of their existing solutions and even introduce them to an entirely new audience. Reviewing Custom Connector Creation Options Developers have a few different options when creating a custom connector. But, almost always, you will start from within the Power Apps Maker Portal. Then, by navigating to Data -\u0026gt; Custom Connectors, you can locate the appropriate option to kick off their creation:\nThe best thing about this is that we don\u0026rsquo;t necessarily need to reinvent the wheel, provided we have existing definitions of our API\u0026rsquo;s already built out. At the time of writing this post, developers can utilise the following, if available:\nPostman: If you\u0026rsquo;ve never used this tool before and find yourself regularly poking into numerous API\u0026rsquo;s, then this tool could help you out. In short, the app provides you with a local \u0026ldquo;sandbox\u0026rdquo; to mock-up, execute and inspect the results of various HTTP actions. It has numerous useful features, such as being able to store requests in collections, handling multiple different types of authentication and also providing online synchronisation capabilities. From a custom connector standpoint, you can use Postman to build out the various action requests that your API exposes out and then import these in as a collection file into the Power Apps Maker Portal. 
Note as part of this that only V1 Postman collections are supported. There\u0026rsquo;s a great article on the Postman website that shows you how to get started with collections. OpenAPI: Similar in some respects to a Postman collection, an OpenAPI definition is instead intended as a complete declaration of all the capabilities stored within an API, using an open standard. Developers would typically generate this for their API after creation, to provide other developers with a valuable resource when working with the API in question. Thankfully, if you are using Azure API Management as the backend for your API, you can very quickly generate the appropriate specification using the Azure Portal. It is also possible to build this directly into your ASP.NET Core app if that is what your API is using; I\u0026rsquo;m sure it\u0026rsquo;s possible to do the same for other languages/stacks too, but I\u0026rsquo;ll leave you to find that out on your own. 🙂 For the options listed in the screenshot above, it\u0026rsquo;s worth focusing on each of these in more detail:\nCreate from blank: Using this option, you must define each of the individual elements of your API, such as its security, actions and triggers. If you are unsure where to start and don\u0026rsquo;t have an appropriate Postman/OpenAPI definition at your disposal, then this will be the most suitable option to choose. Import an OpenAPI file: With this, you can import the OpenAPI JSON definition file from your local computer. Import an OpenAPI from URL: If your definition is instead available as part of a publicly accessible URL, use this option to import it instead. Import a Postman Collection: This choice bears some similarities to the OpenAPI options, as it will let you import a JSON collection file generated from Postman. As mentioned earlier, make sure you export this as a V1 definition from the Postman app. Regardless of which option you select, you can then populate some standard settings for your connector, such as it\u0026rsquo;s display name, image and description. At this point, you can also override the URL scheme, the host URL and the Base URL, if required:\nHandling API Security A custom connector can be secured using multiple mechanisms. These authentication profiles do not store sensitive credentials within the connector. Instead, they define the authentication options that users of the connector must set when using your custom connector for the first time. In this manner, it is, therefore, possible to connect to multiple instances of your API on the same tenant, using different credentials. A custom connector can support the following four types of authentication methods:\nNo Authentication: Best for when you are exposing a publically available API that requires no underlying authentication. This option, quite obviously, provides zero security for your API. Basic Authentication: Utilising the same authentication method as the defined HTTP standard, here you outline details of the user name and password parameter labels that users must provide for your custom connector. I would typically advise against using Basic Authentication where-ever possible, as it doesn\u0026rsquo;t afford the best security for your custom connector or API itself. API Key: Here, you provide the label details for an API Key, that the custom connector includes as part of either the header or as a query parameter on the URL itself. Many of the Azure Cognitive Services API\u0026rsquo;s use API Key\u0026rsquo;s as their authentication mechanism. 
API Key\u0026rsquo;s suffer from the same security failings as Basic Authentication and, as such, should be avoided unless necessary. OAuth 2.0: The recommended and most secure option for your custom connector, and also one that integrates neatly alongside Azure Active Directory (AAD) and a variety of other services, such as Google and Facebook. To see an example of how to set this up using an Azure App Registration for an Azure Function, you can consult the following Microsoft Docs article. In short, though, using AAD OAuth 2.0 requires you to set up an app registration on the appropriate tenant, which then exposes out the details you can use when defining your profile. Working with Policy Templates If you\u0026rsquo;ve had some experience working Azure API Management or have been following my blog religiously, then this topic might feel somewhat familiar. To further fine-tune how your custom connector ultimately interacts with your API, you can perform a variety of different manipulation activities as each action/trigger gets fired. These are brought together within a list of re-usable templates, that cover common operations that may be necessary and, from an Azure API Management standpoint, map back to the various policy definitions that this service supports. The list below provides an overview of the types of actions policy templates support:\nSet host URL: Using this, you can replace the URL for the request with an entirely new one. Route request: Let\u0026rsquo;s you route the action to a completely different path on the same URL from the one specified. Set HTTP header: Allows you to append, skip or replace a Header value as part of the API request. For example, you could supply a Cache-Control value to help improve performance when the custom connector uses your API. Set query string parameter: Allows you to append, skip or replace a query parameter value as part of the API request. For example, if your API is an OData endpoint, you might want to append on a default $top query parameter to limit the number of results that are returned each time. At the time of writing this post, there are also a couple of additional policy templates available marked as Preview, that allows you to do things such as converting objects into arrays and vice-versa. Microsoft will not assess candidates on preview features within exams, so it is unlikely that you will need to know about these as part of your exam prep or, indeed, before the exam is retired at the end of this year.\nDemo: Creating a Custom Connector for Azure Cognitive Services Although not mentioned within the exam specification, it is useful to see how to work with Actions and how to create a custom connector from start to finish. With this in mind, check out the video below, where I take you through each of these steps, using the Azure Translator API:\nThat pretty much wraps it up for the basics on custom connectors, which should be all you need to know for the exam. 
In the next post, we\u0026rsquo;ll finish off our discussion on the various extension capabilities of the platform, as we look at how you can leverage the platforms Web API to perform a variety of different operations.\n","date":"2020-08-09T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-building-a-custom-connector-for-power-apps-power-automate/","title":"Exam MB-400 Revision Notes: Building a Custom Connector for Power Apps \u0026 Power Automate"},{"content":"Welcome to the eleventh post in my series focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. For those following the series, apologies again for the mini-hiatus. In fact, it\u0026rsquo;s been so long that Microsoft has now announced that MB-400 will be going the way of the Dodo at the end of the year. Despite this, I will be continuing and, all being well, finishing off this blog series for the following reasons:\nPeople may still choose to sit the current exam while they still can. For this reason, the series may yet have some value once completed entirely. Looking at the Skills Measured document for the new replacement PL-400: Microsoft Power Platform Developer exam, and there is a lot of crossover in content. Therefore, the existing posts and accompanying videos may still have some merit in the new state of play. I\u0026rsquo;ll be able to make an appropriate determination of this after Microsoft release PL-400 and I\u0026rsquo;ve been able to sit it. But, hopefully, some of this content will remain useful for at least another year or so. I set out to run this series to completion, no matter what it takes 🙂 So with this in mind, I hope you continue to stick around for future posts and that they provide you with a useful tool as part of any revision or learning you are doing.\nLast time around in the series, we took a deep-dive look into command buttons, as well as demonstrating how valuable the Ribbon Workbench tool is in helping us to fine-tune aspects of a model-driven Power App\u0026rsquo;s interface. This topic rounded off our discussion of the Extend the user experience area of the exam, meaning that we now move into the Extend the platform section. This area has equal weighting to Extend the user experience (15-20%), and the first topic concerns how we Create a plug-in. Specifically, candidates must demonstrate knowledge of the following:\nCreate a plug-in\ndebug and troubleshoot a plug-in develop a plug-in use the global Discovery Service endpoint optimize plug-ins for performance register custom assemblies by using the Plug-in Registration Tool create custom actions Plug-ins have been a mainstay within Dynamics 365, and its predecessor Dynamics CRM, for well over a decade now. Longstanding developers are, therefore, probably going to be well-versed in how to build them, but it\u0026rsquo;s always useful to get a refresher of old topics. With that in mind, let\u0026rsquo;s dive in!\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam. I would also recommend that you have a good, general knowledge of how to work with the C# programming language before approaching plug-in development for the first time. 
Microsoft has published a whole range of introductory material to help you learn the language quickly.\nWhat is a Plug-in? In the series so far, we\u0026rsquo;ve already touched upon the following, functional tools that developers may leverage when dealing with complex business requirements:\nBusiness Rules: These provide an excellent mechanism for handling simple logic, targeting both model-driven app forms and also platform-level operations. Power Automate Flows: Using this, you can typically go the extra mile when compared with classic workflows, thereby allowing you to integrate multiple systems when processing asynchronous actions. JavaScript: For situations where Business Rules can\u0026rsquo;t meet your particular requirement, and you need specific logic to trigger as a user is working on a model-driven form, this is the best tool in your arsenal. All of these are great, but there may be situations where they are unsuitable, due to the sheer complexity of the business logic you are trying to implement. Also, some of the above tools do not support the ability to carry out synchronous actions (i.e. ones that happen straight away, as the user creates, updates, deletes etc. records in the application). Finally, it may be that you need to work with some specific elements of the SDK that are not exposed out straightforwardly via alternative routes. In these situations, a custom-authored plug-in is the only solution you can turn too.\nSo what are they then? Plug-ins allow developers to write either C# or VB.NET code, that is then registered within the application as a .NET Framework class library (DLL) and executed based on specific conditions you specify within its configuration. For example, when a user creates a Contact, retrieve the details of the parent Account record and update the Contact so that these details match. Developers will use Visual Studio when authoring a plug-in, typically using a class library project template. The Dynamics 365 SDK provides several modules that expose out the variety of different operations that a plug-in can support. Some of the other things that plug-ins support include:\nBoth synchronous and asynchronous execution. Custom un-secure/secure configuration properties that can modify the behaviour of a plug-in at runtime. For example, by providing credentials to a specific environment to access. Pre and Post Entity Images, snapshots of how a record and its related attributes looked before and after a platform level operation occurs. For specific operations, such as Update, plug-ins can also support filtering attributes - basically, a list of defined fields that, when modified, will cause the plug-in to trigger. Being able to specify the execution order for one or multiple plug-in steps. This capability can be useful when you need to ensure a set of steps execute in your desired order. These days, thanks to tools such as Business Rules and Power Automate, we see the lessening importance of plug-ins and, typically, you would want to avoid considering their usage straight out the gate. 
However, they still have a place and, when used appropriately, become the only mechanism you can resort to when working with complicated business processes.\nUnderstanding Messages, the Execution Pipeline \u0026amp; Steps Before we start diving into building a plug-in for the first time, it is useful to provide an overview of the three core concepts that every plug-in developer needs to know:\nMessages: These define a specific, platform level operation that the SDK exposes out. Some of the more commonly used Messages within the application include Create, Update or Delete. Some system entities may have their own set of unique Messages; CalculatePrice is an excellent example of this. From a plug-in perspective, developers essentially \u0026ldquo;piggyback\u0026rdquo; onto these operations as they are performed and inject their custom code. The following Microsoft Docs article provides a detailed list of all supported messages and their corresponding entities and is well worth a read as part of preparing for the exam. Execution Pipeline: As a user triggers a particular message, the application processes it using a defined set of stages, known as the execution pipeline. These are discussed in detail on this Microsoft Docs article, but the key ones we need to be aware of are: PreValidation: At this stage, the database transaction has not started. Also, the application has not yet performed any appropriate security checks; potentially meaning that the operation may fail if the user does not have the correct security privileges. PreOperation: Here, the database transaction has already started. The platform knows at this stage that no security constraints are preventing the operation from completing, but the transaction may still fail for other reasons. PostOperation: Although the database transaction has still not completed at this stage, the core operation of the message (executed within the MainOperation stage) will have completed by this stage. A failure at any of these stages will cause the database transaction to roll back entirely, returning the record(s) to their original state. Now, from a plug-in perspective, the execution pipeline is exposed out to developers as stages where your custom code can execute. This can provide developers with a high degree of flexibility and capability when it comes to running their custom code. As a general rule of thumb, developers would use each of these stages under the following circumstances: PreValidation: Use this stage to perform checks to cancel the operation. PreOperation: This stage is useful for when you need to modify any values, based on some kind of business logic. Doing this at this stage would also prevent triggering another platform Message. PostOperation: Use this stage for when you need to carry out additional logic not related to the current record or potentially provide additional information back to the caller after the platform completes the core operation. Developers will typically need to give some thought towards the execution pipeline and the most appropriate stage to select, based on the requirements being worked with. Steps: Simply writing a plug-in class and deploying out the library is not sufficient to trigger your custom logic. Developers must additionally provide a set of \u0026ldquo;instructions\u0026rdquo;, commonly known as Steps, that tells the application when and where to execute your custom code.
It is here where both the Message and Execution Pipeline come into play, and you would always specify this information when creating a Step. Additional info you can include here includes the execution order, whether the plug-in will execute synchronously or not and the display name of Step when viewed within the application. We will see shortly how these topics come into play, as part of using the Plug-in Registration Tool.\nBuilding Your First Plug-In: Pre-Requisites Before you start thinking about building your very first plug-in, you need to make sure that you have a few things installed onto your development machine:\nVisual Studio: I would generally recommend using either Visual Studio 2017 or 2019 when developing for Dynamics 365 online. If you don\u0026rsquo;t have a Visual Studio/MSDN subscription, then the Community Edition can be used instead. A Dynamics 365 / Common Data Service Tenant: Because where else are you going to deploy out and test your plug-in? 🙂 Knowledge of C# / VB.NET: Attempting to write a plug-in for the first time without at least a basic grasp of one of these languages will impede your progress. Plug-in Registration Tool: To deploy your plug-in out, you will need access to this tool as well. We will cover this off later in the post. Demo: Creating a Basic Plug-in using Visual Studio 2019 \u0026amp; C# The best way to learn how to create a plug-in is to see someone build one from scratch. In the YouTube video below, I talk through how to build a straightforward plug-in using Visual Studio 2019 and C#:\nFor those who would prefer to read a set of instructions, then this Microsoft Docs article provides a separate tutorial you can follow instead.\nUsing the Plug-in Registration Tool Once you\u0026rsquo;ve written your first plug-in, you then need to consider how to deploy this out. In most cases, you will use the Plug-in Registration Tool to accomplish this. Available on NuGet, this lightweight application supports the following features:\nVaried access options for when working with multiple Dynamics 365 environments, both online and on-premise. The ability to register one or multiple plug-in assemblies. The registering of plug-in steps and images, including the various settings, discussed earlier. Via the tool, you can install the Plug-in Profiler, an essential tool when it comes to remote debugging your plug-ins; more on this capability later. The Plug-In Registration Tool is also necessary for when you are deploying out other extensibility components, including Service Endpoints, Web Hooks or Custom Data Providers. We will touch upon some of these later on in the series. For this topic area and the exam, you must have a good general awareness of how to deploy and update existing plug-in assemblies.\nDemo: Deploying a Basic Plug-in using the Plug-in Registration Tool In this next video, I\u0026rsquo;ll show you how to take the plug-in developed as part of the previous video and deploy it out using the Plug-in Registration Tool:\nThere is also a corresponding Microsoft Docs tutorial that covers off these steps too.\nDebugging Options for Plug-ins Plug-ins deployed out into Dynamics 365 must always run within sandbox execution mode. As a result, this imposes a couple of limitations (some of which I\u0026rsquo;ll highlight in detail later on), the main one being that it greatly hinders the ability to debug deployed plug-ins easily. 
To get around this, Microsoft has provided two mechanisms that developers can leverage:\nPlug-in Trace Logging: Using this, developers can write out custom log messages into the application, at any point in their code. This can be useful in identifying the precise location where a plug-in is not working as expected, as you can output specific values to the log for further inspection. You can also utilise them to provide more accurate error messages for your code that you would not necessarily wish to show users as part of a dialog box. Getting to grips with trace logging is easy - it\u0026rsquo;s just a few lines of code that you need to add to your project - and you can find out more about how to get started with it on the Microsoft Docs site. Plug-in Registration Tool \u0026amp; Profiling: While trace logging is undoubtedly useful, there will be situations where you need something more. For example, it may become desirable to breakpoint code, inspect values/properties as they are processed through and determine when your plug-in hits specific conditions or error messages. For these situations, the Profiler comes into play, by allowing developers to \u0026ldquo;playback\u0026rdquo; their code execution using Visual Studio and the Plug-in Registration Tool. We mentioned the Plug-in Profiler tool earlier, which is a mandatory requirement as part of this and must be deployed out to your instance first. From there, you can generate a profile file that allows you to rewind execution within Visual Studio. Developers will typically use both of these debugging methods in tandem when trying to figure out issues with their code. The first option provides a friendly, unobtrusive mechanism of inspecting how a plug-in has got on, with the Profiler acting as the nuclear option when the problem cannot be discerned easily via trace logging alone. Understanding the benefits/disadvantages of both and how to use them will be essential as part of your exam preparation. With this in mind, check out the two demo videos below that show you how to work with these features in-depth:\nDemo: Debugging a Basic Plug-in using Trace Logging Demo: Debugging a Basic Plug-in Using the Plug-in Registration Tool General Performance / Optimization Tips Anyone can build and deploy a plug-in, but it can take some time before you can do this well. Here are a few tips that you should always follow to ensure your plug-ins perform well when deployed out:\nAlways keep in mind some of the limitations as part of sandbox execution for your plug-ins, including: The 2 minute limit on execution time. Restriction on the use of specific 3rd party DLL\u0026rsquo;s, such as Newtonsoft.Json Various restrictions around accessing operating system level information, such as directories, system state etc. For situations where sandbox limitation will cause issues in executing your business logic, you will need to consider moving away from a plug-in and adopting another solution. Filtering attributes provide a great way of ensuring your code only executes when you need it to. You should always use these wherever possible. Make sure to disable any plug-in profiling and remove the Profiler solution once you are finished. Active profiles can considerably slow down performance, and the Profile solution can also introduce unintended dependencies on core entities, causing difficulties when moving changes as part of a solution file. 
The Solution Checker provides an excellent mechanism for quality checking your code and can give some constructive recommendations on where your plug-in can be improved. You should always run this at least once before moving your plug-in out into other environments. Read through the following Microsoft Docs article and take care to follow the suggestions it outlines. Don\u0026rsquo;t Forget\u0026hellip;Custom Actions \u0026amp; Global Discovery Service Endpoint As Microsoft has included them on the exam specification, it\u0026rsquo;s worth talking about these two topics briefly. However, only a general awareness of them should be sufficient, and I wouldn\u0026rsquo;t devote too much of your revision time towards them.\nWe\u0026rsquo;ve already looked at Messages in-depth and seen how they can act as a \u0026ldquo;gateway\u0026rdquo; for developers to bolt on specific logic when an application-level event occurs. Custom Actions allow you to take this a step further, by creating custom Messages that are exposed out via the SDK/API. For example, you could combine the Create/Update Messages of the Lead/Opportunity entities into a new message called SalesProgress. Plug-ins or operations targeting the Web API could then work with or trigger actions based on this. Custom Actions have been available within the application for many years now, and many developers argue that they are one of the most underrated features in Dynamics 365. They are also fully supported for use as part of Power Automate flows too. In short, they can be incredibly useful if you need to group multiple default messages into a single action that can then be called instead of each one.\nFinally, it is worth touching upon how developers use the Global Discovery Endpoint from a plug-in standpoint, but we must first cover the concepts of early binding and late binding. In the video demos above, I wrote all of the code out using the late-binding mechanism. What this means is that, instead of declaring an object for the specific entity I wanted to work with (such as Contact), I instead used the generic Entity class and told the code which entity it\u0026rsquo;s supposed to represent. Which is fine, and gives me some degree of flexibility\u0026hellip;but it does ultimately mean that I won\u0026rsquo;t detect any issues with my code (such as an incorrect field name) until runtime. Also, I have no easy way to tell how my entity looks using IntelliSense; instead, I must continuously refer back to the application to view these properties. To get around this, we can use a mechanism known as early-binding, where all of the appropriate entity structures are generated within the project and referenced accordingly. The CrmSvcUtil.exe application provides a streamlined means of creating these early-bound classes and, as you might have guessed, you need to use the Global Discovery Endpoint to generate these classes successfully. There have been many heated debates online regarding which mechanism is better. While early-binding does afford some great benefits, it does add some overhead into your development cycle, as you must continuously run the CrmSvcUtil application each time your entities change within Dynamics 365. The short comparison below illustrates the difference in practice.
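As a rough illustration only - it assumes an early-bound Contact class has already been generated with CrmSvcUtil.exe and added to the project, and that service is an IOrganizationService reference with contactId holding an existing record's GUID - the two styles look like this side by side:

// Late-bound: everything is a generic Entity, and attribute names are plain strings.
// A typo such as "lastnme" only surfaces as a problem at runtime.
Entity lateBound = service.Retrieve("contact", contactId, new ColumnSet("lastname"));
string lateName = lateBound.GetAttributeValue<string>("lastname");
lateBound["jobtitle"] = "Developer";
service.Update(lateBound);

// Early-bound: the generated Contact class gives compile-time checking and IntelliSense.
Contact earlyBound = service.Retrieve("contact", contactId, new ColumnSet("lastname"))
    .ToEntity<Contact>();
string earlyName = earlyBound.LastName;
earlyBound.JobTitle = "Developer";
service.Update(earlyBound);

Both styles round-trip through the same SDK types underneath, so the choice is largely one of developer ergonomics rather than capability.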
All I would recommend is to try both options, identify the one that works best for you and, most importantly, adopt your preferred solution consistently.\nPlug-ins can help when modelling out complicated business logic, that is impossible to achieve via other functional tools within the Power Platform. I hope this post has provided a good insight into their capabilities and to help you in your revision. In the next post, I\u0026rsquo;ll show you how you can extend Power Automate and Power Apps via custom API connectors, thereby allowing you to connect to a myriad of different systems.\n","date":"2020-08-02T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-building-deploying-debugging-plug-ins-using-c/","title":"Exam MB-400 Revision Notes: Building, Deploying \u0026 Debugging Plug-ins using C#"},{"content":"With Microsoft Inspire 2020 happening earlier this week, we naturally saw a whole host of new announcements, with Business Applications, the Power Platform and the Common Data Service Microsoft Dataflex forming a core part of this. As a result of the COVID-19 crisis, tools such as Microsoft Teams have become increasingly invaluable, by allowing teams to work together collaboratively, while still being many miles or continents apart. As many organisations contemplate long-term adoption of Teams, it becomes natural that we start to see efforts to bring low code application development, hosted within the confines of a proper relational database, as an add-on feature within Teams. While there is much to consume and potentially litigate over within these announcements, I wanted to home in on some specific news that may have slipped under the radar - namely, some of the upcoming changes relating to Business Application certifications. In the following blog post published on Tuesday, we got a first peek into some new exams, that will become available to sit on or after September 2020:\nMB-800: Microsoft Dynamics 365 Business Central Functional Consultant PL-200: Microsoft Power Platform Functional Consultant PL-400: Microsoft Power Platform Developer All of these will earn you a corresponding Associate Certification once passed; previously, associate-level certifications have typically required at least two exam passes to obtain, so this change makes it easier than ever before to earn a Microsoft certification:\nDynamics 365 Business Central Functional Consultant Associate Power Platform Functional Consultant Associate Power Platform Developer Associate Now some of you may be thinking at this stage \u0026ldquo;Hey! Isn\u0026rsquo;t there already a developer AND functional consultant exam available for the Power Platform?\u0026rdquo;. Well, as part of this change, both MB-200 and MB-400 will be retired on December 31st 2020. Candidates can still sit them to earn credit towards each of their corresponding certifications until the end of 2021 at the latest.\nAs with any change, it is always important to sit back and review them in detail. With this in mind, I thought I\u0026rsquo;d share some of the things I\u0026rsquo;ve dissected after evaluating the new certifications more closely.\nConsidering what\u0026rsquo;s happened on the Azure side of things, a change like this was probably inevitable. For those working in the Azure space, certification upheaval has been the norm here for quite a while now. At the start of the year, pretty much every major Azure exam available was replaced with a new variant, which saw a staggered release cycle earlier this year. 
Beyond this, new Azure certification pathways have been popping up left, right and centre, covering roles such as solution architect and skill areas such as AI, Data Fundamentals and DevOps, to name but a few. I would speculate that Microsoft is gearing up for a \u0026ldquo;Round 2\u0026rdquo; when it comes to the various role and skill-based exams currently available for Business Applications. With this in mind, Microsoft may have decided to start again with a clean slate. I wouldn\u0026rsquo;t be surprised if there are further certification changes ahead within the Business Application space, and if we see changes within the Microsoft 365 side of things too.\nDoes this herald the start of performance-based testing for Business Application exams? What I\u0026rsquo;m unsure about at this stage is whether we will start to see some of the performance-based aspects of the existing Azure exams finally come across into MB-800, PL-200 or PL-400. To summarise, these will be assessed sections where the exam automatically provisions an actual environment for you. From there, you must complete a specific set of tasks, with your given marks reflecting how you got on. If executed well, it does provide a more realistic mechanism of assessing a candidate\u0026rsquo;s knowledge of a particular system, and perhaps makes the experience more engaging. Guess we will have to wait and see on this one.\nBusiness Central is finally getting some love. Dynamics 365 Business Central was released just over two years ago, so the fact that it is now getting a corresponding functional exam is welcome. However, it is impossible to validate this further at the time of writing this post, as the Skills Measured document for it has yet to be published. It\u0026rsquo;s also worth noting that Microsoft plans to release MB-800 after PL-200 and PL-400; October at the earliest, according to the website. Having done minimal work with Business Central in the past, I\u0026rsquo;m not well-positioned to make any further comment relating to this. Still, it does seem strange that Microsoft is not considering a developer variant of the exam at this juncture. Whether this is because the application does not support the same extensibility as the other Dynamics 365 applications, I\u0026rsquo;m not sure. It\u0026rsquo;s possible, therefore, that there\u0026rsquo;s a missing piece of the puzzle here.\nDon\u0026rsquo;t panic if you plan to sit MB-200 or MB-400 shortly. I know people and colleagues who will be sitting these exams soon, and my advice to them would be not to back out. As mentioned already, you will still earn progress towards your chosen certification. What\u0026rsquo;s more, the skills covered will be highly relevant to what is currently available within Dynamics 365 / the Power Platform today.\nComparing PL-200 to MB-200 At the time of writing this post, it\u0026rsquo;s not possible to do any form of detailed comparison here; similar to MB-800, the full Skills Measured document has yet to be published. However, we can learn from the website that PL-200 will assess candidates in the following broad areas:\nConfigure the Common Data Service Build a Power App Create automations with Power Automate Configure Power Virtual Agents Create visualizations with Power BI From here, we can see that Power Virtual Agents and Power BI - topic areas not mentioned at all in MB-200 - will be covered in-depth. The introduction of these topics almost certainly makes this a much more Power Platform-focused affair.
It\u0026rsquo;s no longer about Dynamics 365 as an application within the Power Platform, but rather, all of the available tools within it that you can leverage to build your specific business application.\nComparing PL-400 to MB-400 On the PL-400 side of things, the Skills Measured document is available, meaning we can analyse the exam in-depth. In summary, there appears to be a broad similarity to MB-400 in the areas you need to learn. Some of the new topics include:\nPower Virtual Agents More focus on the extensibility aspects of other Power Platform solutions, such as Power BI and Power Apps Business Units and Teams Increased focus on solution deployments and implementing an Application Lifecycle Management (ALM) process around this How to work with App Checker and Solution Checker in Power Apps Microsoft has also chosen to drop some areas currently present in MB-400. For example, the data export service is no longer explicitly mentioned, and some elements relating to canvas Power Apps, such as building reusable component libraries, have been removed completely. Overall, I think candidates can expect a similar experience to MB-400, with some of the noticeable holes in the previous exam now duly filled in.\nKeep Calm and Carry On It can often be challenging to keep up with the pace of changes occurring not only in the Microsoft Business Applications space but also more generally in IT these days. The best and only thing we can do is to embrace things openly, get stuck in and work any upheaval to our advantage. With this in mind, I\u0026rsquo;m eagerly looking forward to sitting these exams, finishing off my blog/video series on MB-400 and, all being well, transitioning this existing content across so that it applies to PL-400 after it launches.\nWhat are your thoughts on these new exam changes? Do you plan to sit them once they are released? Should more exams be made available covering Business Central? Let me know in the comments below! 🙂\n","date":"2020-07-26T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/some-thoughts-on-the-new-business-application-microsoft-certifications/","title":"Some Thoughts on the New Business Application Microsoft Certifications"},{"content":"Sometimes, when you find yourself customising Dynamics 365 or a model-driven Power App, you need to go the extra mile in fine-tuning how the interface looks. And while we can use themes or the site map designer to achieve most requirements in this area, it may become necessary to add, remove or modify individual buttons displayed within the application ribbon when a user accesses a dashboard, view or individual record. Doing this can sometimes be easier said than done though; and, as we have seen previously on the blog, these types of changes are something that developers, not functional specialists, may find themselves more comfortable doing. Fortunately, thanks to the Ribbon Workbench tool, a lot of the pain can be taken out of this. And if, for example, you need to remove some specific buttons that display within an entity record form, it\u0026rsquo;s just a simple case of right-clicking it within the Workbench and selecting the appropriate option:\nHowever, a colleague and I recently came across a situation where this was not a viable option. Specifically, we were trying to globally hide the following buttons that display on the Quote entity form:\nTry as we might, we couldn\u0026rsquo;t find a way of locating the appropriate button after digging through the Ribbon Workbench.
It just wasn\u0026rsquo;t there. At this stage, it seemed like the only course of action was to crack open an XML editor and fiddle about with the raw ribbon definition instead to see if we could get the appropriate Hide action implemented. For those who have attempted this previously, I\u0026rsquo;m sure you\u0026rsquo;d agree that this is not something you\u0026rsquo;d readily recommend. Based on this and thinking perhaps that this was not an isolated issue for us alone, we did some research. We came across the following post on the Dynamics 365 communities site, which was incredibly illuminating. It turns out Dynamics 365 controls the visibility of these buttons as part of an application-level setting, which is dead easy to switch off completely. All we needed to do was navigate across to the App Settings -\u0026gt; Overview area of the application, and click on the Manage button on the Convert to PDF option:\nAs we then see, we can enable/disable the functionality for most of the core entities within the Sales module:\nWith that done and dusted, the Quote form will no longer display these two buttons:\nNow, keep in mind, this will disable the described functionality across the whole entity. So if we did need to show/hide the button based on some kind of logic, I think your only remedy is to go into the XML itself and manually customise the Enable or Display rules applied against the following buttons:\nMscrm.Form.quote.EmailAsPDF Mscrm.Form.EmailAsPDF.Populate.Flyout Mscrm.Form.quote.CreatePDF However, attempt this at your own risk. From looking at how Microsoft has hooked up the various commands to these buttons, you have every chance of getting yourself into a complete mess, and I suspect that a future upgrade by Microsoft could wipe any changes you make. Better, I think, to simply use the settings above to disable the functionality altogether, if that is your ultimate end goal. 🙂\n","date":"2020-07-19T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/hiding-the-pdf-generation-buttons-in-dynamics-365-sales/","title":"Hiding the PDF Generation Buttons in Dynamics 365 Sales"},{"content":"During my time working in IT, I\u0026rsquo;ve seen and dealt with my fair share of bizarre issues - ones which I can easily term as \u0026ldquo;WTF\u0026rdquo; in both their cause and potential resolution. What\u0026rsquo;s worse is when they occur using a system that, ostensibly, you think you have a pretty good handle on. Clearly, despite often having years or decades of familiarity with something, you can never honestly claim to be a total expert in it - a state of affairs that you can derive both comfort and frustration out of in equal measure. Take Microsoft SQL Server as a prime example for me. As my \u0026ldquo;first love\u0026rdquo; when it comes to working with Microsoft technologies, I feel I have a good grasp of the subject area and, in particular, on how computed columns work. Columns of this type have their values generated on the fly, often deriving from a formula. So, for example, take the amount specified within Col1 and multiply it against Col2 in the same row. In their default state, computed column values are not stored within a table, unless you use the PERSISTED option when creating them.
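As a quick, hypothetical illustration of that idea (the table and column names here are invented for the purpose), a computed column can be defined with or without the PERSISTED option:

CREATE TABLE dbo.OrderLine
(
    OrderLineID INT IDENTITY(1,1) PRIMARY KEY,
    Quantity    INT NOT NULL,
    UnitPrice   MONEY NOT NULL,
    -- Calculated on the fly every time the row is read:
    LineTotal AS (Quantity * UnitPrice),
    -- Calculated when the row is written and physically stored alongside it:
    LineTotalPersisted AS (Quantity * UnitPrice) PERSISTED
);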
Regardless of how you use them, they provide database developers with a smooth and powerful way to generate semi-dynamic column outputs.\nRecently, I was working with a SQL Database project in Visual Studio, creating a table with a PERSISTED computed column. The definition for this table looked a little something like this:\nCREATE TABLE dbo.MyTable ( MyPKField NUMERIC(12, 0) PRIMARY KEY, MyIntegerField INT NOT NULL, MyPersistedColumn AS CAST(MyIntegerField AS NVARCHAR(20)) PERSISTED, ); To explain the \u0026ldquo;why\u0026rdquo; behind using the Persisted Column in the first place, this was so that the application linked to the database could more easily retrieve values when using a WHERE query targeting this field, via a Nonclustered Index I had setup. Attempting to perform any data type conversions as part of your queries can often force the generated query plan not to utilise any indexes you have specified. Therefore, you should pay attention to any unnecessary type conversions within your query and, as per this example, provide a column of the correct type in the source table.\nNow, to go back to the theme of bizarre issues\u0026hellip;in this case, whenever I deployed out the database project as a DACPAC deployment into SQL Server, the table was being dropped and recreated every single time. Given this table contained many millions of rows and had several different views, indexes etc. linked to it, this meant that each deployment also then had to DROP/CREATE these objects as well. This circumstance greatly inflated the deployment time for the database - partly because the publish action also recreates tables in a fashion that prevents data loss (provided you\u0026rsquo;ve enabled the Block incremental deployment if data loss might occurred option in your publish profile). It does this by creating a new table, inserting all records from the existing table into it and then performing a DROP/rename on the new table. All of this, in conclusion, contributes to the length of the deployment. So, in theory, you could treat all this as a minor annoyance. But in practice, having to sit around while it took close to an hour to deploy out all database changes is not something I\u0026rsquo;d want to inflict on myself or anyone else 🙂\nSo with all this in mind, I started doing some research. Initially, I came across the following post on StackExchange, with a solution that seemed sensible - namely, explicitly define the column as NOT NULL, like so:\nCREATE TABLE dbo.MyTable ( MyPKField NUMERIC(12, 0) PRIMARY KEY, MyIntegerField INT NOT NULL, MyPersistedColumn AS CAST(MyIntegerField AS NVARCHAR(20)) PERSISTED NOT NULL, ); Making this modification didn\u0026rsquo;t work though. So some more extensive investigation and, as it turns out, comparison action would be required to determine the root cause. Fortunately, if you are using SQL Server Data Tools (SSDT) within Visual Studio, there is a handy option available that lets us compare database schemas and produce a list of differences. As part of this, you can select an existing database, reference a database project or select a physical DACPAC file as part of your analysis. I won\u0026rsquo;t go into too much detail about this feature, as you can read up more about it on the Microsoft Docs website, but its a useful tool to consider when diagnosing problems like this. 
You can access it by navigating to Tools -\u0026gt; SQL Server -\u0026gt; New Schema Comparison\u0026hellip; within Visual Studio:\nUsing this tool, I ran a comparison against the deployed database and the database project and discovered something interesting. Can you spot this in the (heavily redacted) screenshot below?\nFor those who didn\u0026rsquo;t spot it, take a look at line 4 within the Object Definitions. For the database project, we have defined a CAST to perform the data conversion of the MyIntegerField. Yet the database itself is using a CONVERT instead. So, from the looks of it, the DACPAC deployment is creating the column using CONVERT instead. Given that these functions are virtually identical, as attested to by Microsoft, you can perhaps understand why a DACPAC deployment may choose to use one variant over the other. But regardless of this fact, the DACPAC deployment was detecting that the schema was different and creating the appropriate statements to DROP / CREATE the table again. Very bizarre!\nSo how to get around this then? By simply changing the CREATE TABLE script in the database project to use CONVERT instead, like so\u0026hellip;\nCREATE TABLE dbo.MyTable ( MyPKField NUMERIC(12, 0) PRIMARY KEY, MyIntegerField INT NOT NULL, MyPersistedColumn AS CONVERT(NVARCHAR(20), MyIntegerField) PERSISTED NOT NULL, ); \u0026hellip;the DACPAC deployment no longer detects that the destination schema has changed and stops DROPping the table each time.\nAs mentioned already, I would certainly class this as one of the most bizarre issues I have come across when working with SQL Server and Visual Studio. The only reason I can think of is that the DACPAC deployment prefers the use of CONVERT (which is a T-SQL specific function) over CAST because it has some enhanced options around date formatting which may be useful, depending on your scenario. Regardless, thanks to some of the valuable tools available as part of SSDT, we can diagnose issues like this and take the appropriate steps to get our database project fixed and, in the process, save us from waiting around ages while our deployments complete 🙂\n","date":"2020-07-12T00:00:00Z","image":"/images/VisualStudio-FI.jpg","permalink":"/sql-server-computed-columns-dacpac-deployments-why-is-my-table-dropped-each-time/","title":"SQL Server Computed Columns \u0026 DACPAC Deployments - Why Is My Table Dropped Each Time?"},{"content":"If you\u0026rsquo;ve done any serious work involving Entity Framework (EF) Core in the past, there\u0026rsquo;s a good chance that you\u0026rsquo;ve worked with Code First Data Annotations when modelling your classes in C#. To summarise, these provide you with an in-line mechanism of describing the various DDL objects that you are referencing from your underlying data source. Typically, this may be required when you need the names, attributes and other general properties of your classes to differ from what your data source is surfacing. They are also an almost mandatory requirement when you want EF Core to handle the creation of your database automatically when deploying out your project for the first time. While I do personally find this approach to be borderline heretical, it is nonetheless a valid means of modelling out a database that is intrinsically bound to your EF Core project. 
So don\u0026rsquo;t let me put you off from using it if it will meet your particular requirements.\nNow, it\u0026rsquo;s all well and good saying that something is good, but it\u0026rsquo;s far more important, I think, to see how it works in action. With this in mind, let\u0026rsquo;s assume we are adopting a pious approach for our EF Core project - namely, our database is already existing - and we are dealing with a mismatch between our database and class object names. Specifically, suppose we want to expose out a class called MyClass, but our underlying data source is a SQL Server table called MyTable. We would have to utilise the Table annotation, as demonstrated below, to ensure our code can navigate across to our intended location at runtime:\nusing System; using System.ComponentModel.DataAnnotations.Schema; namespace MyEFProject.Classes { [Table(\u0026#34;MyTable\u0026#34;)] public class MyClass { //Column definitions go here... } } We can extend this further if we happen to be using custom schemas within our database. So, if we choose to create our MyTable within a schema called Sample, we can adjust the annotation as follows:\nusing System; using System.ComponentModel.DataAnnotations.Schema; namespace MyEFProject.Classes { [Table(\u0026#34;MyTable\u0026#34;, Schema = \u0026#34;Sample\u0026#34;)] public class MyClass { //Column definitions go here... } } Pretty neat. But what happens if we need to reference a view as opposed to a table instead? I dealt with this requirement recently when building out a read-only OData endpoint using ASP.NET Core MVC and, despite my efforts, I could not locate a valid source to tell me that views are a supported. Well, after experimenting further, I can confirm that it IS possible to do this via the very same Table Annotation used earlier. So, if our SQL Server view is called vw_MyView and exists on the very same schema referenced earlier, here\u0026rsquo;s how our class would need to look:\nusing System; using System.ComponentModel.DataAnnotations.Schema; namespace MyEFProject.Classes { [Table(\u0026#34;vw_MyView\u0026#34;, Schema = \u0026#34;Sample\u0026#34;)] public class MyClass { //Column definitions go here... } } And, although I have not tested this myself, it should also be possible for you to then write or update data into the view, in much the same manner as we can write manual SQL queries to do just this.\nUsing Data Annotations within EF Core can significantly simplify the process for handling situations like the ones described in this post, without needing to write a whole bunch of specialised code instead. If you haven\u0026rsquo;t already, I will urge you to read through this informative Microsoft Docs article, which talks through in detail the types of things you can do with them. To summarise then, never treat Data Annotations as a one-trick pony. Regardless of whether you are getting EF Core to handle the creation of your database structure automatically or are referencing an existing database, there will be a whole host of options available here to help you along.\n","date":"2020-07-05T00:00:00Z","image":"/images/CSharpVS-FI.png","permalink":"/using-sql-server-views-within-entity-framework-core-code-first-data-annotations/","title":"Using SQL Server Views within Entity Framework Core (Code First Data Annotations)"},{"content":"Working with Azure Resource Manager (ARM) templates can sometimes feel like a pure grind. 
You could say the experience is akin to spending many hours wandering around a JRPG video game, killing enemies and hoping to get that super rare drop you\u0026rsquo;ve got your heart set on. No matter how much you try, your prize can often not surface until completing many hours of tedious, repetitive tasks. Whether we are looking at a real IT or fantasy world situation, the experience can feel startlingly similar. What can then compound IT issues even further is when you bring other tools into the equation, such as Azure Pipelines. In the case of ARM related deployments, they are usually happy bedfellows and, arguably, an incredibly potent combination in streamlining your cloud deployments and removing human intervention from the process. That is, of course, unless they start throwing pesky little errors like this:\nAt this point, it\u0026rsquo;s probably useful to step back for a few moments and explain the situation. I was attempting to use Azure Pipelines to deploy out a single Linux App Service and its corresponding .NET Core 3.1 web application. This process involves two-stages in Azure - we first must create an App Service Plan. Think of this as your physical web server, with a defined specification, and the one that costs you money each month. Next, we then create our Web Apps (or App Services). Depending on the pricing tier of our App Service Plan, we can have several of these hosted on a single App Service Plan. All of this can be created from the Azure Portal or in an Azure template like the one below:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;Parameters\u0026#34;: {}, \u0026#34;variables\u0026#34;: {}, \u0026#34;resources\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/serverfarms\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2018-02-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyLinuxAppServicePlan\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;UK South\u0026#34;, \u0026#34;sku\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;B1\u0026#34; }, \u0026#34;kind\u0026#34;: \u0026#34;linux\u0026#34; }, { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/sites\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2018-11-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyDOTNETCoreApp\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;UK South\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Web/serverfarms\u0026#39;, \u0026#39;MyLinuxAppServicePlan\u0026#39;)]\u0026#34; ], \u0026#34;Identity\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;SystemAssigned\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;MyDOTNETCoreApp\u0026#34;, \u0026#34;serverFarmId\u0026#34;: \u0026#34;[resourceId(\u0026#39;Microsoft.Web/serverfarms\u0026#39;, \u0026#39;MyLinuxAppServicePlan\u0026#39;)]\u0026#34;, \u0026#34;siteConfig\u0026#34;: { \u0026#34;metadata\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;CURRENT_STACK\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;dotnetcore\u0026#34; } ] } }, \u0026#34;resources\u0026#34;: [] } ], \u0026#34;outputs\u0026#34;: {} } For this example, we are explicitly creating a Linux App Service Plan - primarily because its cheaper and .NET Core apps can run on it without issue - so, as such, we tag on two specific properties:\nFor the App Service Plan (Microsoft.Web/serverfarms), we populate the kind 
value with linux. Next, for our Web App (Microsoft.Web/sites), we tell the web app to expect a .NET Core app, via the CURRENT_STACK property. And this template does work from a deployment standpoint - try it yourself if you don\u0026rsquo;t believe me. The result is an apparently \u0026ldquo;working\u0026rdquo; Linux web app, that is ready to have its corresponding web application deployed out to using the appropriate DevOps task. Or not, because of the error message we get above. Attempting to do a local deployment of the application via Visual Studio was a no go too. The Linux Web App was not even visible when trying to select it from the correct Subscription.\nAfter performing painstaking research using tools so far undiscovered by the average IT professional, I stumbled across a Stack Overflow post, with a secondary answer that seemed to have some merit, particularly given that Visual Studio was having difficulty locating the web app. It turns out that, because of a missing property at the App Service Plan level, the resource had technically been created as a Windows App Service Plan, not Linux as the Azure Portal seemed to indicate. Thankfully, it\u0026rsquo;s an easy fix - just add on a new property called reserved to the App Service Plan and set it\u0026rsquo;s value to true. The updated template would, therefore, look like this:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;Parameters\u0026#34;: {}, \u0026#34;variables\u0026#34;: {}, \u0026#34;resources\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/serverfarms\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2018-02-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyLinuxAppServicePlan\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;UK South\u0026#34;, \u0026#34;sku\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;B1\u0026#34; }, \u0026#34;kind\u0026#34;: \u0026#34;linux\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;reserved\u0026#34;: true } }, { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/sites\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2018-11-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyDOTNETCoreApp\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;UK South\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Web/serverfarms\u0026#39;, \u0026#39;MyLinuxAppServicePlan\u0026#39;)]\u0026#34; ], \u0026#34;Identity\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;SystemAssigned\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;MyDOTNETCoreApp\u0026#34;, \u0026#34;serverFarmId\u0026#34;: \u0026#34;[resourceId(\u0026#39;Microsoft.Web/serverfarms\u0026#39;, \u0026#39;MyLinuxAppServicePlan\u0026#39;)]\u0026#34;, \u0026#34;siteConfig\u0026#34;: { \u0026#34;metadata\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;CURRENT_STACK\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;dotnetcore\u0026#34; } ] } }, \u0026#34;resources\u0026#34;: [] } ], \u0026#34;outputs\u0026#34;: {} } Running this updated template through your pipeline and then re-attempting the App Service update will cause it to complete without issue - hooray!\nPerhaps I have become jaded in my experience, but these types of bitty issues are, regrettably, commonplace when attempting to work with ARM templates. 
Often, merely copying and pasting an example or exporting an existing resource a template will be sufficient, and there will be a degree of fiddling needed to get things working nicely within your deployments. That\u0026rsquo;s not to say that I hate ARM templates with a burning passion, far from it. They are the best tool in your arsenal when embracing a truly DevOps culture within your organisation, with a prize - fully functioning, automated deployments - that, I think, more than exceeds that same rush after getting your rare drop in a JRPG 🙂\n","date":"2020-06-28T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/resolving-the-parameter-linuxfxversion-has-an-invalid-value-error-azure-devops/","title":"Resolving \"The parameter LinuxFxVersion has an invalid value\" Error (Azure DevOps)"},{"content":"Regular readers of the blog and Azure Data Factory aficionados may recall a post I did last year, where I discussed some of the limitations present as part of the Dynamics 365 / Common Data Service connector. The post focused its attention on the tools inability to map data into two specific field types - Owner and Customer. These differ from the norm of other fields within the Common Data Service database, as they are effectively multi-entity lookup fields. What does this mean in practice? That we can, for example, associate the Owner field to either a User or Team record in the system. The crux of the issue last year ultimately came down to the fact that we had no way of specifying the type of record to write to either of these fields. Hence, they were unsupported, and we had to resort to a bespoke solution to get around the problem.\nBearing all this in mind, I was therefore pleased to not only see the following recent comment from last years post\u0026hellip;\n\u0026hellip;but also the following email from the Azure User Voice site\u0026hellip;\nMicrosoft has, it seems, heard our voices and, as we can see on the Docs website to confirm, this capability is now possible within Azure Data Factory. A big thanks to Kevin and to everyone else who voted for this feature! What\u0026rsquo;s even better is that it\u0026rsquo;s effortless to start working with this new feature, which is why I thought I\u0026rsquo;d show you how in this post.\nTo set the scene, let\u0026rsquo;s assume we are importing data from a SQL Server database into the Common Data Service. Specifically, we have many Quote records we wish to import and, also, we want to associate these alongside existing Account records as part of the same import step, using the default customerid field on the Quote entity. And, as you may have guessed already, this field type is indeed that of Customer. We have a list of all the Account GUID values stored in our SQL Server database so that bit is easy - just a straight mapping configured on our Copy Data task:\nSo far, so good. Next, we need to tell the task which record type we are mapping into - in this case, the Account entity. 
There are two ways we can do this:\nAs helpfully suggested on the Microsoft Docs article referenced above, we can very straightforwardly add on an additional column to our dataset, with a hardcoded values like the one indicated below: Next, we then perform a mapping to a \u0026ldquo;phantom\u0026rdquo; field - one that doesn\u0026rsquo;t strictly exist on the database table, but is exposed by the applications Web API to allow callers to tell the Common Data Service database which entity the incoming record is from: For more complex scenarios (such as where you need to map to both Contact and Account record types), you will have to resort to logic within your SQL query to generate the appropriate value for each row. The query below shows how you could do this, for when you are linking across your Contact/Account GUID values from separate tables within your database. The query makes an assumption around the table structure and also on the existence of an absolute reference (i.e. the JOINs would not return a match from both the Account AND Contact tables), but for illustrative purposes, should help in adapting it to your specific needs: [snippet id=\u0026ldquo;969\u0026rdquo;] Pretty easy right? The same logic will also work with Owner fields now as well, thereby allowing you to associate User or Team ownerships to records within the Common Data Service.\nIt\u0026rsquo;s always great to see when a company addresses a significant deficiency in an IT product. This is even more true when the issue relates to something that end-users are providing vocal feedback about. This example does prove the importance of getting your Azure feedback and suggestions logged on the User Voice portal so that the product team can get all the info they need to improve things further. Because let\u0026rsquo;s face it - some of us may spend many hours of each day using and getting frustrated over the platform for no good reason, so anything that can alleviate this is a good thing. Now that Azure Data Factory has this capability enabled, it does focus additional attention on how effective the product can be as part of one-off data migrations and continuous integrations, a topic which I have been harping on about for some time now. If you\u0026rsquo;re doing serious work involving tools such as SSIS or others as part of your Dynamics 365 / Common Data Service migrations or integrations, now is the time to start paying attention to Azure Data Factory. Don\u0026rsquo;t say I didn\u0026rsquo;t warn you\u0026hellip; 😏\n","date":"2020-06-21T00:00:00Z","image":"/images/ADF-FI.png","permalink":"/working-with-common-data-service-owner-customer-fields-in-azure-data-factory-v2/","title":"Working with Common Data Service Owner \u0026 Customer Fields in Azure Data Factory V2"},{"content":"Welcome to the tenth post in my series focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. Last time around in the series, we took a deep-dive look into Power Apps Component Framework (PCF) controls and how they could be used to extend the user interface within Power Apps. PCF controls work great for situations where we cannot utilise JavaScript form functions, or a canvas Power App to provide a more bespoke, intuitive user experience with model-driven apps. 
As useful as these control types are, they do not allow us to modify the behaviour of the various ribbon buttons throughout the application, an example of which can be seen below for the Lead entity:\nIn most cases, these provide us with a range of useful functionality that is best left unmodified. However, there will be circumstances where you may need to add or remove ribbon buttons or tailor the behaviour of an existing button to perform a different action. All of these tasks fall neatly into the Create a command button function area of the MB-400 exam, where Microsoft expects candidates to demonstrate knowledge of the following topics:\nCreate a command button function\ncreate the command function design command button triggers, rules, and actions edit the command bar using the Ribbon Workbench modify the form JavaScript library dependencies By using the right tools, it is pretty straightforward to perform simple or even complex amends to ribbon command buttons. So let\u0026rsquo;s dive and see what\u0026rsquo;s involved!\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nRibbon Overview We\u0026rsquo;ve already touched upon what a command button is as part of this posts intro. Before we dive in any further, though, it is important first to explain them within the context of the ribbon. This feature has been a mainstay for the application over many years. Both then and now, it provides us with the means of changing how the various buttons within a model-driven app display and also operate when users interact with them. The application displays different ribbons within multiple areas, including:\nEntity record forms Views and Subgrids Specific areas that rely on the \u0026ldquo;classic\u0026rdquo; interface Within the Dynamics 365 for Outlook App Dynamics 365 utilises a single ribbon definition per entity, defined as an XML document within the application. 
You can see how a condensed example looks for the Account entity by default below; you can download the full definitions for all out of the box entities from the Microsoft Docs website:\n\u0026lt;RibbonDefinitions\u0026gt; \u0026lt;RibbonDefinition\u0026gt; \u0026lt;UI\u0026gt; \u0026lt;Ribbon\u0026gt; \u0026lt;Tabs Id=\u0026#34;Mscrm.Tabs\u0026#34;\u0026gt; \u0026lt;Tab Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.account.MainTab\u0026#34; Title=\u0026#34;Accounts\u0026#34; Description=\u0026#34;Accounts\u0026#34; Sequence=\u0026#34;100\u0026#34;\u0026gt; \u0026lt;Scaling Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Scaling\u0026#34;\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management\u0026#34; Sequence=\u0026#34;10\u0026#34; Size=\u0026#34;LargeMediumLargeMedium\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate\u0026#34; Sequence=\u0026#34;20\u0026#34; Size=\u0026#34;LargeMediumLargeLarge\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions\u0026#34; Sequence=\u0026#34;30\u0026#34; Size=\u0026#34;LargeLargeMediumLarge\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData\u0026#34; Sequence=\u0026#34;40\u0026#34; Size=\u0026#34;LargeMediumLarge\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Workflow.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Workflow\u0026#34; Sequence=\u0026#34;50\u0026#34; Size=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Find.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Find\u0026#34; Sequence=\u0026#34;60\u0026#34; Size=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.OutlookHelp.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.OutlookHelp\u0026#34; Sequence=\u0026#34;61\u0026#34; Size=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData.Scale.1\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData\u0026#34; Sequence=\u0026#34;80\u0026#34; Size=\u0026#34;LargeSmallLarge\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Workflow.Scale.2\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Workflow\u0026#34; Sequence=\u0026#34;100\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions.Scale.1\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions\u0026#34; Sequence=\u0026#34;110\u0026#34; Size=\u0026#34;LargeMediumMediumLarge\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate.Scale.1\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate\u0026#34; Sequence=\u0026#34;120\u0026#34; Size=\u0026#34;LargeSmallLargeSmall\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management.Scale.1\u0026#34; 
GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management\u0026#34; Sequence=\u0026#34;130\u0026#34; Size=\u0026#34;LargeMediumLargeMedium\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData.Scale.3\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData\u0026#34; Sequence=\u0026#34;140\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate.Scale.2\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate\u0026#34; Sequence=\u0026#34;150\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions.Scale.2\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions\u0026#34; Sequence=\u0026#34;160\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management.Scale.2\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management\u0026#34; Sequence=\u0026#34;170\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;/Scaling\u0026gt; \u0026lt;Groups Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Groups\u0026#34;\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;10\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management\u0026#34; Description=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/newrecord32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.FourOverflow\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Management.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.NewRecord\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.New\u0026#34; Command=\u0026#34;Mscrm.NewRecordFromGrid\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/New_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/newrecord32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;New\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.NewRecordForBPFEntity\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.New\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.NewRecordForBPFEntity\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.New\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/NewRecord_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/newrecord32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;New\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Edit\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Edit\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.Edit\u0026#34; Command=\u0026#34;Mscrm.EditSelectedRecord\u0026#34; Sequence=\u0026#34;20\u0026#34; 
LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Edit\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Edit\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Edit_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/edit32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;Edit\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Activate\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Activate\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Activate\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.Activate\u0026#34; Sequence=\u0026#34;30\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Activate\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Activate\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Activate_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Activate_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;Activate\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Deactivate\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Deactivate\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Deactivate\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.Deactivate\u0026#34; Sequence=\u0026#34;40\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Deactivate\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Status.Deactivate\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Deactivate_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Deactivate_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;DeActivate\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.OpenActiveStage\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_Form_Other_MainTab_OpenActiveStage_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Form.Tooltip.OpenActiveStage\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.OpenActiveStage\u0026#34; Sequence=\u0026#34;50\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Form.MainTab.OpenActiveStage\u0026#34; Alt=\u0026#34;$Resources:Ribbon.Form.MainTab.OpenActiveStage\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/formdesign16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/EditForm_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;FormDesign\u0026#34; /\u0026gt; \u0026lt;SplitButton Id=\u0026#34;Mscrm.HomepageGrid.account.DeleteMenu\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_Management_Delete_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.HomepageGrid.Tooltip.Delete\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.DeleteSplitButtonCommand\u0026#34; Sequence=\u0026#34;50\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Delete\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Delete\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Delete_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Workplace/remove_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;Remove\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.HomepageGrid.account.DeleteMenu.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection 
Id=\u0026#34;Mscrm.HomepageGrid.account.DeleteMenu.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.DeleteMenu.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Delete\u0026#34; Command=\u0026#34;Mscrm.DeleteSelectedRecord\u0026#34; Sequence=\u0026#34;50\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_Management_Delete_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.HomepageGrid.Tooltip.Delete\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Delete\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.Delete\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Delete_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Workplace/remove_32.png\u0026#34; ModernImage=\u0026#34;Remove\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.BulkDelete\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.BulkDelete\u0026#34; Sequence=\u0026#34;100\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.BulkDelete\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.BulkDelete\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Management.BulkDelete.TooltipDescription\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/BulkDelete_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/BulkDeleteWizard_32.png\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/SplitButton\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.MergeRecords\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.Merge\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.account.MergeRecords\u0026#34; Sequence=\u0026#34;59\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Merge.MergeRecords\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/MergeRecords_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/MergeRecords_32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ModernImage=\u0026#34;MergeRecords\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.Detect\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.DetectDuplicates\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.DetectDupes\u0026#34; Sequence=\u0026#34;60\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/DetectDuplicates_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/DuplicateDetection_32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.HomepageGrid.account.Detect.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.HomepageGrid.account.Detect.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.Detect.Controls\u0026#34;\u0026gt; \u0026lt;Button 
Id=\u0026#34;Mscrm.HomepageGrid.account.Detect.Selected\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.DetectDupesSelected\u0026#34; Sequence=\u0026#34;10\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.Selected\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.Selected\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/DeleteSelected_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/DeleteSelected_32.png\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_Management_Detect_Selected_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_Management_Detect_Selected_ToolTipDescription\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Detect.All\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.DetectDupesAll\u0026#34; Sequence=\u0026#34;20\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.All\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.account.Record.Dupe.Detect.All\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/DetectAll_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/DetectAll_32.png\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_Management_Detect_All_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Mscrm_HomepageGrid_EntityLogicalName_MainTab_Management_Detect_All_ToolTipDescription\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/FlyoutAnchor\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.ChangeDataSetControlButton\u0026#34; ToolTipTitle=\u0026#34;$Resources:MobileClient.Commands.ChangeControl\u0026#34; ToolTipDescription=\u0026#34;$Resources:WebClient.Commands.ChangeControl.Description\u0026#34; Command=\u0026#34;Mscrm.ChangeControlCommand\u0026#34; Sequence=\u0026#34;25\u0026#34; LabelText=\u0026#34;$Resources:MobileClient.Commands.ChangeControl\u0026#34; Alt=\u0026#34;$Resources:WebClient.Commands.ChangeControl.Description\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/SendView_16.png\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.DynamicMenu.ChangeControlCommand\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Alt=\u0026#34;$LocLabels:GuidedHelp.Alt\u0026#34; Command=\u0026#34;loadGuidedHelp\u0026#34; Description=\u0026#34;Learning Path\u0026#34; Id=\u0026#34;GuidedHelpaccount.Grid\u0026#34; LabelText=\u0026#34;$LocLabels:GuidedHelp.LabelText\u0026#34; Sequence=\u0026#34;70\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:GuidedHelp.ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:GuidedHelp.ToolTipDescription\u0026#34; /\u0026gt; \u0026lt;Button Alt=\u0026#34;$LocLabels:LPLibrary.Alt\u0026#34; Command=\u0026#34;launchLPLibrary\u0026#34; Description=\u0026#34;Learning Path Library\u0026#34; Id=\u0026#34;LPLibraryaccount.Grid\u0026#34; LabelText=\u0026#34;$LocLabels:LPLibrary.LabelText\u0026#34; Sequence=\u0026#34;80\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:LPLibrary.ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:LPLibrary.ToolTipDescription\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ModernClient\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; 
Sequence=\u0026#34;11\u0026#34; Template=\u0026#34;Mscrm.Templates.3\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ModernClient.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.RefreshModernButton\u0026#34; ToolTipTitle=\u0026#34;$Resources:MobileClient.Commands.Refresh\u0026#34; Command=\u0026#34;Mscrm.Modern.refreshCommand\u0026#34; ModernCommandType=\u0026#34;ControlCommand\u0026#34; Sequence=\u0026#34;17\u0026#34; LabelText=\u0026#34;$Resources:MobileClient.Commands.Refresh\u0026#34; ModernImage=\u0026#34;Refresh\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.NavigateToHomepageGrid\u0026#34; ToolTipTitle=\u0026#34;$Resources:OpenAllRecordsViewImageButtonText\u0026#34; ToolTipDescription=\u0026#34;$Resources:OpenAllRecordsViewImageButtonToolTip\u0026#34; Command=\u0026#34;Mscrm.NavigateToHomepageGrid\u0026#34; Sequence=\u0026#34;18\u0026#34; LabelText=\u0026#34;$Resources:OpenAllRecordsViewImageButtonText\u0026#34; ModernImage=\u0026#34;TableGroup\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.ActionButtonForMSTeams\u0026#34; Command=\u0026#34;Mscrm.HomePageGrid.MSTeamsViewCollaborateCommand\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:OfficeProductivity.MSTeamsToolTip\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:OfficeProductivity.MSTeamsToolTip\u0026#34; LabelText=\u0026#34;$LocLabels:OfficeProductivity.MSTeams\u0026#34; Alt=\u0026#34;$LocLabels:OfficeProductivity.MSTeams\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; Sequence=\u0026#34;1028\u0026#34; ModernImage=\u0026#34;MSTeamsIcon\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;20\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible4\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/Actions_32.png\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Actions.Controls\u0026#34;\u0026gt; \u0026lt;Button Sequence=\u0026#34;10\u0026#34; Id=\u0026#34;msdyn.HomepageGrid.account.LaunchPlaybook.Button\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;$webresource:Playbook/msdyn_/Images/SVG/PlaybookInstanceIcon.svg\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.Form.LaunchPlaybook.Button.LabelText\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.Form.LaunchPlaybook.Button.LabelText\u0026#34; Command=\u0026#34;Playbook.HomepageGrid.Launch\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.Form.LaunchPlaybook.Button.LabelText\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:Ribbon.ToolTip.LaunchPlabyook\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.ViewOrgChart\u0026#34; Command=\u0026#34;LinkedInExtensions.ViewOrgChartForGrid\u0026#34; Sequence=\u0026#34;52\u0026#34; Alt=\u0026#34;$LocLabels:Mscrm.Form.account.ViewOrgChart\u0026#34; LabelText=\u0026#34;$LocLabels:Mscrm.Form.account.ViewOrgChart\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Mscrm.Form.account.ViewOrgChart.ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:Mscrm.Form.account.ViewOrgChart.ToolTipDesc\u0026#34; ModernImage=\u0026#34;Drilldown\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group 
Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;30\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Collaborate\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/Assign_32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible4\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Collaborate.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.SendDirectEmail\u0026#34; Command=\u0026#34;Mscrm.AddEmailToSelectedRecord\u0026#34; Sequence=\u0026#34;10\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail.ToolTip\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.DirectEmail\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AddEmail_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Email_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;EmailLink\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.modern.SendDirectEmail\u0026#34; Command=\u0026#34;Mscrm.modern.AddEmailToSelectedRecord\u0026#34; Sequence=\u0026#34;10\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail.ToolTip\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.DirectEmail\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.SendDirectEmail\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AddEmail_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Email_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;EmailLink\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.AddToList\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Ribbon.HomepageGrid.account.Add.AddToList\u0026#34; ToolTipDescription=\u0026#34;$LocLabels(EntityDisplayName):Ribbon.Tooltip.AddToMarketingList\u0026#34; Command=\u0026#34;Mscrm.AddSelectedToMarketingList\u0026#34; Sequence=\u0026#34;11\u0026#34; Alt=\u0026#34;$LocLabels:Ribbon.HomepageGrid.account.Add.AddToList\u0026#34; LabelText=\u0026#34;$LocLabels:Ribbon.HomepageGrid.account.Add.AddToList\u0026#34; Image16by16=\u0026#34;$webresource:Marketing/_images/ribbon/AddToMarketingList_16.png\u0026#34; Image32by32=\u0026#34;$webresource:Marketing/_images/ribbon/AddToMarketingList_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;BulletListAdd\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Assign\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Assign\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Assign\u0026#34; Command=\u0026#34;Mscrm.AssignSelectedRecord\u0026#34; Sequence=\u0026#34;40\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Assign\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Assign\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Assign_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Assign_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;Assign\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Sharing\u0026#34; 
ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Sharing\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Ribbon.Tooltip.Share\u0026#34; Command=\u0026#34;Mscrm.ShareSelectedRecord\u0026#34; Sequence=\u0026#34;50\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Sharing\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.Sharing\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Share_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Sharing_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;Share\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.ViewHierarchy\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.ViewHierarchy\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_MainTab_Actions_ViewHierarchy_ToolTipDescription\u0026#34; Command=\u0026#34;Mscrm.ViewHierarchyForSelectedRecord\u0026#34; Sequence=\u0026#34;55\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.ViewHierarchy\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.ViewHierarchy\u0026#34; Image16by16=\u0026#34;/_imgs/Hierarchy.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Hierarchy_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;ViewHierarchy\u0026#34; /\u0026gt; \u0026lt;SplitButton Id=\u0026#34;Mscrm.HomepageGrid.account.Copy\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Copy\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_ExportData_Copy_ToolTipDescription\u0026#34; Command=\u0026#34;Mscrm.CopyShortcutSelected.EnabledInIEBrowser\u0026#34; Sequence=\u0026#34;60\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Copy\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Copy_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Copy_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.HomepageGrid.account.Copy.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.HomepageGrid.account.Copy.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.Copy.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Copy.Selected\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Selected\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_ExportData_Copy_Selected_ToolTipDescription\u0026#34; Command=\u0026#34;Mscrm.CopyShortcutSelected\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Selected\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/copyshortcut16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/copyshortcut32.png\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Copy.View\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.View\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.CopyShortcut_View\u0026#34; Command=\u0026#34;Mscrm.CopyShortcutView\u0026#34; Sequence=\u0026#34;20\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.View\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/CopyView_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/CopyView_32.png\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; 
\u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/SplitButton\u0026gt; \u0026lt;SplitButton Id=\u0026#34;Mscrm.HomepageGrid.account.Send\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Send\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_MainTab_ExportData_Send_ToolTipDescription\u0026#34; Command=\u0026#34;Mscrm.SendShortcutSelected.AlwaysEnabled\u0026#34; Sequence=\u0026#34;61\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Send\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/EmailLink_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/SendShortcut_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;EmailLink\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.HomepageGrid.account.Send.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.HomepageGrid.account.Send.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.Send.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Send.Selected\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Selected\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.SendShortcut\u0026#34; Command=\u0026#34;Mscrm.SendShortcutSelected\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.Selected\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/EmailLink_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/SendShortcut_32.png\u0026#34; ModernImage=\u0026#34;EmailLink\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Send.View\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.View\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.SendShortcut_View\u0026#34; Command=\u0026#34;Mscrm.SendShortcutView\u0026#34; Sequence=\u0026#34;20\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Record.Shortcut.View\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/SendView_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/SendView_32.png\u0026#34; ModernImage=\u0026#34;EmailLink\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/SplitButton\u0026gt; \u0026lt;SplitButton Id=\u0026#34;Mscrm.HomepageGrid.account.AddConnection\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.Connection.Splitbutton.AddConnection.Label\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Connection.Splitbutton.AddConnection.Tooltip\u0026#34; Command=\u0026#34;Mscrm.AddConnectionGrid\u0026#34; Sequence=\u0026#34;70\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Connection.Splitbutton.AddConnection.Label\u0026#34; Alt=\u0026#34;$Resources:Ribbon.Connection.Splitbutton.AddConnection.Label\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AddConnection_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/AddConnection_32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ModernImage=\u0026#34;Connection\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.HomepageGrid.account.AddConnection.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.HomepageGrid.account.AddConnection.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.AddConnection.Controls\u0026#34;\u0026gt; 
\u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.AddConnectionNew\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.Connection.AddConnectionNew.Label\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Connection.AddConnectionNew.Tooltip\u0026#34; Command=\u0026#34;Mscrm.AddConnectionGrid\u0026#34; Sequence=\u0026#34;40\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Connection.AddConnectionNew.Label\u0026#34; Alt=\u0026#34;$Resources:Ribbon.Connection.AddConnectionNew.Label\u0026#34; ModernImage=\u0026#34;ConnectionToOther\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.AddConnectionToMe\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.Connection.AddConnectionToMe.Label\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Connection.AddConnectionToMe.Tooltip\u0026#34; Command=\u0026#34;Mscrm.AddConnectionToMeGrid\u0026#34; Sequence=\u0026#34;41\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Connection.AddConnectionToMe.Label\u0026#34; Alt=\u0026#34;$Resources:Ribbon.Connection.AddConnectionToMe.Label\u0026#34; ModernImage=\u0026#34;ConnectionToMe\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/SplitButton\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.AddToQueue\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.AddToQueue\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityPluralDisplayName):Mscrm_HomepageGrid_EntityLogicalName_MainTab_Actions_AddToQueue_ToolTipDescription\u0026#34; Command=\u0026#34;Mscrm.AddSelectedToQueue\u0026#34; Sequence=\u0026#34;80\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.AddToQueue\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Actions.AddToQueue\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AddToQueue_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/AddToQueue_32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ModernImage=\u0026#34;AddToQueue\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.FollowButton\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.FollowCommand\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:ActivityFeed.Follow.ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:ActivityFeed.Follow.ToolTipDescription\u0026#34; LabelText=\u0026#34;$LocLabels:ActivityFeed.Follow.LabelText\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Entity16_8003.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Entity32_8003.png\u0026#34; Sequence=\u0026#34;1000\u0026#34; ModernImage=\u0026#34;RatingEmpty\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.UnfollowButton\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.UnfollowCommand\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:ActivityFeed.Unfollow.ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:ActivityFeed.Unfollow.ToolTipDescription\u0026#34; LabelText=\u0026#34;$LocLabels:ActivityFeed.Unfollow.LabelText\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Entity16_8003_u.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Entity32_8003_u.png\u0026#34; Sequence=\u0026#34;1020\u0026#34; ModernImage=\u0026#34;RatingFull\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Workflow\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;40\u0026#34; 
Title=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/runworkflow32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.Workflow.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.RunWorkflow\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow.RunWorkflow\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.RunWorkflow\u0026#34; Command=\u0026#34;Mscrm.RunWorkflowSelected\u0026#34; Sequence=\u0026#34;40\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow.RunWorkflow\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow.RunWorkflow\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/StartWorkflow_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/runworkflow32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.RunScript\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.InteractiveWorkflow.RunScript\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.RunScript\u0026#34; Command=\u0026#34;Mscrm.RunInteractiveWorkflowSelected\u0026#34; Sequence=\u0026#34;50\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.InteractiveWorkflow.RunScript\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.InteractiveWorkflow.RunScript\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/startdialog_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/startdialog_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.Flows.RefreshCommandBar\u0026#34; Sequence=\u0026#34;60\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow.RunFlow\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.RunFlow\u0026#34; Command=\u0026#34;Mscrm.Form.Flows.ManageRunFlow\u0026#34; Image16by16=\u0026#34;/_imgs/Ribbon/OpenFlows_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Ribbon/OpenFlows_32.png\u0026#34; LabelText=\u0026#34;$Resources:RefreshCommandBar.Flows\u0026#34; Alt=\u0026#34;$Resources:RefreshCommandBar.Flows\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.DynamicMenu.Grid.Flows.PopulateMenu\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;Flows\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.Flows.RefreshCommandBar.Flows\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow.Flows\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.Flows\u0026#34; Sequence=\u0026#34;70\u0026#34; Command=\u0026#34;Mscrm.Form.Flows\u0026#34; Image16by16=\u0026#34;/_imgs/Ribbon/OpenFlows_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Ribbon/OpenFlows_32.png\u0026#34; LabelText=\u0026#34;$Resources:RefreshCommandBar.Flows\u0026#34; Alt=\u0026#34;$Resources:RefreshCommandBar.Flows\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;Flows\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.HomepageGrid.account.Flows.RefreshCommandBar.Flows.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.HomepageGrid.account.Flows.RefreshCommandBar.Flows.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; 
\u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.Flows.RefreshCommandBar.Flows.Controls\u0026#34;\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.Flows.RefreshCommandBar.ManageFlows\u0026#34; Sequence=\u0026#34;10\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow.ManageFlows\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.ManageFlows\u0026#34; Command=\u0026#34;Mscrm.Form.Flows\u0026#34; Image16by16=\u0026#34;/_imgs/Ribbon/OpenFlows_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Ribbon/OpenFlows_32.png\u0026#34; LabelText=\u0026#34;$Resources:RefreshCommandBar.ManageFlows\u0026#34; Alt=\u0026#34;$Resources:RefreshCommandBar.ManageFlows\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.DynamicMenu.Grid.Flows.PopulateStaticFlowMenu\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;Flows\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.Flows.RefreshCommandBar.RunFlow\u0026#34; Sequence=\u0026#34;20\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow.RunFlow\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.RunFlow\u0026#34; Command=\u0026#34;Mscrm.Form.Flows\u0026#34; Image16by16=\u0026#34;/_imgs/Ribbon/OpenFlows_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Ribbon/OpenFlows_32.png\u0026#34; LabelText=\u0026#34;$Resources:RefreshCommandBar.RunFlow\u0026#34; Alt=\u0026#34;$Resources:RefreshCommandBar.RunFlow\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.DynamicMenu.Grid.Flows.PopulateFlowMenu\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;Flows\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.Flows.RefreshCommandBar.RunWorkflow\u0026#34; Sequence=\u0026#34;30\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Workflow.RunWorkflow\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.RunWorkflow\u0026#34; Command=\u0026#34;Mscrm.Form.Flows.RunWorkflow\u0026#34; Image16by16=\u0026#34;/_imgs/Ribbon/OpenFlows_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/Ribbon/OpenFlows_32.png\u0026#34; LabelText=\u0026#34;$Resources:RefreshCommandBar.RunWorkflow\u0026#34; Alt=\u0026#34;$Resources:RefreshCommandBar.RunWorkflow\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.DynamicMenu.Grid.Flows.PopulateWorkFlowMenu\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;Flows\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/FlyoutAnchor\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;50\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.ExportData\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/runreport32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible3\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.ExportData.Controls\u0026#34;\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.RunReport\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Report.RunReport\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.RunReport\u0026#34; 
Command=\u0026#34;Mscrm.ReportMenu.Grid\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateOnlyOnce=\u0026#34;false\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.ReportsMenu.Populate.Grid\u0026#34; Sequence=\u0026#34;30\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Report.RunReport\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Report.RunReport\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/RunReport_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/runreport32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;Report\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.DocumentTemplate\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.DocumentTemplate.Templates\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.DocumentTemplate\u0026#34; Command=\u0026#34;Mscrm.DocumentTemplate.Templates\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateOnlyOnce=\u0026#34;false\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.DocumentTemplate.Populate.Flyout\u0026#34; Sequence=\u0026#34;35\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.DocumentTemplate.Templates\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.DocumentTemplate.Templates\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/DocumentTemplate_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/SaveAsExcelTemplate_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;DocumentTemplates\u0026#34; /\u0026gt; \u0026lt;FlyoutAnchor Id=\u0026#34;Mscrm.HomepageGrid.account.WordTemplate\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.WordTemplate.Templates\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.WordTemplate\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.WordTemplate\u0026#34; PopulateDynamically=\u0026#34;true\u0026#34; PopulateOnlyOnce=\u0026#34;false\u0026#34; PopulateQueryCommand=\u0026#34;Mscrm.HomepageGrid.WordTemplate.Populate.Flyout\u0026#34; Sequence=\u0026#34;36\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.WordTemplate.Templates\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.WordTemplate.Templates\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/WordTemplate_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/SaveAsWordTemplate_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ModernImage=\u0026#34;WordTemplates\u0026#34; /\u0026gt; \u0026lt;SplitButton Id=\u0026#34;Mscrm.HomepageGrid.account.ExportToExcel\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportToExcel\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ExportToExcel\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel\u0026#34; Sequence=\u0026#34;40\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportToExcel\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportToExcel\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.HomepageGrid.account.ExportToExcel.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.HomepageGrid.account.ExportToExcel.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls 
Id=\u0026#34;Mscrm.HomepageGrid.account.ExportToExcel.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.ExportToExcelOnline\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportToExcelOnline\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ExportToExcelOnline\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.Online\u0026#34; Sequence=\u0026#34;40\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportToExcelOnline\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportToExcelOnline\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.StaticWorksheetAll\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExportAll\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.StaticExcelExportAll\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.AllStaticXlsx\u0026#34; Sequence=\u0026#34;41\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExportAll\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExportAll\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.StaticWorksheet\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExport\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.StaticExcelExport\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.StaticXlsx\u0026#34; Sequence=\u0026#34;42\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExport\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExport\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.DynamicWorkesheet\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicExcelExport\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.DynamicExcelExport\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.DynamicXlsx\u0026#34; Sequence=\u0026#34;43\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicExcelExport\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicExcelExport\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.DynamicPivotTable\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicPivotTable\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.DynamicPivotTable\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.PivotXlsx\u0026#34; Sequence=\u0026#34;44\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicPivotTable\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicPivotTable\u0026#34; 
Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/SplitButton\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.ExportSelectedToExcel\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportSelectedToExcel\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ExportSelectedToExcel\u0026#34; Command=\u0026#34;Mscrm.ExportSelectedToExcel\u0026#34; Sequence=\u0026#34;230\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportSelectedToExcel\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportSelectedToExcel\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;SplitButton Id=\u0026#34;Mscrm.HomepageGrid.account.ImportDataFromExcel\u0026#34; Command=\u0026#34;Mscrm.ImportDataFromExcel\u0026#34; Sequence=\u0026#34;21\u0026#34; LabelText=\u0026#34;$Resources:MobileClient.Commands.ImportFromExcel\u0026#34; ToolTipTitle=\u0026#34;$Resources:MobileClient.Commands.ImportFromExcel\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ImportFromExcel\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.HomepageGrid.account.ImportDataFromExcel.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.HomepageGrid.account.ImportDataFromExcel.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.ImportDataFromExcel.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.ImportDataFromCSV\u0026#34; Command=\u0026#34;Mscrm.ImportDataFromCSV\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:MobileClient.Commands.ImportFromCSV\u0026#34; ToolTipTitle=\u0026#34;$Resources:MobileClient.Commands.ImportFromCSV\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ImportFromCSV\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/SplitButton\u0026gt; \u0026lt;SplitButton Id=\u0026#34;Mscrm.HomepageGrid.account.Import\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_BasicHomeTab_Tools_ImportData_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ImportDataSplitButton\u0026#34; Command=\u0026#34;Mscrm.ImportDataSplitButton\u0026#34; Sequence=\u0026#34;50\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Jewel.ImportData\u0026#34; Alt=\u0026#34;$Resources:Ribbon.Jewel.ImportData\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Import16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/importdata32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34;\u0026gt; \u0026lt;Menu Id=\u0026#34;Mscrm.HomepageGrid.account.Import.Menu\u0026#34;\u0026gt; \u0026lt;MenuSection Id=\u0026#34;Mscrm.HomepageGrid.account.Import.MenuSection\u0026#34; Sequence=\u0026#34;10\u0026#34; DisplayMode=\u0026#34;Menu16\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.Import.Controls\u0026#34;\u0026gt; \u0026lt;Button 
Id=\u0026#34;Mscrm.HomepageGrid.account.ImportData\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_BasicHomeTab_Tools_ImportData_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_BasicHomeTab_Tools_ImportData_ToolTipDescription\u0026#34; Command=\u0026#34;Mscrm.ImportData\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Jewel.ImportData\u0026#34; Alt=\u0026#34;$Resources:Ribbon.Jewel.ImportData\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/ImportData_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/importdata32.png\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.ExportTemplate\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportDataTemplate\u0026#34; ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.Tooltip.ExportDataTemplate\u0026#34; Command=\u0026#34;Mscrm.ExportDataTemplate\u0026#34; Sequence=\u0026#34;20\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportDataTemplate\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportDataTemplate\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/ExportTemplate_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/ExportTemplate_32.png\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/SplitButton\u0026gt; \u0026lt;ToggleButton Id=\u0026#34;Mscrm.HomepageGrid.account.MainFilters\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.Filters\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.Filters\u0026#34; Command=\u0026#34;Mscrm.Filters\u0026#34; QueryCommand=\u0026#34;Mscrm.Filters.Query\u0026#34; Sequence=\u0026#34;60\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.Filters\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.FiltersToolTip\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/filter16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/filter32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.AdvancedFind\u0026#34; Command=\u0026#34;Mscrm.OpenGridAdvancedFind\u0026#34; Sequence=\u0026#34;70\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Find.AdvancedFind\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Find.AdvancedFind\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.HomepageGrid.AdvancedFind.TooltipDescription\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.Find.AdvancedFind\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/AdvancedFind_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/advancedfind32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Meqf\u0026#34; Command=\u0026#34;Mscrm.OpenMultipleEntityQuickFindSearch\u0026#34; Sequence=\u0026#34;80\u0026#34; LabelText=\u0026#34;$Resources:Search_LaunchButton_Tooltip\u0026#34; ToolTipTitle=\u0026#34;$Resources:Search_LaunchButton_Tooltip\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.HomepageGrid.MultipleEntityQuickFind.TooltipDescription\u0026#34; Alt=\u0026#34;$Resources:Search_LaunchButton_Tooltip\u0026#34; Image16by16=\u0026#34;/_imgs/search_normal.gif\u0026#34; Image32by32=\u0026#34;/_imgs/search_normal.gif\u0026#34; TemplateAlias=\u0026#34;o4\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group 
Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.OutlookHelp\u0026#34; Command=\u0026#34;Mscrm.OutlookHelp\u0026#34; Sequence=\u0026#34;70\u0026#34; Title=\u0026#34;$Resources:Ribbon.Jewel.HelpMenu\u0026#34; Template=\u0026#34;Mscrm.Templates.3\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.MainTab.OutlookHelp.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.Help\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.Jewel.HelpMenu\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_Jewel_Help_Flyout_ToolTipDescription\u0026#34; Command=\u0026#34;Mscrm.OutlookHelp\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.Jewel.HelpMenu\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Help_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Help_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;/Groups\u0026gt; \u0026lt;/Tab\u0026gt; \u0026lt;Tab Id=\u0026#34;Mscrm.HomepageGrid.account.View\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.account.View\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.View.TabName\u0026#34; Description=\u0026#34;$Resources:Ribbon.HomepageGrid.View.TabName\u0026#34; Sequence=\u0026#34;110\u0026#34;\u0026gt; \u0026lt;Scaling Id=\u0026#34;Mscrm.HomepageGrid.account.View.Scaling\u0026#34;\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.View.Grid.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.View.Grid\u0026#34; Sequence=\u0026#34;10\u0026#34; Size=\u0026#34;LargeLarge\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.account.View.Refresh.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.View.Refresh\u0026#34; Sequence=\u0026#34;20\u0026#34; Size=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.View.Grid.Scale.1\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.View.Grid\u0026#34; Sequence=\u0026#34;30\u0026#34; Size=\u0026#34;LargeMedium\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.View.Grid.Scale.2\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.View.Grid\u0026#34; Sequence=\u0026#34;40\u0026#34; Size=\u0026#34;LargeSmall\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.View.Grid.Scale.3\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.View.Grid\u0026#34; Sequence=\u0026#34;50\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.account.View.Refresh.Scale.1\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.account.View.Refresh\u0026#34; Sequence=\u0026#34;60\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;/Scaling\u0026gt; \u0026lt;Groups Id=\u0026#34;Mscrm.HomepageGrid.account.View.Groups\u0026#34;\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.account.View.Grid\u0026#34; Command=\u0026#34;Mscrm.FiltersGroup\u0026#34; Sequence=\u0026#34;11\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Grid\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/setasdefaultview32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible2\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.View.Grid.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.SaveAsDefaultGridView\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_View_Filters_SaveAsDefaultGridView_ToolTipTitle\u0026#34; 
ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.SaveAsDefaultGridView\u0026#34; Command=\u0026#34;Mscrm.SaveAsDefaultGridView\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.SaveAsDefaultGridView\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.SaveAsDefaultGridViewToolTip\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/SaveViewAsDefault_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/setasdefaultview32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.CustomizePreviewPane\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.CustomizePreviewPane\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.CustomizePreviewPane\u0026#34; Command=\u0026#34;Mscrm.CustomizePreviewPane\u0026#34; Sequence=\u0026#34;21\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.CustomizePreviewPane\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/CustomPreviewPane_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/CustomPreviewPane_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;ToggleButton Id=\u0026#34;Mscrm.HomepageGrid.account.ViewFilters\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Data.Filters\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.Filters\u0026#34; Command=\u0026#34;Mscrm.Filters\u0026#34; QueryCommand=\u0026#34;Mscrm.Filters.Query\u0026#34; Sequence=\u0026#34;23\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Data.Filters\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Grid.FiltersToolTip\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/filter16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/filter32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.SaveToCurrent\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_View_Filters_SaveToCurrent_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.SaveFiltersToCurrentView\u0026#34; Command=\u0026#34;Mscrm.SaveToCurrentView\u0026#34; Sequence=\u0026#34;27\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.SaveToCurrent\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Filters.SaveToCurrentToolTip\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/savefilters16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/savefilters32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.SaveAsNew\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Grid.SaveAsNew\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.SaveFiltersToNewView\u0026#34; Command=\u0026#34;Mscrm.SaveAsNewView\u0026#34; Sequence=\u0026#34;30\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Grid.SaveAsNew\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Grid.SaveAsNewToolTip\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/SaveFiltersAsNewView_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/savefiltersasview32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.NewView\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Grid.NewViewTooltip\u0026#34; 
ToolTipDescription=\u0026#34;$Resources(EntityDisplayName):Ribbon.HomepageGrid.View.Grid.NewViewTooltipDescription\u0026#34; Command=\u0026#34;Mscrm.NewPersonalView\u0026#34; Sequence=\u0026#34;40\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Grid.NewView\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Grid.NewView\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/NewView_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/NewView_32.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.account.View.Refresh\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;30\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.ViewGroup\u0026#34; Description=\u0026#34;$Resources:Ribbon.HomepageGrid.MainTab.ViewGroup\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/Refresh_32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.3\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.account.View.Refresh.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.RefreshButton\u0026#34; Command=\u0026#34;Mscrm.RefreshGrid\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Grid.Refresh\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.View.Grid.Refresh\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/Refresh16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Refresh_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; ToolTipTitle=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_View_Grid_Refresh_ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources:Mscrm_HomepageGrid_Other_View_Grid_Refresh_ToolTipDescription\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;/Groups\u0026gt; \u0026lt;/Tab\u0026gt; \u0026lt;/Tabs\u0026gt; \u0026lt;ContextualTabs Id=\u0026#34;Mscrm.ContextualTabs\u0026#34;\u0026gt; \u0026lt;ContextualGroup Id=\u0026#34;Mscrm.VisualizationTools\u0026#34; Command=\u0026#34;Mscrm.VisualizationTools.Command\u0026#34; Color=\u0026#34;Orange\u0026#34; ContextualGroupId=\u0026#34;Mscrm.VisualizationTools\u0026#34; Title=\u0026#34;$Resources:Ribbon.VisualizationTools.FlareHeading\u0026#34; Sequence=\u0026#34;1000\u0026#34;\u0026gt; \u0026lt;Tab Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab\u0026#34; Command=\u0026#34;Mscrm.VisualizationTab.Command\u0026#34; Description=\u0026#34;$Resources:Ribbon.VisualizationTab.Description\u0026#34; Title=\u0026#34;$Resources:Ribbon.VisualizationTab.TabHeading\u0026#34; Sequence=\u0026#34;10\u0026#34;\u0026gt; \u0026lt;Scaling Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Scaling\u0026#34;\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Save.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Save\u0026#34; Sequence=\u0026#34;10\u0026#34; Size=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Charts.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Charts\u0026#34; Sequence=\u0026#34;20\u0026#34; Size=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.TopBottom.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.TopBottom\u0026#34; Sequence=\u0026#34;30\u0026#34; 
Size=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;MaxSize Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Close.MaxSize\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Close\u0026#34; Sequence=\u0026#34;40\u0026#34; Size=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Charts.Medium\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Charts\u0026#34; Sequence=\u0026#34;50\u0026#34; Size=\u0026#34;MediumMedium\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Save.Medium\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Save\u0026#34; Sequence=\u0026#34;60\u0026#34; Size=\u0026#34;Medium\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.TopBottom.Medium\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.TopBottom\u0026#34; Sequence=\u0026#34;70\u0026#34; Size=\u0026#34;Medium\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Charts.Popup\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Charts\u0026#34; Sequence=\u0026#34;80\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Save.Popup\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Save\u0026#34; Sequence=\u0026#34;90\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;Scale Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.TopBottom.Popup\u0026#34; GroupId=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.TopBottom\u0026#34; Sequence=\u0026#34;100\u0026#34; Size=\u0026#34;Popup\u0026#34; /\u0026gt; \u0026lt;/Scaling\u0026gt; \u0026lt;Groups Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Groups\u0026#34;\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Save\u0026#34; Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;20\u0026#34; Description=\u0026#34;$Resources:Ribbon.VisualizationTab.Save.Description\u0026#34; Title=\u0026#34;$Resources:Ribbon.VisualizationTab.Save.Title\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/Save_32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.OneLargeTwoMedium\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Save.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Save.Save\u0026#34; Command=\u0026#34;Mscrm.VisualizationTab.SaveChart\u0026#34; Sequence=\u0026#34;10\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.VisualizationTab.Save.Save.Label\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.VisualizationTab.Save.Save.ToolTipTitle\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.VisualizationTab.Save.Save.ToolTipDescription\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/savechart16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/Save_32.png\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.AllEntities.VisualizationTab.Save.SaveAndClose\u0026#34; Command=\u0026#34;Mscrm.VisualizationTab.SaveAndCloseChart\u0026#34; Sequence=\u0026#34;20\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.VisualizationTab.Save.SaveAndClose.Label\u0026#34; 
(Exported account ribbon XML continues: the chart designer contextual tab's Save group (Save, Save &amp; Close, Copy, Expand), its Charts group (Column, Bar and Area flyouts with stacked and 100% stacked variants, plus Line, Pie and Funnel), the Top/Bottom rules group (Top 3/5/X, Bottom 3/5/X, Clear) and the Close group; then the account sub-grid contextual tab (Mscrm.SubGrid.account.MainTab) with its scaling rules and its Management, Actions, Collaborate, Filters, View/Layout, Workflow and Export Data groups, including buttons such as Set Regarding, New, Add Existing, Edit, Activate, Deactivate, Delete, Bulk Delete, Merge, Detect Duplicates, Quick Campaign, Add Connection, Add to Queue, Assign, Share, Copy/Email a Link, Follow/Unfollow, Run Workflow, Run Report, Document/Word/Excel Templates and Export to Excel.)
LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExportAll\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExportAll\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.StaticWorksheet\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExport\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.StaticExcelExport\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.StaticXlsx\u0026#34; Sequence=\u0026#34;42\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExport\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.StaticExcelExport\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.DynamicWorkesheet\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicExcelExport\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.DynamicExcelExport\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.DynamicXlsx\u0026#34; Sequence=\u0026#34;43\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicExcelExport\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicExcelExport\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.HomepageGrid.account.DynamicPivotTable\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicPivotTable\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.DynamicPivotTable\u0026#34; Command=\u0026#34;Mscrm.ExportToExcel.PivotXlsx\u0026#34; Sequence=\u0026#34;44\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicPivotTable\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.DynamicPivotTable\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/MenuSection\u0026gt; \u0026lt;/Menu\u0026gt; \u0026lt;/SplitButton\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.ExportSelectedToExcel\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportSelectedToExcel\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.Tooltip.ExportSelectedToExcel\u0026#34; Command=\u0026#34;Mscrm.ExportSelectedToExcel\u0026#34; Sequence=\u0026#34;230\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportSelectedToExcel\u0026#34; Alt=\u0026#34;$Resources:Ribbon.HomepageGrid.Data.Export.ExportSelectedToExcel\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/exporttoexcel16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/exporttoexcel32.png\u0026#34; TemplateAlias=\u0026#34;o3\u0026#34; ModernImage=\u0026#34;ExportToExcel\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;Group Id=\u0026#34;Mscrm.SubGrid.account.MainTab.FolderTracking\u0026#34; 
Command=\u0026#34;Mscrm.Enabled\u0026#34; Sequence=\u0026#34;80\u0026#34; Title=\u0026#34;$Resources:Ribbon.HomepageGrid.FolderTracking\u0026#34; Image32by32Popup=\u0026#34;/_imgs/ribbon/runreport32.png\u0026#34; Template=\u0026#34;Mscrm.Templates.Flexible3\u0026#34;\u0026gt; \u0026lt;Controls Id=\u0026#34;Mscrm.SubGrid.account.FolderTracking.Controls\u0026#34;\u0026gt; \u0026lt;Button Id=\u0026#34;Mscrm.SubGrid.account.FolderTracking\u0026#34; Command=\u0026#34;Mscrm.HomepageGrid.FolderTracking\u0026#34; Sequence=\u0026#34;100\u0026#34; LabelText=\u0026#34;$Resources:Ribbon.HomepageGrid.FolderTracking\u0026#34; ToolTipTitle=\u0026#34;$Resources:Ribbon.HomepageGrid.FolderTracking\u0026#34; ToolTipDescription=\u0026#34;$Resources:Ribbon.HomepageGrid.FolderTracking.TooltipDescription\u0026#34; Image16by16=\u0026#34;/_imgs/ribbon/CRM_Activity_Command_FolderTracking_16.png\u0026#34; Image32by32=\u0026#34;/_imgs/ribbon/CRM_Activity_Command_FolderTracking_16.png\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; ModernImage=\u0026#34;FolderTrack\u0026#34; /\u0026gt; \u0026lt;/Controls\u0026gt; \u0026lt;/Group\u0026gt; \u0026lt;/Groups\u0026gt; \u0026lt;/Tab\u0026gt; \u0026lt;/ContextualGroup\u0026gt; \u0026lt;/ContextualTabs\u0026gt; \u0026lt;/Ribbon\u0026gt; \u0026lt;/UI\u0026gt; \u0026lt;Templates\u0026gt; \u0026lt;RibbonTemplates Id=\u0026#34;Mscrm.RibbonTemplates\u0026#34;\u0026gt; \u0026lt;GroupTemplate Id=\u0026#34;Mscrm.Templates.3\u0026#34;\u0026gt; \u0026lt;Layout Title=\u0026#34;Large\u0026#34;\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;OneRow\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; DisplayMode=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;OneRow\u0026#34; TemplateAlias=\u0026#34;isv\u0026#34; DisplayMode=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;/Layout\u0026gt; \u0026lt;Layout Title=\u0026#34;Medium\u0026#34;\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;ThreeRow\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; DisplayMode=\u0026#34;Medium\u0026#34; /\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;ThreeRow\u0026#34; TemplateAlias=\u0026#34;isv\u0026#34; DisplayMode=\u0026#34;Medium\u0026#34; /\u0026gt; \u0026lt;/Layout\u0026gt; \u0026lt;Layout Title=\u0026#34;Small\u0026#34;\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;ThreeRow\u0026#34; TemplateAlias=\u0026#34;o1\u0026#34; DisplayMode=\u0026#34;Small\u0026#34; /\u0026gt; \u0026lt;OverflowSection Type=\u0026#34;ThreeRow\u0026#34; TemplateAlias=\u0026#34;isv\u0026#34; DisplayMode=\u0026#34;Small\u0026#34; /\u0026gt; \u0026lt;/Layout\u0026gt; \u0026lt;Layout Title=\u0026#34;Popup\u0026#34; LayoutTitle=\u0026#34;Large\u0026#34; /\u0026gt; \u0026lt;/GroupTemplate\u0026gt; \u0026lt;/RibbonTemplates\u0026gt; \u0026lt;/Templates\u0026gt; \u0026lt;CommandDefinitions\u0026gt; \u0026lt;CommandDefinition Id=\u0026#34;Mscrm.HomepageGrid.account.MergeRecords\u0026#34;\u0026gt; \u0026lt;EnableRules\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.NotOffline\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.SelectionCountOneOrTwo\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.VisualizationPaneNotMaximized\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.ShowOnNonModernAndModernIfAllowed\u0026#34; /\u0026gt; \u0026lt;/EnableRules\u0026gt; \u0026lt;DisplayRules\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.HomepageGrid.account.MergeGroup\u0026#34; /\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.CanWriteAccount\u0026#34; /\u0026gt; \u0026lt;DisplayRule 
Id=\u0026#34;Mscrm.HybridDialogMergeEnabled\u0026#34; /\u0026gt; \u0026lt;/DisplayRules\u0026gt; \u0026lt;Actions\u0026gt; \u0026lt;JavaScriptFunction FunctionName=\u0026#34;XrmCore.Commands.Merge.mergeRecords\u0026#34; Library=\u0026#34;$webresource:Main_system_library.js\u0026#34;\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedControl\u0026#34; /\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedControlSelectedItemReferences\u0026#34; /\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedEntityTypeName\u0026#34; /\u0026gt; \u0026lt;/JavaScriptFunction\u0026gt; \u0026lt;/Actions\u0026gt; \u0026lt;/CommandDefinition\u0026gt; \u0026lt;CommandDefinition Id=\u0026#34;Mscrm.AddSelectedToMarketingList\u0026#34;\u0026gt; \u0026lt;EnableRules\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.NotOffline\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.SelectionCountAtLeastOne\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.VisualizationPaneNotMaximized\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.NotAListForm\u0026#34; /\u0026gt; \u0026lt;/EnableRules\u0026gt; \u0026lt;DisplayRules\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.AddSelectedToMarketingList\u0026#34; /\u0026gt; \u0026lt;/DisplayRules\u0026gt; \u0026lt;Actions\u0026gt; \u0026lt;JavaScriptFunction FunctionName=\u0026#34;Marketing.CommandActions.Instance.addToList\u0026#34; Library=\u0026#34;$webresource:Marketing/CommandActions/Marketing_CommandActions.js\u0026#34;\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedControl\u0026#34; /\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedControlSelectedItemReferences\u0026#34; /\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedEntityTypeCode\u0026#34; /\u0026gt; \u0026lt;/JavaScriptFunction\u0026gt; \u0026lt;/Actions\u0026gt; \u0026lt;/CommandDefinition\u0026gt; \u0026lt;CommandDefinition Id=\u0026#34;LinkedInExtensions.ViewOrgChartForGrid\u0026#34;\u0026gt; \u0026lt;EnableRules\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.SelectionCountExactlyOne\u0026#34; /\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.HideOnMobile\u0026#34; /\u0026gt; \u0026lt;/EnableRules\u0026gt; \u0026lt;DisplayRules\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.ShowOnlyOnModern\u0026#34; /\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.NotOffline\u0026#34; /\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.IsOrgChartFeatureEnabled\u0026#34; /\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.CanReadContact\u0026#34; /\u0026gt; \u0026lt;/DisplayRules\u0026gt; \u0026lt;Actions\u0026gt; \u0026lt;JavaScriptFunction FunctionName=\u0026#34;LinkedInExtensions.Account.Instance.ViewOrgChartFromGrid\u0026#34; Library=\u0026#34;$webresource:LinkedInExtensions/Account/LinkedInExtensions_Account.js\u0026#34;\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;SelectedControlSelectedItemReferences\u0026#34; /\u0026gt; \u0026lt;/JavaScriptFunction\u0026gt; \u0026lt;/Actions\u0026gt; \u0026lt;/CommandDefinition\u0026gt; \u0026lt;/CommandDefinitions\u0026gt; \u0026lt;RuleDefinitions\u0026gt; \u0026lt;DisplayRules\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.HomepageGrid.account.MergeGroup\u0026#34;\u0026gt; \u0026lt;MiscellaneousPrivilegeRule PrivilegeName=\u0026#34;Merge\u0026#34; /\u0026gt; \u0026lt;/DisplayRule\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.PrimaryEntityHasCampaignResponse\u0026#34;\u0026gt; \u0026lt;OrRule\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; 
EntityName=\u0026#34;account\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;contact\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;lead\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;incident\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;opportunity\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;quote\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;invoice\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;salesorder\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;Or\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;contract\u0026#34; /\u0026gt; \u0026lt;/Or\u0026gt; \u0026lt;/OrRule\u0026gt; \u0026lt;/DisplayRule\u0026gt; \u0026lt;/DisplayRules\u0026gt; \u0026lt;EnableRules\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.NotOffline\u0026#34;\u0026gt; \u0026lt;CrmOfflineAccessStateRule State=\u0026#34;Offline\u0026#34; InvertResult=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;/EnableRule\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.Form.{!EntityLogicalName}.Developer\u0026#34;\u0026gt; \u0026lt;EntityRule AppliesTo=\u0026#34;PrimaryEntity\u0026#34; EntityName=\u0026#34;{!EntityLogicalName}\u0026#34; /\u0026gt; \u0026lt;CustomRule FunctionName=\u0026#34;Mscrm.RibbonActions.formPageDeveloperTabEnableRule\u0026#34; Library=\u0026#34;/_static/_common/scripts/RibbonActions.js\u0026#34;\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;PrimaryControl\u0026#34; /\u0026gt; \u0026lt;/CustomRule\u0026gt; \u0026lt;/EnableRule\u0026gt; \u0026lt;/EnableRules\u0026gt; \u0026lt;/RuleDefinitions\u0026gt; \u0026lt;/RibbonDefinition\u0026gt; \u0026lt;/RibbonDefinitions\u0026gt; When developers modify the ribbon for an entity, changes are applied over the default ribbon(s), and will typically sit underneath the CustomAction or HideCustomAction nodes. Developers must determine which area of the application they wish to modify by ensuring they target their changes to one of the areas highlighted above. To help make this determination, refer to the details on this article.\nOnce modified, Ribbon changes will then take effect at the entity level and be carried forward as part of any corresponding solution updates you make. Therefore, from a deployment standpoint, including your modified entities within your solutions will be sufficient to roll out your ribbon changes to other environments.\nUnderstanding Command Buttons, Rules and Actions Within our Ribbon definitions sit the various command buttons that get rendered to end-users in the application. Model-driven apps include many of these by default, covering typical actions we want to occur against a record - such as saving, reassigning it or deleting it from the system. Many of these command buttons will be contextual and may only be visible if the user has specific privileges granted to them within the application. 
This behaviour can help in keeping the interface relevant and de-cluttered. As developers, we will typically work with the out of the box command buttons in two contexts, namely when we wish to:\nToggle the visibility of a specific button. Override or replace a default command button action. For all other situations, we can then look to set up our custom command buttons. We have various options available here to tailor how the button looks within the application - such as its title, display label across multiple languages, and its image. Once defined, we can then start to customise our new button further. There are two core concepts to grasp concerning this, so it is useful to understand (in detail) how they behave - namely, rules and actions.\nRules dictate the \u0026ldquo;state\u0026rdquo; of a button within the application. They can be used to define two specific aspects of that state - whether a command button is enabled (Enable Rules) and whether the button is visible to users (Display Rules). For both types, you can determine their behaviour based on:\nWhether the user is accessing the ribbon via the Unified / Classic interface, the tablet app or via legacy areas of the application. Whether the user is accessing the application via a desktop browser or through the Outlook client, including whether the Outlook client is in offline mode or whether the user is working with a specific version of the Outlook client. The name of the entity the user is currently working with. The state of the loaded form, e.g. Create, Read-Only etc. The user\u0026rsquo;s current entity privileges within the application. When working with subgrids, the number of records currently selected by the user. A value that is present on the currently loaded entity form. The contents of the current URL the user has navigated to. For example, based on the above, you could modify a button in the application to only become visible to users who have the privilege to Read or Update a related entity record. This set of rules will likely cover most of your requirements when changing the state and visibility of buttons within the application. For more complex needs, consider defining a Custom Rule using a JavaScript function instead.\nFinally, actions perform the desired behaviour for a command button during its OnClick event within the application. As part of this, developers can choose to either execute bespoke logic via a JavaScript function or open a specific URL. This second option is great for when you need to link to an external application system and, because this action type supports custom parameters, also allows you to build a dynamic URL as part of this. You can find further detail on the supported kinds of parameters on the Microsoft Docs website; but, to summarise, developers can use a range of different data types and even feed in values from the current form or view the user is accessing. And, as mentioned, actions can be added onto default command buttons within the application, to modify their default behaviour. However, it is generally considered best practice to create a new command button and define a custom action for it instead.
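To make this more concrete, the sketch below shows what a minimal custom button definition could look like for the account entity. This is purely an illustrative RibbonDiffXml fragment - the button, command, label and web resource names are all hypothetical, the LocLabels section is omitted for brevity, and the Location value simply targets one of the default subgrid groups shown in the ribbon definition earlier. The EnableRule and DisplayRule Ids, however, are standard ones that already appear in the default account ribbon above:\n\u0026lt;RibbonDiffXml\u0026gt; \u0026lt;CustomActions\u0026gt; \u0026lt;!-- Adds a new button into an existing group on the account subgrid ribbon --\u0026gt; \u0026lt;CustomAction Id=\u0026#34;Sample.SubGrid.account.OpenPortal.CustomAction\u0026#34; Location=\u0026#34;Mscrm.SubGrid.account.MainTab.ExportData.Controls._children\u0026#34; Sequence=\u0026#34;45\u0026#34;\u0026gt; \u0026lt;CommandUIDefinition\u0026gt; \u0026lt;Button Id=\u0026#34;Sample.SubGrid.account.OpenPortal.Button\u0026#34; Command=\u0026#34;Sample.account.OpenPortal.Command\u0026#34; Sequence=\u0026#34;45\u0026#34; LabelText=\u0026#34;$LocLabels:Sample.OpenPortal.Label\u0026#34; ToolTipTitle=\u0026#34;$LocLabels:Sample.OpenPortal.Label\u0026#34; ToolTipDescription=\u0026#34;$LocLabels:Sample.OpenPortal.Description\u0026#34; TemplateAlias=\u0026#34;o2\u0026#34; /\u0026gt; \u0026lt;/CommandUIDefinition\u0026gt; \u0026lt;/CustomAction\u0026gt; \u0026lt;/CustomActions\u0026gt; \u0026lt;CommandDefinitions\u0026gt; \u0026lt;!-- The command wires the button up to its rules and its action --\u0026gt; \u0026lt;CommandDefinition Id=\u0026#34;Sample.account.OpenPortal.Command\u0026#34;\u0026gt; \u0026lt;EnableRules\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.SelectionCountExactlyOne\u0026#34; /\u0026gt; \u0026lt;/EnableRules\u0026gt; \u0026lt;DisplayRules\u0026gt; \u0026lt;DisplayRule Id=\u0026#34;Mscrm.CanWriteAccount\u0026#34; /\u0026gt; \u0026lt;/DisplayRules\u0026gt; \u0026lt;Actions\u0026gt; \u0026lt;JavaScriptFunction FunctionName=\u0026#34;Sample.Ribbon.openPortalRecord\u0026#34; Library=\u0026#34;$webresource:sample_ribbon.js\u0026#34;\u0026gt; \u0026lt;CrmParameter Value=\u0026#34;FirstPrimaryItemId\u0026#34; /\u0026gt; \u0026lt;/JavaScriptFunction\u0026gt; \u0026lt;/Actions\u0026gt; \u0026lt;/CommandDefinition\u0026gt; \u0026lt;/CommandDefinitions\u0026gt; \u0026lt;/RibbonDiffXml\u0026gt; In this sketch, the button only lights up when exactly one record is selected, only appears for users who can write to accounts, and hands the selected record\u0026rsquo;s unique identifier across to a (hypothetical) JavaScript web resource when clicked.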
Ribbon Workbench Overview Because model-driven apps store ribbon definitions as XML files, it\u0026rsquo;s difficult for developers to efficiently work with them and - most importantly - get a visual indication of how they will look post-deployment. As usual in these situations, the great Business Applications community comes to the rescue to help us along. In particular, we can thank Scott Durow for providing the fantastic Ribbon Workbench tool that does exactly what it says on the tin - it lets us make changes to multiple entity ribbons within an easy-to-use interface. And, because Microsoft mentions this tool as part of the skills measured for this exam, it provides us with an excellent excuse to discuss it in detail. 🙂\nTo download the tool, first, you need to grab a copy of the XrmToolBox, as the Ribbon Workbench is available as a plug-in within the ToolBox. You can also download it as a managed solution from Scott\u0026rsquo;s website, but I find the XrmToolBox version tends to be easier to work with. When running the tool for the first time, you will be prompted to select the solution that you wish to work with, as indicated below:\nA good tip when working with the Ribbon Workbench is to set up a slimmed-down or temporary solution within your Dynamics 365 / Common Data Service environment, containing just the entities whose ribbon you need to modify. Doing this will help to speed up the export/import of your ribbon definitions as you make changes to them.\nWith your solution chosen, you are then greeted with a screen resembling the below:\nIt\u0026rsquo;s worth quickly explaining what each of the respective areas numbered above does in detail:\nWithin the toolbox, you can quickly add numerous different button types onto the ribbon. For example, you can add a button that expands out into a selection, containing multiple sub-options.\nWithin this area, you can view all of the bespoke ribbon customisations you have performed, the underlying XML definition for your changes and any warning/error messages relating to your changes.\nEach of the ribbon components you have customised will appear and can be expanded within the list here. We can also add new Commands, Display Rules and Enable Rules from this area.\nThis section shows all the ribbon buttons that appear by default in the Home area of the application. Each button can be selected to view its properties or right-clicked instead to copy or customise it.\nThis section shows all the ribbon buttons that appear by default when viewing multiple records via a subgrid or view. Each button can be selected to view its properties or right-clicked instead to copy or customise it.\nThis section shows all the ribbon buttons that appear by default when viewing a single record at the form level. Each button can be selected to view its properties or right-clicked instead to copy or customise it.\nThe properties area displays details about the currently selected component. Clicking any existing button will populate the properties area of the window, thereby allowing you to view and amend it accordingly. You can see an example of how the properties for the Account Save button look below:\nOnce you have finished making changes, you can click the Publish button to import these into the application. Note that this can take several minutes to complete, as the workbench performs a full solution import as part of this.\nFor the components highlighted in 4, 5 and 6, also note that you have an option to toggle whether to apply your changes within the Unified Interface (UI) or the classic interface. In most cases moving forward, the Unified Interface should be the option you go for.\nHaving the Ribbon Workbench to hand is an absolute necessity, even when you only need to perform the simplest of ribbon customisations.
The tool will help to streamline your development process, and can often highlight issues that would be impossible to spot if you were modifying the raw XML definitions manually. Spending some time understanding how it works and, more crucially, how you can deploy simple ribbon customisations will be essential when tackling the MB-400 exam.\nUsage Cases for Ribbon Customisations Developers should always prepare themselves to identify situations where ribbon customisations will be required to meet a particular requirement within Dynamics 365 / Power Apps. To help you grasp this, I\u0026rsquo;ve highlighted below a couple of scenarios where this may be applicable:\nIntegrate with an external application system by allowing users to open a new tab/window from directly within a model-driven app. We could then extend this further, by also populating the URL in question with information from the record itself to, for example, open a specific record in another system (see the sketch at the end of this section). Perform multiple, complex Web API or form-level operations via a single button press. Customise the behaviour of the Qualify Lead button, by replacing it with custom logic that creates an Opportunity record only, and not a new Account and Contact record. Hide or prevent users from performing specific actions, such as running on-demand workflows, changing Business Process Flows or activating records that have already been closed. As always when building solutions on top of Dynamics 365 / Power Platform, preference should be given towards a functional or \u0026ldquo;low code\u0026rdquo; solution wherever possible. This advice is particularly relevant for ribbon customisations, given their potential to significantly modify how the system behaves by default and their propensity to \u0026ldquo;break\u0026rdquo; if customised incorrectly.
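Returning to the first scenario in the list above, the command behind such a button does not necessarily need any JavaScript at all. As a rough, hypothetical sketch - the command Id and address are placeholders, and you would wire the command up to a button via a CustomAction in the same way as the earlier example - a Url action along the following lines should do the job:\n\u0026lt;CommandDefinition Id=\u0026#34;Sample.account.OpenExternalSystem.Command\u0026#34;\u0026gt; \u0026lt;EnableRules\u0026gt; \u0026lt;!-- Only enable the button when a single record is selected --\u0026gt; \u0026lt;EnableRule Id=\u0026#34;Mscrm.SelectionCountExactlyOne\u0026#34; /\u0026gt; \u0026lt;/EnableRules\u0026gt; \u0026lt;DisplayRules /\u0026gt; \u0026lt;Actions\u0026gt; \u0026lt;!-- Opens the external system in a new tab/window --\u0026gt; \u0026lt;Url Address=\u0026#34;https://example.com/crm-record\u0026#34; PassParams=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;/Actions\u0026gt; \u0026lt;/CommandDefinition\u0026gt; With PassParams enabled, the platform should append details of the selected record - its type and unique identifier - onto the query string, which the external system can then use to deep-link to the corresponding record on its side; for anything more elaborate, you would fall back to a JavaScript function action instead.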
Demo: Using the Ribbon Workbench to Modify Command Bar Buttons To better understand how to use the Ribbon Workbench to modify the behaviour of an existing command button in the application, check out the below YouTube video, where I demonstrate how to do this from start to finish:\nAs we have seen in the past couple of posts in this series, developers can do a surprising amount of tinkering around with the user interface of not just model-driven Power Apps, but canvas apps as well. In the next post in the series, we\u0026rsquo;re going to jump into the Extend the Platform area of the exam and show you how you can develop C# plug-ins to perform simple or complex server-level actions targeting the Common Data Service.\n","date":"2020-06-14T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-working-with-command-buttons/","title":"Exam MB-400 Revision Notes: Working with Command Buttons"},{"content":"Anyone who has spent a considerable length of time working with virtual entities in Dynamics 365 Online / the Common Data Service will have their own collection of war stories to share. A great concept on paper - namely, allowing you to surface read-only data from an external system as fully-fledged entities within the Common Data Service database - they are, regrettably, a finicky feature to start getting any mileage out of. As well as requiring a compliant OData V4 endpoint that surfaces your underlying data source (other methods are supported, but can be a nightmare to code and deploy), the actual setup process for your virtual entities can lead to hours of fruitless diagnosing. Additional frustration then surfaces, as we find ourselves contending with error messages like the one below:\nThere is nothing within this message to help us determine whether this is a virtual entity configuration issue or a problem with our OData Service itself. You can discount any issues falling into the latter category by checking the following:\nThe endpoint is using v4 of the OData Standard within its implementation. Also, the endpoint should support the various system query options, such as select, filter etc. You may find that things \u0026ldquo;work\u0026rdquo; without these specified (i.e. data starts surfacing), but you will run into issues fairly quickly whenever you start filtering or returning subsections of the data needed. For each record returned from the endpoint, each row must contain a Globally Unique Identifier (GUID) field that uniquely identifies the row in the dataset. Each field must return a supported data type that the platform can translate into the appropriate Common Data Service data type. At the time of writing this post, the following data types are unsupported: Edm.Binary, Edm.Time, Edm.Float, Edm.Single, Edm.Int16, Edm.Byte and Edm.SByte. After eliminating all of these potential issues from the equation, you can then turn to the configuration settings of the virtual entity. The exact problem will typically vary per situation but, more often than not, can come down to a simple misconfiguration issue. In a recent example I dealt with, the problem turned out to be frustratingly obvious to resolve. To understand this further, we can take a look at the OData metadata properties of the collection I was attempting to surface as a virtual entity in the application:\nNote here that our Value field is a non-nullable type, thereby meaning it must always contain a value.\nNow, let\u0026rsquo;s compare this with what I had set up within the virtual entity for this corresponding field:\nAll critical aspects of this configuration - the External Name and Data Type, specifically - are correct. The issue here lies in the Field Requirement property. This setting must match the nullability of the underlying OData column; which, as we have witnessed, should always contain a value - as such, attempting to execute queries against this entity will fail with the error message above, up until you change this property to Business Required. This blunder is genuinely one of those things that you kick yourself over after the fact, as it seems so obvious when you think about it logically!\nSo, to summarise, if you are pulling your hair out over why a virtual entity is just not working, there\u0026rsquo;s an excellent chance that it\u0026rsquo;s a simple mistake within its configuration. I\u0026rsquo;d, therefore, recommend you check the above and the following details within your setup, to get things working:\nYou are using the correct endpoint destination as part of your Virtual Entity Data Source record. This value should always be the base URL, without any collection name appended to it. The External Name and External Collection Name values should match against the OData entity collection name. The External Name values for each field defined on the virtual entity must match precisely against what has been recorded within the metadata. All fields must map to the correct Dynamics 365 data type, as described in this article. The base Name and Primary Key field created with your virtual entity must map across to an appropriate Edm.String and Edm.Guid field respectively. And, as mentioned earlier, the Primary Key value must always be unique within the collection dataset.
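If you\u0026rsquo;re unsure what to compare your configuration against, the $metadata document exposed by your OData endpoint is the best source of truth. Below is a minimal, purely illustrative sketch - the entity and property names are hypothetical - of the kind of definition you would hope to see, with an Edm.Guid key to map the Primary Key field against, an Edm.String property to map the Name field against, and a non-nullable column that would need to be flagged as Business Required on the virtual entity:\n\u0026lt;edmx:Edmx xmlns:edmx=\u0026#34;http://docs.oasis-open.org/odata/ns/edmx\u0026#34; Version=\u0026#34;4.0\u0026#34;\u0026gt; \u0026lt;edmx:DataServices\u0026gt; \u0026lt;Schema xmlns=\u0026#34;http://docs.oasis-open.org/odata/ns/edm\u0026#34; Namespace=\u0026#34;Sample\u0026#34;\u0026gt; \u0026lt;EntityType Name=\u0026#34;SensorReading\u0026#34;\u0026gt; \u0026lt;Key\u0026gt; \u0026lt;PropertyRef Name=\u0026#34;Id\u0026#34; /\u0026gt; \u0026lt;/Key\u0026gt; \u0026lt;!-- Maps to the virtual entity Primary Key --\u0026gt; \u0026lt;Property Name=\u0026#34;Id\u0026#34; Type=\u0026#34;Edm.Guid\u0026#34; Nullable=\u0026#34;false\u0026#34; /\u0026gt; \u0026lt;!-- Maps to the virtual entity primary Name field --\u0026gt; \u0026lt;Property Name=\u0026#34;Name\u0026#34; Type=\u0026#34;Edm.String\u0026#34; /\u0026gt; \u0026lt;!-- Non-nullable, so the corresponding field must be Business Required --\u0026gt; \u0026lt;Property Name=\u0026#34;Value\u0026#34; Type=\u0026#34;Edm.Decimal\u0026#34; Nullable=\u0026#34;false\u0026#34; /\u0026gt; \u0026lt;/EntityType\u0026gt; \u0026lt;EntityContainer Name=\u0026#34;Container\u0026#34;\u0026gt; \u0026lt;EntitySet Name=\u0026#34;SensorReadings\u0026#34; EntityType=\u0026#34;Sample.SensorReading\u0026#34; /\u0026gt; \u0026lt;/EntityContainer\u0026gt; \u0026lt;/Schema\u0026gt; \u0026lt;/edmx:DataServices\u0026gt; \u0026lt;/edmx:Edmx\u0026gt; In this hypothetical example, the External Collection Name on the virtual entity would need to match the EntitySet name (SensorReadings), each field\u0026rsquo;s External Name would need to match the corresponding Property name, and the non-nullable Value column is exactly the sort of property that triggers the Field Requirement issue described above.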
Either one or several of these things will be the cause of your connectivity issue to your virtual entity. Unfortunately, because of how lacking the error messages are within the system, this is ultimately something that you have to spend time combing through manually to resolve. Microsoft does advise that errors should get written out to the plug-in trace log for inspection. I have so far been unable to get this to produce any meaningful output throughout my time working with virtual entities. Answers on a postcard if you\u0026rsquo;ve had any more joy in this area. All that\u0026rsquo;s left for me to say is good luck in your journey with virtual entities, and I hope that you find this post useful in diagnosing any potential issues you have with them. 🙂\n","date":"2020-06-07T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/resolving-entity-could-not-be-retrieved-from-data-source-errors-with-virtual-entities-dynamics-365-common-data-service/","title":"Resolving 'Entity could not be retrieved from data source' Errors with Virtual Entities (Dynamics 365 / Common Data Service)"},{"content":"A great benefit when working with Azure Logic Apps in conjunction with Azure Functions is that setting up secure authentication between the two services is a cakewalk. Security considerations like these can be important if, for example, you don\u0026rsquo;t want anonymous callers triggering functions that may create or retrieve business-sensitive information. Microsoft provides step-by-step instructions on how to set this all up, which is useful. What\u0026rsquo;s less clear is whether we can replicate this setup within the confines of an Azure Template file. Regular readers of the blog should be familiar with this topic, as I\u0026rsquo;ve spoken previously about how to use templates to build out an Azure API Management solution leveraging Logic Apps and discussed the process for resolving pesky errors during deployments. For those still wondering exactly what they are, in a nutshell, Azure templates allow you to define the entire structure of your Azure estate as part of a raw, JSON definition file. Why is this beneficial? For a few reasons:\nTemplate files can be stored within your source control provider of choice, providing total visibility over changes made to your template at an insanely granular level. By using dependsOn sequencing within a template file, you can define the precise deployment order of your resources, and then cross-reference information across multiple resources. For example, you can create a Logic Apps connection profile resource and then leverage it within all of your Logic Apps within the same template file. By adopting template files alongside automated deployments, you can reduce the need to carry out manual intervention or changes to critical infrastructure; all of which introduces the inherent risk of human error. And, if you are using tools such as Azure DevOps Release Pipelines, getting automated deployment pipelines set up around your templates becomes a far more straightforward exercise. So the benefits of Azure Templates should be apparent. This, as a consequence, makes it pretty desirable to use them to define Logic Apps and Functions that can securely communicate with each other.
This post will show you how we can do this, thereby allowing you to forego the need for any manual configuration of these resources post-deployment\nFirst of all, make sure that you have enabled the managed identity for the Logic App that needs to communicate with the Azure Function. The code snippet below will enable the system-assigned variant of this, which should be suitable for most scenarios:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Logic/workflows\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2017-07-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyLogicApp\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;Identity\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;SystemAssigned\u0026#34; }, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Web/sites\u0026#39;, \u0026#39;MyFunctionApp\u0026#39;)]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.Web/sites/functions\u0026#39;, \u0026#39;MyFunctionApp\u0026#39;, \u0026#39;MyFunction\u0026#39;)]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;state\u0026#34;: \u0026#34;Enabled\u0026#34;, \u0026#34;definition\u0026#34;: { \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#\u0026#34;, \u0026#34;actions\u0026#34;: {}, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;outputs\u0026#34;: {}, \u0026#34;Parameters\u0026#34;: {}, \u0026#34;triggers\u0026#34;: {} }, \u0026#34;Parameters\u0026#34;: {} }, \u0026#34;resources\u0026#34;: [] } Within your Logic App definition itself, you should also have the appropriate action step that triggers your Azure Function. The step should resemble the below, tweaked accordingly to suit your scenario; however, the values in the authentication node are the crucial bits to getting this all working and should remain unchanged:\n{ \u0026#34;My_Function_App_Action\u0026#34;: { \u0026#34;inputs\u0026#34;: { \u0026#34;authentication\u0026#34;: { \u0026#34;audience\u0026#34;: \u0026#34;https://management.azure.com\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;ManagedServiceIdentity\u0026#34; }, \u0026#34;body\u0026#34;: \u0026#34;Sample Body Text\u0026#34;, \u0026#34;function\u0026#34;: { \u0026#34;id\u0026#34;: \u0026#34;[concat(resourceid(\u0026#39;Microsoft.Web/sites\u0026#39;, \u0026#39;MyFunctionApp\u0026#39;, \u0026#39;/functions/MyFunction\u0026#39;)]\u0026#34; }, \u0026#34;queries\u0026#34;: { \u0026#34;myquery\u0026#34;: \u0026#34;My Query Value\u0026#34; } }, \u0026#34;runAfter\u0026#34;: {}, \u0026#34;type\u0026#34;: \u0026#34;Function\u0026#34; } } That\u0026rsquo;s everything you need for the Logic App. Next, we move across to the Function App itself. There\u0026rsquo;s nothing particularly noteworthy about the Microsoft.Web/sites template definition itself, which should resemble the examples Microsoft provide us. It\u0026rsquo;s when we get into the sub-resources that sit within this that things start to get interesting\u0026hellip;\nAttempting to create the Logic App specified earlier without the selected Azure Function existing already will cause a deployment error. 
As a consequence, we must also create this alongside our Logic App and Function during the same deployment, using the code snippet highlighted below.\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/sites/functions\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2018-11-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyFunctionApp/MyFunction\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Web/sites\u0026#39;, \u0026#39;MyFunctionApp\u0026#39;)]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;MyFunction\u0026#34;, \u0026#34;config\u0026#34;: { \u0026#34;bindings\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;httpTrigger\u0026#34;, \u0026#34;methods\u0026#34;: [ \u0026#34;post\u0026#34; ], \u0026#34;authLevel\u0026#34;: \u0026#34;anonymous\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;req\u0026#34; } ] } } } What this will effectively do is create a blank function in your app that contains not a single line of code whatsoever. What you would then have to do - either from Visual Studio or as part of an Azure DevOps Release pipeline - is deploy the function code out to the app as a separate step. Although I believe it\u0026rsquo;s possible to include the actual code for your app within an Azure Template, I can\u0026rsquo;t seem to find a straightforward way to do this for a C# Function App. Answers on a postcard if you\u0026rsquo;ve figured out an easy way of doing this. 🙂 It\u0026rsquo;s also worth noting that the authLevel value of anonymous is a strict requirement for this solution to work. You should, therefore, take steps to ensure that your Function App is programmed to use this setting. Otherwise, you accept the risk of having this accidentally overwritten when you deploy it out.\nIn its current state so far, the Function is deployed and is accessible. However, because of the authLevel setting, any Tom, Dick or Harry the world over has free rein to access your Function App. We must, therefore, enable a specific Authentication / Authorization configuration for the Function App, to lock things down so that our Logic App is the only object that can interact with our app.
We define this profile as part of a Microsoft.Web/sites/config resource within the Azure Template file, meaning that we can build out an additional sub-resource that sets everything up for us:\n{ \u0026#34;name\u0026#34;: \u0026#34;MyFunctionApp/authsettings\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/sites/config\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2018-11-01\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Web/sites\u0026#39;, \u0026#39;MyFunctionApp\u0026#39;)]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.Logic/workflows\u0026#39;, \u0026#39;MyLogicApp\u0026#39;)]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;enabled\u0026#34;: true, \u0026#34;unauthenticatedClientAction\u0026#34;: \u0026#34;RedirectToLoginPage\u0026#34;, \u0026#34;tokenStoreEnabled\u0026#34;: true, \u0026#34;defaultProvider\u0026#34;: \u0026#34;AzureActiveDirectory\u0026#34;, \u0026#34;clientId\u0026#34;: \u0026#34;[reference(resourceId(\u0026#39;Microsoft.Logic/workflows\u0026#39;, \u0026#39;MyLogicApp\u0026#39;), \u0026#39;2017-07-01\u0026#39;, \u0026#39;full\u0026#39;).identity.principalId]\u0026#34;, \u0026#34;issuer\u0026#34;: \u0026#34;https://sts.windows.net/e275677a-5d5b-4057-8ab5-0e7bc594bc42\u0026#34;, \u0026#34;allowedAudiences\u0026#34;: [ \u0026#34;https://management.azure.com\u0026#34; ], \u0026#34;isAadAutoProvisioned\u0026#34;: false } } The only setting you will need to modify here is the issuer value, which needs to have the ID value of the current Azure Active Directory tenant you are deploying the template to. And, before you get too excited, thanks to some of the wonderful tools available online, the value supplied above is completely random. We must also tip our hat here to the excellent reference template function, which allows us to grab the Managed Identity ID value from a Logic App created as part of the same template deployment - nice!\nAnd, with that added on, we\u0026rsquo;ve got everything needed to secure the Function App correctly. 
For completeness, here is the entire snippet for the Function App and its related resources:\n{ \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-08-01\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/sites\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyFunctionApp\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;kind\u0026#34;: \u0026#34;functionapp\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Storage/storageAccounts\u0026#39;, \u0026#39;MyFunctionAppSA\u0026#39;)]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;siteConfig\u0026#34;: { \u0026#34;appSettings\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;AzureWebJobsStorage\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;[concat(\u0026#39;DefaultEndpointsProtocol=https;AccountName=\u0026#39;, \u0026#39;MyFunctionAppSA\u0026#39;, \u0026#39;;EndpointSuffix=\u0026#39;, environment().suffixes.storage, \u0026#39;;AccountKey=\u0026#39;,listKeys(resourceId(\u0026#39;Microsoft.Storage/storageAccounts\u0026#39;, \u0026#39;MyFunctionAppSA\u0026#39;), \u0026#39;2019-06-01\u0026#39;).keys[0].value)]\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;FUNCTIONS_EXTENSION_VERSION\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;~2\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;WEBSITE_NODE_DEFAULT_VERSION\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;~10\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;FUNCTIONS_WORKER_RUNTIME\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;dotnet\u0026#34; } ] } }, \u0026#34;resources\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/sites/functions\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2018-11-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyFunctionApp/MyFunction\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Web/sites\u0026#39;, \u0026#39;MyFunctionApp\u0026#39;)]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;MyFunction\u0026#34;, \u0026#34;config\u0026#34;: { \u0026#34;bindings\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;httpTrigger\u0026#34;, \u0026#34;methods\u0026#34;: [ \u0026#34;post\u0026#34; ], \u0026#34;authLevel\u0026#34;: \u0026#34;anonymous\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;req\u0026#34; } ] } } }, { \u0026#34;name\u0026#34;: \u0026#34;MyFunctionApp/authsettings\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/sites/config\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2018-11-01\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Web/sites\u0026#39;, \u0026#39;MyFunctionApp\u0026#39;)]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.Logic/workflows\u0026#39;, \u0026#39;MyLogicApp\u0026#39;)]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;enabled\u0026#34;: true, \u0026#34;unauthenticatedClientAction\u0026#34;: \u0026#34;RedirectToLoginPage\u0026#34;, \u0026#34;tokenStoreEnabled\u0026#34;: true, \u0026#34;defaultProvider\u0026#34;: \u0026#34;AzureActiveDirectory\u0026#34;, \u0026#34;clientId\u0026#34;: \u0026#34;[reference(resourceId(\u0026#39;Microsoft.Logic/workflows\u0026#39;, \u0026#39;MyLogicApp\u0026#39;), \u0026#39;2017-07-01\u0026#39;, \u0026#39;full\u0026#39;).identity.principalId]\u0026#34;, \u0026#34;issuer\u0026#34;: \u0026#34;https://sts.windows.net/e275677a-5d5b-4057-8ab5-0e7bc594bc42\u0026#34;, \u0026#34;allowedAudiences\u0026#34;: [ 
\u0026#34;https://management.azure.com\u0026#34; ], \u0026#34;isAadAutoProvisioned\u0026#34;: false } }, { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Storage/storageAccounts\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyFunctionAppSA\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-04-01\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;kind\u0026#34;: \u0026#34;StorageV2\u0026#34;, \u0026#34;sku\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Standard_LRS\u0026#34; } } ] } Now, whenever we attempt to make an anonymous request to the endpoint, we should get an error message similar to the below:\nWhereas requests made from our Logic App will complete successfully:\nIt\u0026rsquo;s good to know that we can straightforwardly intuit the steps to set all this up, thanks to both the documentation available for the manual steps and our ability to fully interrogate the inner workings of Azure Resources, using tools such as PowerShell or the Resource Explorer. A potential limitation of this whole solution though is that it limits your Function App to communicate to a single Logic App only. As far as I know, there is no way to specify additional Managed Identities as part of the Authentication / Authorization configuration. You could get around this issue by having different Logic Apps calling a central Logic App, with this being the one that\u0026rsquo;s hooked up to your Azure Function. However, I\u0026rsquo;d hope that the solution outlined in this post is sufficient for most scenarios and provides the necessary assurance that we can secure our cloud resources appropriately, using the excellent capabilities included within Azure Active Directory.\n","date":"2020-05-31T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/securing-access-to-azure-functions-from-logic-apps-via-azure-templates/","title":"Securing Access to Azure Functions from Logic Apps via Azure Templates"},{"content":"One of the exciting announcements that came out earlier this week, in tandem with Microsoft Build 2020, was the introduction of a new Power Platform exam and corresponding certification, which will be released later on this year. Exam PL-100 will target those building applications on top of Power Apps and, for those fortunate enough to get a passing grade in this exam, you will automatically receive the Power Platform App Maker Associate certification. Given the breadth of the platform as a whole, of which Dynamics 365 forms a separate, but arguably integral part, this move makes logical sense. Canvas Apps, in particular, only feature a tiny amount within some of the exams available today. This state of affairs means it\u0026rsquo;s incredibly difficult to both assess candidates detailed knowledge in Power Apps and demonstrate that individuals have sufficient ability in this area to build out practical solutions. PL-100 will, I hope, address both of these concerns in the most effective way. Also, and it has to be said, any opportunity to try and earn a new shiny certification badge is one that I, and I suspect others, will wholly embrace. 🙂\nIn terms of what to expect from the exam itself, we can get an initial glimpse by reviewing the skills measured document, which breaks down what candidates need to know to achieve a passing grade. 
Having read through this in detail, I thought I\u0026rsquo;d highlight some of the areas of interest arising from it and indicate specific areas of attention if you are considering sitting this exam in future:\nAccessibility and localisation are front and centre in this exam and are also something you should consider anyway when it comes to designing your apps. Candidates should, therefore, familiarise themselves with the tools within canvas Power Apps to help identify and fix accessibility issues and also how you can localise model-driven apps on a per-language basis. You can achieve this by either using RESX files, installing new languages onto the tenant or by exporting/importing entity/field text translations. Solutions sit within the area of the exam with the most weighting, with candidates expected to know how to create one, alongside its corresponding solution publisher. It\u0026rsquo;s interesting to note here though that the specification does not mention anything about understanding the differences between unmanaged / managed solutions or how you can use the Solution Checker to identify issues with your developed components. This omission would perhaps indicate that these are not areas candidates should demonstrate knowledge in, but I would caution against ignoring them entirely as part of your revision. Understanding what I would term basic entity customisation will be a necessary pre-requisite before sitting this exam. Specifically, you will need to know how to create entities, fields and relationships, and how you can load data into your new entities - via the more traditional tools available within Dynamics 365 or through more modern mechanisms, such as data flows. Microsoft dedicates a whole section of the exam towards Dynamics 365 / Common Data Service processes - including Business Rules, Business Process Flows and classic workflows. As integral components within any model-driven app, it is unsurprising that these are an area chosen for assessment, and they should not be ones you take for granted during your revision. Power Automate flows also have a whole section dedicated to them, covering pretty much the entire development lifecycle of a flow. Do not neglect any of these areas and take time to understand the best usage scenario for each feature, based on their capabilities. For example, classic workflows can be executed in real-time, whereas flows cannot. Therefore, if the requirement is to apply any custom logic as soon as a user saves a Dynamics 365 / Common Data Service record, classic workflows will be the most logical choice. From the looks of it, the exam also assesses a relatively new preview feature for canvas Power Apps, known as components. If you haven\u0026rsquo;t experimented around with these yet, then I would urge you to take a look, as they can significantly reduce the amount of time involved when building out multiple apps, by grouping together re-usable collections of various controls. Although it forms a minimal part of the overall exam (between 5% and 10%), be sure to understand how SSRS reports function within Dynamics 365 / the Common Data Service and the potential usage cases for these compared with Power BI reports. An excellent example of this is that SSRS reports let you read data directly from the Common Data Service database in real-time and allow you to export out information into a variety of different formats.
Although it again forms a relatively small part of the exam, I would urge you to set up a trial of the AI Builder capabilities within Power Apps and experiment around with their features within a canvas app. The object detector component is one that is both straightforward and quite fun to test further with, producing some intriguing results in the process. Continuing the general cross-over theme alluded to already, the exam will also assess candidates on many of the security components within Dynamics 365 / the Common Data Service - including security roles and field-level security. Interspersed with this, you should also understand how to go about sharing a canvas or model-driven app, as there are a few gotchas here that may surprise you. Understanding the behaviour, and implications, of implicit connections is also a key area I would draw your attention towards; not just for this exam, but for when you are implementing these solutions in the wild. For example, if your app is using a SQL Server connection, users leveraging this app will use the same connection details you have defined when building the app out. Version control of canvas apps is an area that traditional Dynamics 365 professionals should draw their attention towards, and is one that is strictly related to canvas Power Apps only. Spend some time understanding how versioning works, how apps are published out and also how you can restore your Power App to a previous version using the portal. As you can see, there is a lot to take into account with this new exam, and I suspect it will present a challenge for candidates when it is released. Hopefully, this post has given you a good insight into what to expect and hasn\u0026rsquo;t put you off in the process. I\u0026rsquo;m looking forward to sitting this exam as soon as it becomes available. Let me know your thoughts on the exam in the comments below and whether you plan to take it as well once released!\n","date":"2020-05-24T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/reviewing-the-new-power-platform-app-maker-associate-exam-certification/","title":"Reviewing the New Power Platform App Maker Associate Exam \u0026 Certification"},{"content":"The great thing when you are working with Azure Logic Apps is that, if you have previously spent any amount of time understanding Power Automate flows, the knowledge gained in this area is instantly transferable. That\u0026rsquo;s mainly because Power Automate is utilising Logic Apps under the hood. In practice, therefore, you can anticipate that a lot of the same trigger actions, connectors and general functionality will be identical. So why would you consider using Logic Apps at all in the first place? There are a few reasons why:\nLogic Apps are a consumption-based service, meaning you are charged based on the number of monthly executions. Depending on the volumes involved, this can come in significantly cheaper when compared to buying a Power Automate license. Logic Apps support the ability to execute within a dedicated location as part of integration service environments (ISEs). It is far easier to incorporate Logic Apps development as part of your Application Lifecycle Management (ALM) processes, with the ability to do one-click extracts of deployment templates. What\u0026rsquo;s more, you have a high degree of control over the parameterisation, deployment location and general functionality of Logic Apps when compared with Power Automate flows.
So in most senses, Logic Apps functionality can be argued as being on a par with that of Power Automate flows, often exceeding them in some areas and being far better suited for large-scale, pro-developer integrations between systems.\nNow, I use the word argued, as I cannot reliably say that the functionality on offer is complete like-for-like, especially when it comes to working with Dynamics 365 / the Common Data Service. Now it is true that we have a Common Data Service connector within Logic Apps that has a list of actions equivalent to what is available within Power Automate:\nHowever, there are two issues I can specifically highlight here:\nPower Automate introduced the Common Data Service (current environment) connector a while back, which does exactly what it says on the tin. Also, this connector offers several additional features/benefits, such as the ability to execute change set requests or upload files/images to an entity record. This connector is not exposed at all within Logic Apps, even if you are in the same tenant where your Common Data Service environments reside. Logic App users, therefore, lose some degree of functionality here compared to Power Automate flows. When using the List records action, you have access to all of the common OData filter queries, such as filter, top and orderby, thereby allowing you to return a specific subset of the data you require. However, you have no option here to indicate the precise fields you wish to return - via a select option or similar. This limitation can lead to your requests into the Common Data Service processing far more data than necessary and can lengthen the execution time of your Logic Apps. You can get around the second of these issues straightforwardly enough by using a Parse JSON action to consume and then remove any fields you don\u0026rsquo;t wish to process any further as part of your Logic App. But what if the data you wish to process from the Common Data Service is variable? For example, let\u0026rsquo;s assume that the entity and list of fields to be processed differs each time the Logic App runs - dictated by either the creation of a database record or by the details of a request passed to an HTTP endpoint set up on the Logic App. In this instance, it is not possible to use the Parse JSON action to parameterise this information satisfactorily. Indeed, to the best of my knowledge, it is not possible to utilise any of the collection/data operation functions available via the Workflow Definition Language to perform this action instead. Fortunately, where there is a will - and a bit of knowledge of C# - there is a way ;)\nLogic Apps have had the longstanding capability to integrate alongside Azure Functions, thereby allowing developers to off-load any complex processing to a programming interface that can do almost anything you can imagine. So, in this case, Azure Functions becomes our rescuer and allows us to build out a basic Function App to support us. We can design the function app to accept two core bits of information:\nA JSON array containing the response from Common Data Service. The JSON\u0026rsquo;s basic structure should look somewhat similar to the example below, representing two Lead records, for the function to parse it correctly. 
As you can see, this is a LOT of data that Logic Apps returns by default, for a measly 2 Lead records: [ { \u0026#34;@odata.id\u0026#34;: \u0026#34;https://mycrminstance.crm11.dynamics.com/api/data/v9.0/leads(cd4299fe-a080-ea11-a811-002248012503)\u0026#34;, \u0026#34;@odata.etag\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;ItemInternalId\u0026#34;: \u0026#34;cd4299fe-a080-ea11-a811-002248012503\u0026#34;, \u0026#34;prioritycode\u0026#34;: 1, \u0026#34;_prioritycode_label\u0026#34;: \u0026#34;Default Value\u0026#34;, \u0026#34;address2_addresstypecode\u0026#34;: 1, \u0026#34;_address2_addresstypecode_label\u0026#34;: \u0026#34;Default Value\u0026#34;, \u0026#34;merged\u0026#34;: false, \u0026#34;emailaddress1\u0026#34;: \u0026#34;someonel1@example.com\u0026#34;, \u0026#34;confirminterest\u0026#34;: false, \u0026#34;numberofemployees\u0026#34;: 200, \u0026#34;decisionmaker\u0026#34;: false, \u0026#34;msdyn_ordertype\u0026#34;: 192350000, \u0026#34;_msdyn_ordertype_label\u0026#34;: \u0026#34;Item based\u0026#34;, \u0026#34;modifiedon\u0026#34;: \u0026#34;2020-04-17T11:46:57Z\u0026#34;, \u0026#34;importsequencenumber\u0026#34;: 1, \u0026#34;address1_composite\u0026#34;: \u0026#34;789 Jones Blvd\\r\\nLa Vergne, TN 57332\\r\\nU.S.\u0026#34;, \u0026#34;lastname\u0026#34;: \u0026#34;McKay (sample)\u0026#34;, \u0026#34;donotpostalmail\u0026#34;: false, \u0026#34;revenue_base\u0026#34;: 100000.0, \u0026#34;preferredcontactmethodcode\u0026#34;: 1, \u0026#34;_preferredcontactmethodcode_label\u0026#34;: \u0026#34;Any\u0026#34;, \u0026#34;_ownerid_value\u0026#34;: \u0026#34;9ec35ef0-9ec9-4e2e-b4af-e16bf5ba3b9b\u0026#34;, \u0026#34;_ownerid_type\u0026#34;: \u0026#34;systemusers\u0026#34;, \u0026#34;_campaignid_value\u0026#34;: \u0026#34;e14099fe-a080-ea11-a811-002248012503\u0026#34;, \u0026#34;_campaignid_type\u0026#34;: \u0026#34;campaigns\u0026#34;, \u0026#34;firstname\u0026#34;: \u0026#34;Yvonne\u0026#34;, \u0026#34;evaluatefit\u0026#34;: false, \u0026#34;yomifullname\u0026#34;: \u0026#34;Yvonne McKay (sample)\u0026#34;, \u0026#34;donotemail\u0026#34;: false, \u0026#34;fullname\u0026#34;: \u0026#34;Yvonne McKay (sample)\u0026#34;, \u0026#34;msdyn_gdproptout\u0026#34;: false, \u0026#34;statuscode\u0026#34;: 1, \u0026#34;_statuscode_label\u0026#34;: \u0026#34;New\u0026#34;, \u0026#34;createdon\u0026#34;: \u0026#34;2020-04-17T11:46:57Z\u0026#34;, \u0026#34;address1_stateorprovince\u0026#34;: \u0026#34;TN\u0026#34;, \u0026#34;companyname\u0026#34;: \u0026#34;Fourth Coffee (sample)\u0026#34;, \u0026#34;donotfax\u0026#34;: false, \u0026#34;leadsourcecode\u0026#34;: 1, \u0026#34;_leadsourcecode_label\u0026#34;: \u0026#34;Advertisement\u0026#34;, \u0026#34;jobtitle\u0026#34;: \u0026#34;Purchasing Manager\u0026#34;, \u0026#34;address1_country\u0026#34;: \u0026#34;U.S.\u0026#34;, \u0026#34;versionnumber\u0026#34;: 11127594, \u0026#34;address1_line1\u0026#34;: \u0026#34;789 Jones Blvd\u0026#34;, \u0026#34;telephone1\u0026#34;: \u0026#34;555-0146\u0026#34;, \u0026#34;donotsendmm\u0026#34;: false, \u0026#34;leadqualitycode\u0026#34;: 2, \u0026#34;_leadqualitycode_label\u0026#34;: \u0026#34;Warm\u0026#34;, \u0026#34;donotphone\u0026#34;: false, \u0026#34;_transactioncurrencyid_value\u0026#34;: \u0026#34;d85b13bb-081f-ea11-a812-00224801bc51\u0026#34;, \u0026#34;_transactioncurrencyid_type\u0026#34;: \u0026#34;transactioncurrencies\u0026#34;, \u0026#34;subject\u0026#34;: \u0026#34;New store opened this year - follow up (sample)\u0026#34;, \u0026#34;address1_addresstypecode\u0026#34;: 1, 
\u0026#34;_address1_addresstypecode_label\u0026#34;: \u0026#34;Default Value\u0026#34;, \u0026#34;donotbulkemail\u0026#34;: false, \u0026#34;exchangerate\u0026#34;: 1.0, \u0026#34;_modifiedby_value\u0026#34;: \u0026#34;9ec35ef0-9ec9-4e2e-b4af-e16bf5ba3b9b\u0026#34;, \u0026#34;_modifiedby_type\u0026#34;: \u0026#34;systemusers\u0026#34;, \u0026#34;followemail\u0026#34;: true, \u0026#34;leadid\u0026#34;: \u0026#34;cd4299fe-a080-ea11-a811-002248012503\u0026#34;, \u0026#34;_createdby_value\u0026#34;: \u0026#34;9ec35ef0-9ec9-4e2e-b4af-e16bf5ba3b9b\u0026#34;, \u0026#34;_createdby_type\u0026#34;: \u0026#34;systemusers\u0026#34;, \u0026#34;websiteurl\u0026#34;: \u0026#34;http://www.fourthcoffee.com/\u0026#34;, \u0026#34;address1_city\u0026#34;: \u0026#34;La Vergne\u0026#34;, \u0026#34;salesstagecode\u0026#34;: 1, \u0026#34;_salesstagecode_label\u0026#34;: \u0026#34;Default Value\u0026#34;, \u0026#34;revenue\u0026#34;: 100000.0, \u0026#34;participatesinworkflow\u0026#34;: false, \u0026#34;statecode\u0026#34;: 0, \u0026#34;_statecode_label\u0026#34;: \u0026#34;Open\u0026#34;, \u0026#34;_owningbusinessunit_value\u0026#34;: \u0026#34;9941c9c6-f71e-ea11-a812-00224801bc51\u0026#34;, \u0026#34;_owningbusinessunit_type\u0026#34;: \u0026#34;businessunits\u0026#34;, \u0026#34;address1_postalcode\u0026#34;: \u0026#34;57332\u0026#34;, \u0026#34;budgetamount_base\u0026#34;: null, \u0026#34;salutation\u0026#34;: null, \u0026#34;address1_latitude\u0026#34;: null, \u0026#34;address1_fax\u0026#34;: null, \u0026#34;sic\u0026#34;: null, \u0026#34;yomilastname\u0026#34;: null, \u0026#34;address1_longitude\u0026#34;: null, \u0026#34;telephone2\u0026#34;: null, \u0026#34;timespentbymeonemailandmeetings\u0026#34;: null, \u0026#34;address1_upszone\u0026#34;: null, \u0026#34;schedulefollowup_qualify\u0026#34;: null, \u0026#34;_slainvokedid_value\u0026#34;: null, \u0026#34;schedulefollowup_prospect\u0026#34;: null, \u0026#34;purchasetimeframe\u0026#34;: null, \u0026#34;_purchasetimeframe_label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;_owningteam_value\u0026#34;: null, \u0026#34;industrycode\u0026#34;: null, \u0026#34;_industrycode_label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;budgetstatus\u0026#34;: null, \u0026#34;_budgetstatus_label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;stageid\u0026#34;: null, \u0026#34;_accountid_value\u0026#34;: null, \u0026#34;lastonholdtime\u0026#34;: null, \u0026#34;address2_country\u0026#34;: null, \u0026#34;address1_utcoffset\u0026#34;: null, \u0026#34;onholdtime\u0026#34;: null, \u0026#34;address1_line2\u0026#34;: null, \u0026#34;_createdonbehalfby_value\u0026#34;: null, \u0026#34;telephone3\u0026#34;: null, \u0026#34;fax\u0026#34;: null, \u0026#34;emailaddress2\u0026#34;: null, \u0026#34;_parentcontactid_value\u0026#34;: null, \u0026#34;businesscard\u0026#34;: null, \u0026#34;estimatedamount\u0026#34;: null, \u0026#34;address2_telephone1\u0026#34;: null, \u0026#34;description\u0026#34;: null, \u0026#34;overriddencreatedon\u0026#34;: null, \u0026#34;address2_county\u0026#34;: null, \u0026#34;address2_stateorprovince\u0026#34;: null, \u0026#34;_masterid_value\u0026#34;: null, \u0026#34;_qualifyingopportunityid_value\u0026#34;: null, \u0026#34;address2_city\u0026#34;: null, \u0026#34;lastusedincampaign\u0026#34;: null, \u0026#34;address2_composite\u0026#34;: null, \u0026#34;_originatingcaseid_value\u0026#34;: null, \u0026#34;utcconversiontimezonecode\u0026#34;: null, \u0026#34;purchaseprocess\u0026#34;: null, \u0026#34;_purchaseprocess_label\u0026#34;: \u0026#34;\u0026#34;, 
\u0026#34;address2_line3\u0026#34;: null, \u0026#34;emailaddress3\u0026#34;: null, \u0026#34;salesstage\u0026#34;: null, \u0026#34;_salesstage_label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;_relatedobjectid_value\u0026#34;: null, \u0026#34;processid\u0026#34;: null, \u0026#34;pager\u0026#34;: null, \u0026#34;address2_name\u0026#34;: null, \u0026#34;address2_upszone\u0026#34;: null, \u0026#34;_modifiedonbehalfby_value\u0026#34;: null, \u0026#34;mobilephone\u0026#34;: null, \u0026#34;initialcommunication\u0026#34;: null, \u0026#34;_initialcommunication_label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;address2_latitude\u0026#34;: null, \u0026#34;qualificationcomments\u0026#34;: null, \u0026#34;address1_name\u0026#34;: null, \u0026#34;middlename\u0026#34;: null, \u0026#34;address1_telephone2\u0026#34;: null, \u0026#34;address2_utcoffset\u0026#34;: null, \u0026#34;entityimage\u0026#34;: null, \u0026#34;_parentaccountid_value\u0026#34;: null, \u0026#34;address1_telephone1\u0026#34;: null, \u0026#34;address1_county\u0026#34;: null, \u0026#34;address1_line3\u0026#34;: null, \u0026#34;yomicompanyname\u0026#34;: null, \u0026#34;need\u0026#34;: null, \u0026#34;_need_label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;yomimiddlename\u0026#34;: null, \u0026#34;entityimage_url\u0026#34;: null, \u0026#34;entityimageid\u0026#34;: null, \u0026#34;address2_telephone3\u0026#34;: null, \u0026#34;_slaid_value\u0026#34;: null, \u0026#34;address2_line1\u0026#34;: null, \u0026#34;address2_postofficebox\u0026#34;: null, \u0026#34;address2_longitude\u0026#34;: null, \u0026#34;address2_telephone2\u0026#34;: null, \u0026#34;address2_fax\u0026#34;: null, \u0026#34;traversedpath\u0026#34;: null, \u0026#34;address2_line2\u0026#34;: null, \u0026#34;businesscardattributes\u0026#34;: null, \u0026#34;address1_postofficebox\u0026#34;: null, \u0026#34;entityimage_timestamp\u0026#34;: null, \u0026#34;_customerid_value\u0026#34;: null, \u0026#34;teamsfollowed\u0026#34;: null, \u0026#34;yomifirstname\u0026#34;: null, \u0026#34;timezoneruleversionnumber\u0026#34;: null, \u0026#34;_contactid_value\u0026#34;: null, \u0026#34;address1_telephone3\u0026#34;: null, \u0026#34;estimatedvalue\u0026#34;: null, \u0026#34;estimatedclosedate\u0026#34;: null, \u0026#34;address2_postalcode\u0026#34;: null, \u0026#34;budgetamount\u0026#34;: null, \u0026#34;estimatedamount_base\u0026#34;: null }, { \u0026#34;@odata.id\u0026#34;: \u0026#34;https://mycrminstance.crm11.dynamics.com/api/data/v9.0/leads(cf4299fe-a080-ea11-a811-002248012503)\u0026#34;, \u0026#34;@odata.etag\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;ItemInternalId\u0026#34;: \u0026#34;cf4299fe-a080-ea11-a811-002248012503\u0026#34;, \u0026#34;prioritycode\u0026#34;: 1, \u0026#34;_prioritycode_label\u0026#34;: \u0026#34;Default Value\u0026#34;, \u0026#34;address2_addresstypecode\u0026#34;: 1, \u0026#34;_address2_addresstypecode_label\u0026#34;: \u0026#34;Default Value\u0026#34;, \u0026#34;merged\u0026#34;: false, \u0026#34;emailaddress1\u0026#34;: \u0026#34;someonel2@example.com\u0026#34;, \u0026#34;confirminterest\u0026#34;: false, \u0026#34;numberofemployees\u0026#34;: 2000, \u0026#34;decisionmaker\u0026#34;: false, \u0026#34;msdyn_ordertype\u0026#34;: 192350000, \u0026#34;_msdyn_ordertype_label\u0026#34;: \u0026#34;Item based\u0026#34;, \u0026#34;modifiedon\u0026#34;: \u0026#34;2020-04-17T11:46:58Z\u0026#34;, \u0026#34;importsequencenumber\u0026#34;: 1, \u0026#34;address1_composite\u0026#34;: \u0026#34;797 Roosevelt Ave NE\\r\\nSaint Louis, MO 83385\\r\\nU.S.\u0026#34;, 
\u0026#34;lastname\u0026#34;: \u0026#34;Stubberod (sample)\u0026#34;, \u0026#34;donotpostalmail\u0026#34;: false, \u0026#34;revenue_base\u0026#34;: 150000.0, \u0026#34;preferredcontactmethodcode\u0026#34;: 1, \u0026#34;_preferredcontactmethodcode_label\u0026#34;: \u0026#34;Any\u0026#34;, \u0026#34;_ownerid_value\u0026#34;: \u0026#34;9ec35ef0-9ec9-4e2e-b4af-e16bf5ba3b9b\u0026#34;, \u0026#34;_ownerid_type\u0026#34;: \u0026#34;systemusers\u0026#34;, \u0026#34;_campaignid_value\u0026#34;: \u0026#34;e14099fe-a080-ea11-a811-002248012503\u0026#34;, \u0026#34;_campaignid_type\u0026#34;: \u0026#34;campaigns\u0026#34;, \u0026#34;firstname\u0026#34;: \u0026#34;Susanna\u0026#34;, \u0026#34;evaluatefit\u0026#34;: false, \u0026#34;yomifullname\u0026#34;: \u0026#34;Susanna Stubberod (sample)\u0026#34;, \u0026#34;donotemail\u0026#34;: false, \u0026#34;fullname\u0026#34;: \u0026#34;Susanna Stubberod (sample)\u0026#34;, \u0026#34;msdyn_gdproptout\u0026#34;: false, \u0026#34;statuscode\u0026#34;: 1, \u0026#34;_statuscode_label\u0026#34;: \u0026#34;New\u0026#34;, \u0026#34;createdon\u0026#34;: \u0026#34;2020-04-17T11:46:58Z\u0026#34;, \u0026#34;address1_stateorprovince\u0026#34;: \u0026#34;MO\u0026#34;, \u0026#34;companyname\u0026#34;: \u0026#34;Litware, Inc. (sample)\u0026#34;, \u0026#34;donotfax\u0026#34;: false, \u0026#34;leadsourcecode\u0026#34;: 7, \u0026#34;_leadsourcecode_label\u0026#34;: \u0026#34;Trade Show\u0026#34;, \u0026#34;jobtitle\u0026#34;: \u0026#34;Purchasing Manager\u0026#34;, \u0026#34;address1_country\u0026#34;: \u0026#34;U.S.\u0026#34;, \u0026#34;versionnumber\u0026#34;: 11127601, \u0026#34;address1_line1\u0026#34;: \u0026#34;797 Roosevelt Ave NE\u0026#34;, \u0026#34;telephone1\u0026#34;: \u0026#34;555-0127\u0026#34;, \u0026#34;donotsendmm\u0026#34;: false, \u0026#34;leadqualitycode\u0026#34;: 1, \u0026#34;_leadqualitycode_label\u0026#34;: \u0026#34;Hot\u0026#34;, \u0026#34;donotphone\u0026#34;: false, \u0026#34;_transactioncurrencyid_value\u0026#34;: \u0026#34;d85b13bb-081f-ea11-a812-00224801bc51\u0026#34;, \u0026#34;_transactioncurrencyid_type\u0026#34;: \u0026#34;transactioncurrencies\u0026#34;, \u0026#34;subject\u0026#34;: \u0026#34;Mailed an interest card back (sample)\u0026#34;, \u0026#34;address1_addresstypecode\u0026#34;: 1, \u0026#34;_address1_addresstypecode_label\u0026#34;: \u0026#34;Default Value\u0026#34;, \u0026#34;donotbulkemail\u0026#34;: false, \u0026#34;exchangerate\u0026#34;: 1.0, \u0026#34;_modifiedby_value\u0026#34;: \u0026#34;9ec35ef0-9ec9-4e2e-b4af-e16bf5ba3b9b\u0026#34;, \u0026#34;_modifiedby_type\u0026#34;: \u0026#34;systemusers\u0026#34;, \u0026#34;followemail\u0026#34;: true, \u0026#34;leadid\u0026#34;: \u0026#34;cf4299fe-a080-ea11-a811-002248012503\u0026#34;, \u0026#34;_createdby_value\u0026#34;: \u0026#34;9ec35ef0-9ec9-4e2e-b4af-e16bf5ba3b9b\u0026#34;, \u0026#34;_createdby_type\u0026#34;: \u0026#34;systemusers\u0026#34;, \u0026#34;websiteurl\u0026#34;: \u0026#34;http://www.litwareinc.com/\u0026#34;, \u0026#34;address1_city\u0026#34;: \u0026#34;Saint Louis\u0026#34;, \u0026#34;salesstagecode\u0026#34;: 1, \u0026#34;_salesstagecode_label\u0026#34;: \u0026#34;Default Value\u0026#34;, \u0026#34;revenue\u0026#34;: 150000.0, \u0026#34;participatesinworkflow\u0026#34;: false, \u0026#34;statecode\u0026#34;: 0, \u0026#34;_statecode_label\u0026#34;: \u0026#34;Open\u0026#34;, \u0026#34;_owningbusinessunit_value\u0026#34;: \u0026#34;9941c9c6-f71e-ea11-a812-00224801bc51\u0026#34;, \u0026#34;_owningbusinessunit_type\u0026#34;: \u0026#34;businessunits\u0026#34;, 
\u0026#34;address1_postalcode\u0026#34;: \u0026#34;83385\u0026#34;, \u0026#34;budgetamount_base\u0026#34;: 3000.0, \u0026#34;salutation\u0026#34;: null, \u0026#34;address1_latitude\u0026#34;: null, \u0026#34;address1_fax\u0026#34;: null, \u0026#34;sic\u0026#34;: null, \u0026#34;yomilastname\u0026#34;: null, \u0026#34;address1_longitude\u0026#34;: null, \u0026#34;telephone2\u0026#34;: null, \u0026#34;timespentbymeonemailandmeetings\u0026#34;: null, \u0026#34;address1_upszone\u0026#34;: null, \u0026#34;schedulefollowup_qualify\u0026#34;: null, \u0026#34;_slainvokedid_value\u0026#34;: null, \u0026#34;schedulefollowup_prospect\u0026#34;: null, \u0026#34;purchasetimeframe\u0026#34;: 2, \u0026#34;_purchasetimeframe_label\u0026#34;: \u0026#34;Next Quarter\u0026#34;, \u0026#34;_owningteam_value\u0026#34;: null, \u0026#34;industrycode\u0026#34;: null, \u0026#34;_industrycode_label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;budgetstatus\u0026#34;: null, \u0026#34;_budgetstatus_label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;stageid\u0026#34;: null, \u0026#34;_accountid_value\u0026#34;: null, \u0026#34;lastonholdtime\u0026#34;: null, \u0026#34;address2_country\u0026#34;: null, \u0026#34;address1_utcoffset\u0026#34;: null, \u0026#34;onholdtime\u0026#34;: null, \u0026#34;address1_line2\u0026#34;: null, \u0026#34;_createdonbehalfby_value\u0026#34;: null, \u0026#34;telephone3\u0026#34;: null, \u0026#34;fax\u0026#34;: null, \u0026#34;emailaddress2\u0026#34;: null, \u0026#34;_parentcontactid_value\u0026#34;: null, \u0026#34;businesscard\u0026#34;: null, \u0026#34;estimatedamount\u0026#34;: null, \u0026#34;address2_telephone1\u0026#34;: null, \u0026#34;description\u0026#34;: null, \u0026#34;overriddencreatedon\u0026#34;: null, \u0026#34;address2_county\u0026#34;: null, \u0026#34;address2_stateorprovince\u0026#34;: null, \u0026#34;_masterid_value\u0026#34;: null, \u0026#34;_qualifyingopportunityid_value\u0026#34;: null, \u0026#34;address2_city\u0026#34;: null, \u0026#34;lastusedincampaign\u0026#34;: null, \u0026#34;address2_composite\u0026#34;: null, \u0026#34;_originatingcaseid_value\u0026#34;: null, \u0026#34;utcconversiontimezonecode\u0026#34;: null, \u0026#34;purchaseprocess\u0026#34;: 1, \u0026#34;_purchaseprocess_label\u0026#34;: \u0026#34;Committee\u0026#34;, \u0026#34;address2_line3\u0026#34;: null, \u0026#34;emailaddress3\u0026#34;: null, \u0026#34;salesstage\u0026#34;: null, \u0026#34;_salesstage_label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;_relatedobjectid_value\u0026#34;: null, \u0026#34;processid\u0026#34;: null, \u0026#34;pager\u0026#34;: null, \u0026#34;address2_name\u0026#34;: null, \u0026#34;address2_upszone\u0026#34;: null, \u0026#34;_modifiedonbehalfby_value\u0026#34;: null, \u0026#34;mobilephone\u0026#34;: null, \u0026#34;initialcommunication\u0026#34;: null, \u0026#34;_initialcommunication_label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;address2_latitude\u0026#34;: null, \u0026#34;qualificationcomments\u0026#34;: null, \u0026#34;address1_name\u0026#34;: null, \u0026#34;middlename\u0026#34;: null, \u0026#34;address1_telephone2\u0026#34;: null, \u0026#34;address2_utcoffset\u0026#34;: null, \u0026#34;entityimage\u0026#34;: null, \u0026#34;_parentaccountid_value\u0026#34;: null, \u0026#34;address1_telephone1\u0026#34;: null, \u0026#34;address1_county\u0026#34;: null, \u0026#34;address1_line3\u0026#34;: null, \u0026#34;yomicompanyname\u0026#34;: null, \u0026#34;need\u0026#34;: null, \u0026#34;_need_label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;yomimiddlename\u0026#34;: null, 
\u0026#34;entityimage_url\u0026#34;: null, \u0026#34;entityimageid\u0026#34;: null, \u0026#34;address2_telephone3\u0026#34;: null, \u0026#34;_slaid_value\u0026#34;: null, \u0026#34;address2_line1\u0026#34;: null, \u0026#34;address2_postofficebox\u0026#34;: null, \u0026#34;address2_longitude\u0026#34;: null, \u0026#34;address2_telephone2\u0026#34;: null, \u0026#34;address2_fax\u0026#34;: null, \u0026#34;traversedpath\u0026#34;: null, \u0026#34;address2_line2\u0026#34;: null, \u0026#34;businesscardattributes\u0026#34;: null, \u0026#34;address1_postofficebox\u0026#34;: null, \u0026#34;entityimage_timestamp\u0026#34;: null, \u0026#34;_customerid_value\u0026#34;: null, \u0026#34;teamsfollowed\u0026#34;: null, \u0026#34;yomifirstname\u0026#34;: null, \u0026#34;timezoneruleversionnumber\u0026#34;: null, \u0026#34;_contactid_value\u0026#34;: null, \u0026#34;address1_telephone3\u0026#34;: null, \u0026#34;estimatedvalue\u0026#34;: null, \u0026#34;estimatedclosedate\u0026#34;: null, \u0026#34;address2_postalcode\u0026#34;: null, \u0026#34;budgetamount\u0026#34;: 3000.0, \u0026#34;estimatedamount_base\u0026#34;: null } ] A query parameter header - called attributes - containing the list of attributes we wish to return as a comma-separated list e.g. fullname,companyname You can then feed these two bits of information into an Azure Function endpoint, with the complete code for this illustrated below:\nusing System.IO; using System.Threading.Tasks; using Microsoft.AspNetCore.Mvc; using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Extensions.Http; using Microsoft.AspNetCore.Http; using Microsoft.Extensions.Logging; using Newtonsoft.Json; using Newtonsoft.Json.Linq; using System.Linq; using System.Collections.Generic; namespace APIHelper.LogicApps { public static class SelectJSONAttributes { [FunctionName(\u0026#34;SelectJSONAttributes\u0026#34;)] public static async Task\u0026lt;IActionResult\u0026gt; Run( [HttpTrigger(AuthorizationLevel.Anonymous, \u0026#34;post\u0026#34;, Route = null)] HttpRequest req, ILogger log) { log.LogInformation(\u0026#34;C# HTTP trigger function processed a request.\u0026#34;); string attributes = req.Query[\u0026#34;attributes\u0026#34;]; string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); try { JArray json = JArray.Parse(requestBody); attributes = attributes ?? json?.ToString(); if (attributes == null) { return new BadRequestObjectResult(\u0026#34;The \u0026#39;attributes\u0026#39; query parameter is missing from the request.\u0026#34;); } foreach(JObject item in json.Children()) { IList\u0026lt;string\u0026gt; keys = item.Properties().Select(p =\u0026gt; p.Name).ToList(); IList\u0026lt;string\u0026gt; retainKeys = attributes.Split(\u0026#39;,\u0026#39;); IList\u0026lt;string\u0026gt; removeKeys = keys.Except(retainKeys).ToList(); foreach(string attribute in removeKeys) { item.Property(attribute).Remove(); } } return new OkObjectResult(json); } catch(JsonReaderException e) { return new BadRequestObjectResult(e.Message); } catch (JsonSerializationException e) { return new BadRequestObjectResult(e.Message); } } } } To then add this onto your Logic Apps after deploying out to your function app, there are a few additional steps you\u0026rsquo;ll need to do:\nEnable the managed identity for your Logic App. I recommend using the system-assigned option, as it makes the setup process a whole lot easier. One enabled, this managed identity object needs to be granted Contributor permission or higher onto the Azure Function App. 
When setting up the Logic App, ensure that the authentication settings mirror the options indicated below: With all that done, we can test the Logic App and confirm everything works as expected:\nIt\u0026rsquo;s working! 🙂 And thanks to the capabilities built into the Newtonsoft JSON framework, the amount of actual code written is kept to an absolute minimum.\nNow, even though I am blogging about this solution, I must highlight that I am not an overall fan and am frustrated slightly that Logic Apps does not have a way around this. Answers on a postcard if you think I have missed something glaringly obvious but, for now, and until the Common Data Service connector exposes out an appropriate select column option, this is the best solution we have available to us. Ultimately, we should take solace in the fact that Azure Functions are blisteringly easy to get set up and running. Also, for this particular scenario, they provide a streamlined way for us to leverage the low-code capabilities of Logic Apps and dip into more complex, bespoke processing, without necessarily needing to throw Logic Apps out of the window in the first place.\n","date":"2020-05-17T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/using-logic-apps-azure-functions-to-parse-variable-common-data-service-entity-data/","title":"Using Logic Apps \u0026 Azure Functions to Parse Variable Common Data Service Entity Data"},{"content":"Azure Logic Apps are a great tool to look at when addressing simple or complex system integration requirements, and one which experienced Power Platform developers should instantly be familiar with, given they form the backbone of Power Automate flows. As a tool with such a wide array of connectors and features, they can often negate the need to write bespoke code to achieve your particular integration scenario. They are also an excellent candidate to consider when you need to expose out HTTP endpoints for requests to be processed through, thanks to the HTTP request trigger. Incidentally, this is also available as part of Power Automate flows too.\nAs powerful as HTTP trigger requests are, you will need to rely on other Azure solutions to harden the security aspects around this. For example, it is impossible to currently leverage OAuth 2.0 / Azure Active Directory (AAD) authentication within Logic Apps themselves. Instead, we must turn to solutions like Azure API Management (APIM) to meet this requirement. Additional challenges can also arise if you are adopting an Infrastructure as code mindset, and wish to have all of your Azure resources stored within a managed template, that you can then use to automate your solution deployments. Although Azure allows you to export out the definitions of an APIM resource from within the portal, the resulting template is missing several crucial components, meaning that you will have considerable difficulties importing this back in at a future date.\nBearing all this in mind, and some of the challenges I had when dealing with a similar requirement \u0026ldquo;in the field\u0026rdquo;, I thought it might be useful to share aspects of the solution that came out of this. Therefore, the focus for today\u0026rsquo;s blog post will be twofold:\nTo demonstrate how Azure Logic Apps can be leveraged alongside APIM, using OAuth 2.0 / AAD authentication to validate all incoming requests. To break down how to create a complete APIM API endpoint using an Azure Resource Manager template. 
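Before working through each resource, it may help to picture the enclosing file that every snippet in the sections below slots into. What follows is a minimal, assumed skeleton of a Resource Manager template rather than the complete file used for this scenario; each JSON definition shown later in the post simply becomes another element of the resources array (ARM template tooling accepts // comments, which are used here purely as placeholders):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [
    // the Logic App, the APIM service and each APIM sub-resource
    // covered in the sections below sit here, side by side
  ],
  "outputs": {}
}

The dependsOn values you will see in each snippet are what tell Azure the order in which to create everything, so the resources can be listed in any order you like within that array.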
The scenario we will work through assumes that you already have an Azure Logic App resource set up within the same Azure template file and that, also, you plan to deploy this to the same subscription/resource group as your APIM resource. For this example, we will assume that our Logic App resource is called mylogicapp and that it satisfies these conditions already. If this is not the case, then you may need to review the template further and replace instances where the functions resourceGroup().name and subscription().subscriptionId are used with hardcoded values instead. With all this out of the way, let us begin! 🤓\nThe API Management Resource Itself To kick things off, we must first ensure our top-level APIM resource exists in our targeted subscription or that you\u0026rsquo;ve added it already as part of a template. We will opt for the latter in this instance. Also, because we are feeling particularly wallet-conscious these days, we will create this resource using the consumption-based offering and configure the resource only to deploy once our Logic App resource has deployed successfully:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.ApiManagement/service\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-12-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;myapim\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;sku\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Consumption\u0026#34;, \u0026#34;capacity\u0026#34;: 0 }, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Logic/workflows\u0026#39;, \u0026#39;mylogicapp\u0026#39;)]\u0026#34; ], \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;API Management Service Sample\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;publisherEmail\u0026#34;: \u0026#34;johnsmith@domain.com\u0026#34;, \u0026#34;publisherName\u0026#34;: \u0026#34;Company ABC\u0026#34;, \u0026#34;hostnameConfigurations\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Proxy\u0026#34;, \u0026#34;hostName\u0026#34;: \u0026#34;myapim.azure-api.net\u0026#34;, \u0026#34;negotiateClientCertificate\u0026#34;: false, \u0026#34;defaultSslBinding\u0026#34;: true } ], \u0026#34;customProperties\u0026#34;: { \u0026#34;Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls10\u0026#34;: \u0026#34;False\u0026#34;, \u0026#34;Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls11\u0026#34;: \u0026#34;False\u0026#34;, \u0026#34;Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls10\u0026#34;: \u0026#34;False\u0026#34;, \u0026#34;Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls11\u0026#34;: \u0026#34;False\u0026#34;, \u0026#34;Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Ssl30\u0026#34;: \u0026#34;False\u0026#34;, \u0026#34;Microsoft.WindowsAzure.ApiManagement.Gateway.Protocols.Server.Http2\u0026#34;: \u0026#34;False\u0026#34; }, \u0026#34;virtualNetworkType\u0026#34;: \u0026#34;None\u0026#34;, \u0026#34;apiVersionConstraint\u0026#34;: {} } } Note that this will just create an empty APIM resource, with nothing specified underneath it. The sections that follow will build out APIM further with everything we need.\nOAuth 2.0 Configuration Profile The first sub-resource (if we can call it that) will contain the required settings for OAuth 2.0 authentication. This component goes first because it will be a required dependency as part of the API interface itself. 
As part of this, you will need to go through the steps outlined in this article to generate all of the required Application Registrations, URL\u0026rsquo;s, secrets and scopes. With all that out of the way, the RM template definition would resemble the below:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.ApiManagement/service/authorizationServers\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-12-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;myapim/my-aad\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service\u0026#39;, \u0026#39;myapim\u0026#39;)]\u0026#34; ], \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;OAuth 2.0 Configuration Profile\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;My AAD\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;OAuth2 authorization service for Azure Active Directory (AAD) tenant.\u0026#34;, \u0026#34;clientRegistrationEndpoint\u0026#34;: \u0026#34;http://localhost\u0026#34;, \u0026#34;authorizationEndpoint\u0026#34;: \u0026#34;https://login.microsoftonline.com/c77948f0-6777-43af-8daf-6b36a1911ee0/oauth2/v2.0/authorize\u0026#34;, \u0026#34;authorizationMethods\u0026#34;: [ \u0026#34;GET\u0026#34;, \u0026#34;POST\u0026#34; ], \u0026#34;clientAuthenticationMethod\u0026#34;: [ \u0026#34;Body\u0026#34; ], \u0026#34;tokenBodyParameters\u0026#34;: [], \u0026#34;tokenEndpoint\u0026#34;: \u0026#34;https://login.microsoftonline.com/c77948f0-6777-43af-8daf-6b36a1911ee0/oauth2/v2.0/token\u0026#34;, \u0026#34;supportState\u0026#34;: false, \u0026#34;defaultScope\u0026#34;: \u0026#34;api://fb10fac9-3aa4-41e3-be2e-5fad73beca13/API.AllOperations\u0026#34;, \u0026#34;grantTypes\u0026#34;: [ \u0026#34;clientCredentials\u0026#34; ], \u0026#34;bearerTokenSendingMethods\u0026#34;: [ \u0026#34;authorizationHeader\u0026#34; ], \u0026#34;clientId\u0026#34;: \u0026#34;7949079d-082d-4d37-bc71-898eafbd50c5\u0026#34;, \u0026#34;clientSecret\u0026#34;: \u0026#34;myclientsecret\u0026#34; } } The main properties you will have to update include:\nauthorizationEndpoint clientRegistrationEndpoint (where applicable and depending on what your app is doing) tokenEndpoint grantTypes (if you don\u0026rsquo;t wish to use client_credentials) clientId clientSecret defaultScope API Interface With our OAuth 2.0 authentication profile specified, we have all the necessary dependencies in place to build out our single API interface, which we will call My API and which will support https calls only. 
The template below will create this for us:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.ApiManagement/service/apis\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-12-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;myapim/myapi\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service\u0026#39;, \u0026#39;myapim\u0026#39;)]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service/authorizationServers\u0026#39;, \u0026#39;myapim\u0026#39;, \u0026#39;my-aad\u0026#39;)]\u0026#34; ], \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;My API\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;My API\u0026#34;, \u0026#34;subscriptionRequired\u0026#34;: false, \u0026#34;protocols\u0026#34;: [ \u0026#34;https\u0026#34; ], \u0026#34;authenticationSettings\u0026#34;: { \u0026#34;oAuth2\u0026#34;: { \u0026#34;authorizationServerId\u0026#34;: \u0026#34;my-aad\u0026#34;, \u0026#34;scope\u0026#34;: null }, \u0026#34;openid\u0026#34;: null }, \u0026#34;isCurrent\u0026#34;: true, \u0026#34;path\u0026#34;: \u0026#34;\u0026#34; } } Apart from the name values (including the ones specified underneath the dependsOn node), be sure to change the authorizationServerId to match the name of the OAuth 2.0 profile created in the previous step.\nBackend Endpoint APIM works on the principle that you specify the URL details, or backend endpoints, of the services that you wish to interact with via the API itself. When creating this through the Azure Portal, we have the option of selecting either a Logic App or Azure Function app as our targeted resource, with Azure handling all of the appropriate configurations for this on our behalf. To mimic this within an Azure RM template, therefore, we have to crack open a few template functions to build out the appropriate URL\u0026rsquo;s to reference back to the Logic App we wish to expose via APIM. 
With this in mind, the following example demonstrates this in practice:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.ApiManagement/service/backends\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-12-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;myapim/mylogicapp\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service\u0026#39;, \u0026#39;myapim\u0026#39;)]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.Logic/workflows\u0026#39;, \u0026#39;mylogicapp\u0026#39;)]\u0026#34; ], \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;Backend Endpoint for Logic App\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;description\u0026#34;: \u0026#34;Logic App\u0026#34;, \u0026#34;url\u0026#34;: \u0026#34;[substring(listCallbackUrl(resourceId(resourceGroup().name, \u0026#39;Microsoft.Logic/workflows/triggers\u0026#39;, \u0026#39;mylogicapp\u0026#39;, \u0026#39;manual\u0026#39;), \u0026#39;2017-07-01\u0026#39;).basePath,0,add(10,indexOf(listCallbackUrl(resourceId(resourceGroup().name, \u0026#39;Microsoft.Logic/workflows/triggers\u0026#39;, \u0026#39;mylogicapp\u0026#39;, \u0026#39;manual\u0026#39;), \u0026#39;2017-07-01\u0026#39;).basePath,\u0026#39;/triggers/\u0026#39;)))]\u0026#34;, \u0026#34;protocol\u0026#34;: \u0026#34;http\u0026#34;, \u0026#34;resourceId\u0026#34;: \u0026#34;[concat(\u0026#39;https://management.azure.com/subscriptions/\u0026#39;, subscription().subscriptionId, \u0026#39;/resourceGroups/\u0026#39;, resourceGroup().name, \u0026#39;/providers/Microsoft.Logic/workflows/mylogicapp\u0026#39;)]\u0026#34; } } Named Value Next, we must provide a description - a named value - that describes the operation we are performing against the backend API. In this case, our Logic App exposes a PATCH operation that updates a Common Data Service Account record, so we use a name that reflects this:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.ApiManagement/service/namedValues\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-12-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;myapim/nv\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service\u0026#39;, \u0026#39;myapim\u0026#39;)]\u0026#34; ], \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;Named Value for Logic App\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;patch_account\u0026#34;, \u0026#34;tags\u0026#34;: [], \u0026#34;secret\u0026#34;: true, \u0026#34;value\u0026#34;: \u0026#34;/\u0026#34; } } Operation In addition to the Named Value, it is also necessary to specify the actual operation that callers will perform against each endpoint. Azure then references this operation back to the named value resource created earlier. 
In this case, we want our PATCH operations to target /account at the end of our APIM default URL; therefore, we use the following template snippet to achieve this:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.ApiManagement/service/apis/operations\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-12-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;myapim/myapi/patch_account\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service/apis\u0026#39;, \u0026#39;myapim\u0026#39;, \u0026#39;myapi\u0026#39;)]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service\u0026#39;, \u0026#39;myapim\u0026#39;)]\u0026#34; ], \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;Operations for Logic App\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;Accounts\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;PATCH\u0026#34;, \u0026#34;urlTemplate\u0026#34;: \u0026#34;/account\u0026#34;, \u0026#34;templateParameters\u0026#34;: [], \u0026#34;responses\u0026#34;: [] } } API Policy Configuration Within APIM, it is possible to enforce certain policies at the API level, which APIM will then obey for all requests that pass through. For example, you can forcibly add, remove or modify query parameters passed through by the caller. It\u0026rsquo;s worth reading through the whole Microsoft Docs article on this very subject, so you can get a feel for what is possible. For the scenario we are working through, considering that we wish to utilise OAuth 2.0 authentication for our API, it makes sense to enforce this at the API level. Also, when receiving requests back from our backend Logic App endpoint, a variety of different header values are returned to the caller. These are more for diagnosis/debugging purposes and, ideally, should be hidden from sight. 
To achieve all of this within APIM, we build out an XML definition for these policies, which we can then apply as a separate resource for APIM using the following example:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.ApiManagement/service/apis/policies\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-12-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;myapim/myapi/policy\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service/apis\u0026#39;, \u0026#39;myapim\u0026#39;, \u0026#39;myapi\u0026#39;)]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service\u0026#39;, \u0026#39;myapim\u0026#39;)]\u0026#34; ], \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;Policy configuration for APIM\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;\u0026lt;!--\\r\\n IMPORTANT:\\r\\n - Policy elements can appear only within the \u0026lt;inbound\u0026gt;, \u0026lt;outbound\u0026gt;, \u0026lt;backend\u0026gt; section elements.\\r\\n - To apply a policy to the incoming request (before it is forwarded to the backend service), place a corresponding policy element within the \u0026lt;inbound\u0026gt; section element.\\r\\n - To apply a policy to the outgoing response (before it is sent back to the caller), place a corresponding policy element within the \u0026lt;outbound\u0026gt; section element.\\r\\n - To add a policy, place the cursor at the desired insertion point and select a policy from the sidebar.\\r\\n - To remove a policy, delete the corresponding policy statement from the policy document.\\r\\n - Position the \u0026lt;base\u0026gt; element within a section element to inherit all policies from the corresponding section element in the enclosing scope.\\r\\n - Remove the \u0026lt;base\u0026gt; element to prevent inheriting policies from the corresponding section element in the enclosing scope.\\r\\n - Policies are applied in the order of their appearance, from the top down.\\r\\n - Comments within policy elements are not supported and may disappear. Place your comments between policy elements or at a higher level scope.\\r\\n--\u0026gt;\\r\\n\u0026lt;!--\\r\\n Inbound policies:\\r\\n - Enforce OAuth2 authorization\\r\\n - Remove OAuth2 Authorization Bearer before passing request to Logic App\\r\\n Outbound policies:\\r\\n - Remove all x headers returned by the Logic App\\r\\n--\u0026gt;\\r\\n\u0026lt;policies\u0026gt;\\r\\n \u0026lt;inbound\u0026gt;\\r\\n \u0026lt;validate-jwt header-name=\\\u0026#34;Authorization\\\u0026#34; failed-validation-httpcode=\\\u0026#34;401\\\u0026#34; failed-validation-error-message=\\\u0026#34;Unauthorized. 
Access token is missing or invalid.\\\u0026#34;\u0026gt;\\r\\n \u0026lt;openid-config url=\\\u0026#34;https://login.microsoftonline.com/c77948f0-6777-43af-8daf-6b36a1911ee0/.well-known/openid-configuration \\\u0026#34; /\u0026gt;\\r\\n \u0026lt;required-claims\u0026gt;\\r\\n \u0026lt;claim name=\\\u0026#34;aud\\\u0026#34;\u0026gt;\\r\\n \u0026lt;value\u0026gt;api://fb10fac9-3aa4-41e3-be2e-5fad73beca13\u0026lt;/value\u0026gt;\\r\\n \u0026lt;/claim\u0026gt;\\r\\n \u0026lt;/required-claims\u0026gt;\\r\\n \u0026lt;/validate-jwt\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;Authorization\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;/inbound\u0026gt;\\r\\n \u0026lt;backend\u0026gt;\\r\\n \u0026lt;base /\u0026gt;\\r\\n \u0026lt;/backend\u0026gt;\\r\\n \u0026lt;outbound\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-tracking-id\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-request-id\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-workflow-run-id\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-correlation-id\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-client-tracking-id\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-trigger-history-name\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-execution-location\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-workflow-id\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-workflow-version\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-workflow-name\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-workflow-system-id\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-ratelimit-burst-remaining-workflow-writes\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-ratelimit-remaining-workflow-download-contentsize\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-ratelimit-time-remaining-directapirequests\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-ratelimit-remaining-workflow-upload-contentsize\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header name=\\\u0026#34;x-ms-ratelimit-burst-remaining-workflow-reads\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;/outbound\u0026gt;\\r\\n \u0026lt;on-error\u0026gt;\\r\\n \u0026lt;base /\u0026gt;\\r\\n \u0026lt;/on-error\u0026gt;\\r\\n\u0026lt;/policies\u0026gt;\u0026#34;, \u0026#34;format\u0026#34;: \u0026#34;xml\u0026#34; } } The XML in this example is a little difficult to read, so I\u0026rsquo;ve included it below so you can see what its doing:\n\u0026lt;!-- IMPORTANT: - Policy elements can appear only within the \u0026lt;inbound\u0026gt;, 
\u0026lt;outbound\u0026gt;, \u0026lt;backend\u0026gt; section elements. - To apply a policy to the incoming request (before it is forwarded to the backend service), place a corresponding policy element within the \u0026lt;inbound\u0026gt; section element. - To apply a policy to the outgoing response (before it is sent back to the caller), place a corresponding policy element within the \u0026lt;outbound\u0026gt; section element. - To add a policy, place the cursor at the desired insertion point and select a policy from the sidebar. - To remove a policy, delete the corresponding policy statement from the policy document. - Position the \u0026lt;base\u0026gt; element within a section element to inherit all policies from the corresponding section element in the enclosing scope. - Remove the \u0026lt;base\u0026gt; element to prevent inheriting policies from the corresponding section element in the enclosing scope. - Policies are applied in the order of their appearance, from the top down. - Comments within policy elements are not supported and may disappear. Place your comments between policy elements or at a higher level scope. --\u0026gt; \u0026lt;!-- Inbound policies: - Enforce OAuth2 authorization - Remove OAuth2 Authorization Bearer before passing request to Logic App Outbound policies: - Remove all x headers returned by the Logic App --\u0026gt; \u0026lt;policies\u0026gt; \u0026lt;inbound\u0026gt; \u0026lt;validate-jwt header-name=\u0026#34;Authorization\u0026#34; failed-validation-httpcode=\u0026#34;401\u0026#34; failed-validation-error-message=\u0026#34;Unauthorized. Access token is missing or invalid.\u0026#34;\u0026gt; \u0026lt;openid-config url=\u0026#34;https://login.microsoftonline.com/c77948f0-6777-43af-8daf-6b36a1911ee0/.well-known/openid-configuration\u0026#34; /\u0026gt; \u0026lt;required-claims\u0026gt; \u0026lt;claim name=\u0026#34;aud\u0026#34;\u0026gt; \u0026lt;value\u0026gt;api://fb10fac9-3aa4-41e3-be2e-5fad73beca13\u0026lt;/value\u0026gt; \u0026lt;/claim\u0026gt; \u0026lt;/required-claims\u0026gt; \u0026lt;/validate-jwt\u0026gt; \u0026lt;set-header name=\u0026#34;Authorization\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;/inbound\u0026gt; \u0026lt;backend\u0026gt; \u0026lt;base /\u0026gt; \u0026lt;/backend\u0026gt; \u0026lt;outbound\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-tracking-id\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-request-id\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-workflow-run-id\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-correlation-id\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-client-tracking-id\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-trigger-history-name\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-execution-location\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-workflow-id\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-workflow-version\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-workflow-name\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header 
name=\u0026#34;x-ms-workflow-system-id\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-ratelimit-burst-remaining-workflow-writes\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-ratelimit-remaining-workflow-download-contentsize\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-ratelimit-time-remaining-directapirequests\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-ratelimit-remaining-workflow-upload-contentsize\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;set-header name=\u0026#34;x-ms-ratelimit-burst-remaining-workflow-reads\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;/outbound\u0026gt; \u0026lt;on-error\u0026gt; \u0026lt;base /\u0026gt; \u0026lt;/on-error\u0026gt; \u0026lt;/policies\u0026gt; Signature Property We\u0026rsquo;re on the home stretch now! 😅 To prevent malicious users from connecting to Logic App endpoints and causing all manner of mischief, Microsoft requires that all requests contain a shared access signature within the URL when calling the endpoint directly. If this does not exist, then you are politely and firmly told to go away. APIM is no exception in this regard, so we must ensure to append this value to each backend request made to Logic App, provided that the user authenticates via OAuth 2.0 in the first instance. The signature value can be stored a secret property value within APIM and referenced elsewhere, so this is what we do in this instance:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.ApiManagement/service/properties\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-01-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;myapim/la_sig\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service\u0026#39;, \u0026#39;myapim\u0026#39;)]\u0026#34; ], \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;Signature property for Logic App\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;la_sig\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;[listCallbackURL(concat(resourceId(\u0026#39;Microsoft.Logic/workflows\u0026#39;, \u0026#39;mylogicapp\u0026#39;), \u0026#39;/triggers/manual\u0026#39;), \u0026#39;2017-07-01\u0026#39;).queries.sig]\u0026#34;, \u0026#34;tags\u0026#34;: [], \u0026#34;secret\u0026#34;: true } } Logic App Policy Configuration We\u0026rsquo;ve already created a policy for everything that sits underneath our API, but now we need to create a specific policy that applies to our PATCH endpoint. This policy achieves the function already alluded to in the previous section; namely, in ensuring that APIM appends the shared access signature onto the Logic Apps URL when making connections to its endpoint. 
In this case, we must use some slightly different policy settings to enforce this behaviour, which you can view below:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.ApiManagement/service/apis/operations/policies\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2019-12-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;myapim/myapi/patch_account/policy\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service/apis/operations\u0026#39;, \u0026#39;myapim\u0026#39;, \u0026#39;myapi\u0026#39;, \u0026#39;patch_account\u0026#39;)]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service/apis\u0026#39;, \u0026#39;myapim\u0026#39;, \u0026#39;myapi\u0026#39;)]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service\u0026#39;, \u0026#39;myapim\u0026#39;)]\u0026#34;, \u0026#34;[resourceId(\u0026#39;Microsoft.ApiManagement/service/properties\u0026#39;, \u0026#39;myapim\u0026#39;, \u0026#39;la_sig\u0026#39;)]\u0026#34; ], \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;Policy configuration for Logic App\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;\u0026lt;!--\\r\\n IMPORTANT:\\r\\n - Policy elements can appear only within the \u0026lt;inbound\u0026gt;, \u0026lt;outbound\u0026gt;, \u0026lt;backend\u0026gt; section elements.\\r\\n - To apply a policy to the incoming request (before it is forwarded to the backend service), place a corresponding policy element within the \u0026lt;inbound\u0026gt; section element.\\r\\n - To apply a policy to the outgoing response (before it is sent back to the caller), place a corresponding policy element within the \u0026lt;outbound\u0026gt; section element.\\r\\n - To add a policy, place the cursor at the desired insertion point and select a policy from the sidebar.\\r\\n - To remove a policy, delete the corresponding policy statement from the policy document.\\r\\n - Position the \u0026lt;base\u0026gt; element within a section element to inherit all policies from the corresponding section element in the enclosing scope.\\r\\n - Remove the \u0026lt;base\u0026gt; element to prevent inheriting policies from the corresponding section element in the enclosing scope.\\r\\n - Policies are applied in the order of their appearance, from the top down.\\r\\n - Comments within policy elements are not supported and may disappear. 
Place your comments between policy elements or at a higher level scope.\\r\\n--\u0026gt;\\r\\n\u0026lt;policies\u0026gt;\\r\\n \u0026lt;inbound\u0026gt;\\r\\n \u0026lt;base /\u0026gt;\\r\\n \u0026lt;set-backend-service id=\\\u0026#34;apim-generated-policy\\\u0026#34; backend-id=\\\u0026#34;LogicApp_mylogicapp\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-method id=\\\u0026#34;apim-generated-policy\\\u0026#34;\u0026gt;PATCH\u0026lt;/set-method\u0026gt;\\r\\n \u0026lt;rewrite-uri id=\\\u0026#34;apim-generated-policy\\\u0026#34; template=\\\u0026#34;/manual/paths/invoke/?api-version=2016-06-01\u0026amp;amp;sp=/triggers/manual/run\u0026amp;amp;sv=1.0\u0026amp;amp;sig={{la_sig}}\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;set-header id=\\\u0026#34;apim-generated-policy\\\u0026#34; name=\\\u0026#34;Ocp-Apim-Subscription-Key\\\u0026#34; exists-action=\\\u0026#34;delete\\\u0026#34; /\u0026gt;\\r\\n \u0026lt;/inbound\u0026gt;\\r\\n \u0026lt;backend\u0026gt;\\r\\n \u0026lt;base /\u0026gt;\\r\\n \u0026lt;/backend\u0026gt;\\r\\n \u0026lt;outbound\u0026gt;\\r\\n \u0026lt;base /\u0026gt;\\r\\n \u0026lt;/outbound\u0026gt;\\r\\n \u0026lt;on-error\u0026gt;\\r\\n \u0026lt;base /\u0026gt;\\r\\n \u0026lt;/on-error\u0026gt;\\r\\n\u0026lt;/policies\u0026gt;\u0026#34;, \u0026#34;format\u0026#34;: \u0026#34;xml\u0026#34; } } Again, here\u0026rsquo;s the raw XML of the definition so you can see what\u0026rsquo;s going on:\n\u0026lt;!-- IMPORTANT: - Policy elements can appear only within the \u0026lt;inbound\u0026gt;, \u0026lt;outbound\u0026gt;, \u0026lt;backend\u0026gt; section elements. - To apply a policy to the incoming request (before it is forwarded to the backend service), place a corresponding policy element within the \u0026lt;inbound\u0026gt; section element. - To apply a policy to the outgoing response (before it is sent back to the caller), place a corresponding policy element within the \u0026lt;outbound\u0026gt; section element. - To add a policy, place the cursor at the desired insertion point and select a policy from the sidebar. - To remove a policy, delete the corresponding policy statement from the policy document. - Position the \u0026lt;base\u0026gt; element within a section element to inherit all policies from the corresponding section element in the enclosing scope. - Remove the \u0026lt;base\u0026gt; element to prevent inheriting policies from the corresponding section element in the enclosing scope. - Policies are applied in the order of their appearance, from the top down. - Comments within policy elements are not supported and may disappear. Place your comments between policy elements or at a higher level scope. 
--\u0026gt; \u0026lt;policies\u0026gt; \u0026lt;inbound\u0026gt; \u0026lt;base /\u0026gt; \u0026lt;set-backend-service id=\u0026#34;apim-generated-policy\u0026#34; backend-id=\u0026#34;LogicApp_mylogicapp\u0026#34; /\u0026gt; \u0026lt;set-method id=\u0026#34;apim-generated-policy\u0026#34;\u0026gt;PATCH\u0026lt;/set-method\u0026gt; \u0026lt;rewrite-uri id=\u0026#34;apim-generated-policy\u0026#34; template=\u0026#34;/manual/paths/invoke/?api-version=2016-06-01\u0026amp;amp;sp=/triggers/manual/run\u0026amp;amp;sv=1.0\u0026amp;amp;sig={{la_sig}}\u0026#34; /\u0026gt; \u0026lt;set-header id=\u0026#34;apim-generated-policy\u0026#34; name=\u0026#34;Ocp-Apim-Subscription-Key\u0026#34; exists-action=\u0026#34;delete\u0026#34; /\u0026gt; \u0026lt;/inbound\u0026gt; \u0026lt;backend\u0026gt; \u0026lt;base /\u0026gt; \u0026lt;/backend\u0026gt; \u0026lt;outbound\u0026gt; \u0026lt;base /\u0026gt; \u0026lt;/outbound\u0026gt; \u0026lt;on-error\u0026gt; \u0026lt;base /\u0026gt; \u0026lt;/on-error\u0026gt; \u0026lt;/policies\u0026gt; And with that, our template is complete - deploying all of these resources out will provide us with a fully functioning API endpoint that connects up to our Logic App and which also enforces OAuth 2.0 authentication for all requests - nice!\nGetting your head around building your first Azure API Management resource manager template can be a little challenging. Hopefully, the examples shown in this blog post will help speed you along. If you have any questions or issues relating to this subject, give me a shout 🙂\n","date":"2020-05-10T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/building-resource-manager-templates-for-azure-api-management-logic-apps/","title":"Building Resource Manager Templates for Azure API Management \u0026 Logic Apps"},{"content":"OAuth 2.0 authentication can transform into a very deep rabbit hole if you find yourself stumbling into it for the first time. Perhaps this is for a good reason. As the primary authentication mechanism for the vast majority of cloud services available today, there is an obvious expectation that the security aspects of this protocol remain top-notch at all times. Otherwise, all manner of insidious and treacherous individuals could end up poking around into systems that they have no right to be within. To ensure things remain as secure as possible, there is, therefore, a natural expectation that the setup aspects for legitimate access into systems leveraging OAuth2 be not necessarily a cakewalk.\nBecause OAuth2 is the backbone of Azure Active Directory, which is used to handle authentication into Dynamics 365 online / the Common Data Service, developers will have to cross the Rubicon with OAuth 2.0 at some stage. This action will become even more necessary if you are developing solutions that require a non-interactive login mechanism into the system. 
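Before we dig into the raw HTTP requests involved, it can help to see roughly what this non-interactive (client credentials) flow looks like when wrapped in application code. What follows is purely an illustrative TypeScript sketch of my own - the tenant ID, client ID, secret and instance URL are placeholder values - and it targets the v2.0 token endpoint that the remainder of this post works towards:
const tenantId = "00000000-0000-0000-0000-000000000000"; // placeholder tenant ID
const clientId = "11111111-1111-1111-1111-111111111111"; // placeholder app registration (client) ID
const clientSecret = "myclientsecret"; // placeholder client secret
const instanceUrl = "https://mycrminstance.crm11.dynamics.com";

// Request an access token from the v2.0 token endpoint using the client
// credentials grant; the scope is the instance URL suffixed with /.default.
async function getAccessToken(): Promise<string> {
  const body = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: clientId,
    client_secret: clientSecret,
    scope: `${instanceUrl}/.default`
  });
  const response = await fetch(`https://login.microsoftonline.com/${tenantId}/oauth2/v2.0/token`, {
    method: "POST",
    body
  });
  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }
  const json = await response.json();
  return json.access_token;
}

// Example usage: call the WhoAmI function on the Web API with the generated token.
async function whoAmI(): Promise<void> {
  const token = await getAccessToken();
  const result = await fetch(`${instanceUrl}/api/data/v9.1/WhoAmI`, {
    headers: { Authorization: `Bearer ${token}` }
  });
  console.log(await result.json());
}
With that sketch in mind, let us now look at the underlying requests themselves.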
Traditionally, developers would have constructed a request similar to the below to generate an access token, which acts as the \u0026ldquo;key\u0026rdquo; into your particular service of choice; in this case, a Dynamics 365 online instance:\nRequest\nPOST https://login.microsoftonline.com/f38b4010-a9e0-40a8-af5e-626cfb3d85be/oauth2/token HTTP/1.1 --Headers removed for brevity grant_type=client_credentials \u0026amp;client_id=a05645a6-aa66-4441-b243-525563c1df2d \u0026amp;client_secret=myclientsecret \u0026amp;resource=https%3A%2F%2Fmycrminstance.crm11.dynamics.com%2F Response\nHTTP/1.1 200 OK --Headers removed for brevity { \u0026#34;token_type\u0026#34;:\u0026#34;Bearer\u0026#34;, \u0026#34;expires_in\u0026#34;:\u0026#34;3599\u0026#34;, \u0026#34;ext_expires_in\u0026#34;:\u0026#34;3599\u0026#34;, \u0026#34;expires_on\u0026#34;:\u0026#34;1588493859\u0026#34;, \u0026#34;not_before\u0026#34;:\u0026#34;1588489959\u0026#34;, \u0026#34;resource\u0026#34;:\u0026#34;https://mycrminstance.crm11.dynamics.com/\u0026#34;, \u0026#34;access_token\u0026#34;:\u0026#34;myaccesstoken\u0026#34; } With our \u0026ldquo;key\u0026rdquo; in hand, we can then walk up and open the door into Dynamics 365 using a request similar to the below, to return details of the user making the request. The key portion here is the Authorization header value, which must contain the access_token generated in the previous step and be supplied in the format Bearer \u0026lt;access_token\u0026gt;:\nRequest\nGET https://mycrminstance.crm11.dynamics.com/api/data/v9.1/WhoAmI HTTP/1.1 Authorization: Bearer myaccesstoken --Other headers removed for brevity Response\nHTTP/1.1 200 OK --Headers removed for brevity { \u0026#34;@odata.context\u0026#34;:\u0026#34;https://mycrminstance.crm11.dynamics.com/api/data/v9.1/$metadata#Microsoft.Dynamics.CRM.WhoAmIResponse\u0026#34;, \u0026#34;BusinessUnitId\u0026#34;:\u0026#34;9941c9c6-f71e-ea11-a812-00224801bc51\u0026#34;, \u0026#34;UserId\u0026#34;:\u0026#34;395439d2-d98a-ea11-a811-000d3a7fedbe\u0026#34;, \u0026#34;OrganizationId\u0026#34;:\u0026#34;4827d81f-172e-4d9d-b9b2-6db4e7c12490\u0026#34; } This request all works fine and dandy, as you can see. The main issue with it, though, is that it is currently utilising version 1 of the OAuth 2.0 token endpoint that Microsoft exposes to developers for use. This is not so much of a problem as things stand today, as noted by this convenient comparison post on the subject - indeed, depending on your scenario, it may be impossible for you to migrate currently, as the version 2 endpoints are missing some features. However, I would argue that the writing is on the wall and that the eventual deprecation of this endpoint is more than likely. For new projects, if the v2.0 endpoint meets your requirements, you should use it wherever possible.\nWith all this laid out, we now get to the crux of the matter, which relates to a recent challenge I had - how to get web requests targeting Dynamics 365 moved across to start using the new v2 endpoints?
Unfortunately, simply changing the URL to https://login.microsoftonline.com/f38b4010-a9e0-40a8-af5e-626cfb3d85be/oauth2/v2.0/token comes back with the following error:\n{ \u0026#34;error\u0026#34;:\u0026#34;invalid_request\u0026#34;, \u0026#34;error_description\u0026#34;:\u0026#34;AADSTS901002: The \u0026#39;resource\u0026#39; request parameter is not supported.\\r\\nTrace ID: 506b590a-7c90-4638-8a39-ff35cd9d1800\\r\\nCorrelation ID: 1eb1b320-7802-4983-a46d-e6d20c6ede5e\\r\\nTimestamp: 2020-05-03 07:38:44Z\u0026#34;, \u0026#34;error_codes\u0026#34;:[ 901002 ], \u0026#34;timestamp\u0026#34;:\u0026#34;2020-05-03 07:38:44Z\u0026#34;, \u0026#34;trace_id\u0026#34;:\u0026#34;506b590a-7c90-4638-8a39-ff35cd9d1800\u0026#34;, \u0026#34;correlation_id\u0026#34;:\u0026#34;1eb1b320-7802-4983-a46d-e6d20c6ede5e\u0026#34; } Obliging the error message, by removing the resource key/value pair from our body request, doesn\u0026rsquo;t give us much joy as well, as we get a new error instead:\n\u0026ldquo;error_description\u0026rdquo;: \u0026ldquo;AADSTS90014: The required field \u0026lsquo;scope\u0026rsquo; is missing from the credential. Ensure that you have all the necessary parameters for the login request.\\r\\nTrace ID: afeacc53-92b2-430b-a8a3-bf69af5b1800\\r\\nCorrelation ID: a6b2a96d-d7d6-4929-8402-5a9da9a27335\\r\\nTimestamp: 2020-05-03 07:42:54Z\u0026rdquo;\nA quick read of the documentation tells us that we must supply a scope value that contains the URL of the resource we are trying to access, appended by .default. So our working request must resemble the below:\nPOST https://login.microsoftonline.com/f38b4010-a9e0-40a8-af5e-626cfb3d85be/oauth2/v2.0/token HTTP/1.1 --Headers removed for brevity grant_type=client_credentials \u0026amp;client_id=a05645a6-aa66-4441-b243-525563c1df2d \u0026amp;client_secret=myclientsecret \u0026amp;scope=https%3A%2F%2Fmycrminstance.crm11.dynamics.com%2F.default With this, you will get a token back successfully, and we can now proceed to use this to authenticate into Dynamics 365 / the Common Data Service 🙂\nGetting your head around the OAuth 2.0 \u0026ldquo;Flow\u0026rdquo; for Azure Active Directory can be a challenge, at the best of times. Even if you find yourself equipped with advanced knowledge on the subject, you can expect to spend some time banging your head against the wall, as you attempt to get your access tokens generated for the first time. And, I can also anticipate there will be future issues arising as people look to migrate their applications to use the new V2 endpoints. As this article has hopefully demonstrated, moving across your Dynamics 365 / Common Data Service applications is pretty easy and possible to do today. So what are you waiting for? 😉🤣\n","date":"2020-05-03T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/generating-oauth2-v2-0-endpoint-tokens-for-dynamics-365-the-common-data-service/","title":"Generating OAuth2 V2.0 Endpoint Tokens for Dynamics 365 / the Common Data Service"},{"content":"Activity entities within the Common Data Service / Dynamics 365 have traditionally been\u0026hellip;problematic at the best of times. While an entirely useful construct when working from a functional standpoint in the application, their backend structure can cause some challenges when evaluating more technical requirements. A lot of this comes down to their unique characteristics. 
For example, all default and custom activity records have a relationship behind the scenes to the activitypointer entity which, in effect, acts as a common table within the backend SQL Server database to record all activity records created in the system. An additional complication also arises when you start associating activities to other, related records in the system. As an example, consider the Appointment entity\u0026rsquo;s Required Attendees and Optional Attendees fields. Users can populate these with potentially many different types of records in the system - a Contact, Account or even a custom entity. Attempting, therefore, to work programmatically with these fields can introduce some additional complexity and make it more challenging to achieve your particular requirement.\nA recent example I was working with highlights well some of the difficulties Activity entities can bring to the table and, in this case, the issue involves the Task entity specifically. I was building out a canvas Power App to handle the creation and assignment of tasks into CDS. The first of these was pretty straightforward - I simply built out a collection of Task records within a specific area of the application and then patched each of these iteratively into the Common Data Service database. In essence, the formula I used resembled the below, which, in this instance, would create a single Task record within the database:\nPatch(Tasks, Defaults(Tasks), {subject: \u0026#34;My New Task\u0026#34;, prioritycode: \u0026#39;Priority (Tasks)\u0026#39;.Normal})\nI\u0026rsquo;m using the logical names of each field, more out of force of habit than anything else. The following formula would also be valid and achieve the same result:\nPatch(Tasks, Defaults(Tasks), {Subject: \u0026#34;My New Task\u0026#34;, Priority: \u0026#39;Priority (Tasks)\u0026#39;.Normal});\nSo far, so good. The next stage was to determine whether I could also assign the task on creation to another user. Say, for example, I wanted to assign a Task to me upon record creation:\nPatch(Tasks, Defaults(Tasks), {subject: \u0026#34;My New Task\u0026#34;, prioritycode: \u0026#39;Priority (Tasks)\u0026#39;.Normal, ownerid: LookUp(Users, fullname = \u0026#34;Joe Griffin\u0026#34;)})\nIn this circumstance, we start to hit a problem - for some reason, the Common Data Service connector cannot write to this field and instead produces the following error:\nWhat\u0026rsquo;s most frustrating about this is that this issue seems to have crept in as part of some recent updates to the Common Data Service connector within Power Apps. The benefits of this update have been immense, in allowing us to work more effectively with Option Set fields and polymorphic fields, but it seems to have broken this specific field in the process.\nTo find a solution, so we can successfully assign Tasks to our desired Owner from within the canvas app, we must instead turn to Microsoft Power Automate flows. We have had the longstanding capability to trigger flows at specific action points, so this solution - while not necessarily being the \u0026ldquo;nicest\u0026rdquo; one at our disposal - can be leveraged to help address the requirement.
We start by building a flow that resembles the below:\nWe use the Power Apps trigger action and, from there, prompt for two specific pieces of information - the ID of the Task record to update and the ID of the User we wish to reassign the record to. We then set the Owner Type field - in this case, to systemusers. In this specific scenario, we only expect Tasks ever to be assigned to a User record, not a Team. You could adjust the Flow to accept an additional parameter, which you can then evaluate to set the Owner Type field to the correct value.\nWith the flow built out, we can then add it into our canvas app and use the following formula to perform the reassignment, where AssignTaskstoCorrectOwner is the name of our Power Automate flow and the parameters fed to the flow contain the generated activityid from the previous step and the ID of the user to re-assign the record to:\nClearCollect(newTask, Patch(Tasks, Defaults(Tasks), {subject: \u0026#34;My New Task\u0026#34;, prioritycode: \u0026#39;Priority (Tasks)\u0026#39;.Normal})); ForAll(newTask, AssignTaskstoCorrectOwner.Run(activityid, userID));\nIn this specific example, we are only patching a single Task record. To patch multiple records within a collection called myTasks, you would adjust the first line of the above code as follows:\nClearCollect(newTask, ForAll(myTasks, Patch(Tasks, Defaults(Tasks), {subject: \u0026#34;My New Task\u0026#34;, prioritycode: \u0026#39;Priority (Tasks)\u0026#39;.Normal})));\nNow, Power Apps will trigger the above flow each time you create a Task record, to ensure you can reassign the record to its correct Owner 🙂\nIt\u0026rsquo;s a little bit annoying that something that used to work without issue now becomes something that we have to find a workaround to resolve. However, we shouldn\u0026rsquo;t complain really. The improvements made to the Common Data Service connector have been immensely helpful in other areas, allowing non-technical users to work more efficiently with complex data types that have long plagued seasoned Dynamics CRM / 365 developers. It\u0026rsquo;s just a relief that Power Automate can step in to fill the gap and achieve a solution that not only works but is relatively simple to implement too.\n","date":"2020-04-26T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/changing-common-data-service-task-entity-ownership-in-a-canvas-power-app/","title":"Changing Common Data Service Task Entity Ownership in a Canvas Power App"},{"content":"Welcome to post number nine in my series focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. In last week\u0026rsquo;s post, we took a look at how JavaScript can be used to extend model-driven app forms. Client-side scripting is not the only potential usage case to trigger custom logic on a form or even override how default controls behave. Developers can implement Power Apps Component Framework (PCF) controls to completely alter how fields, views or other form-based objects operate. And, because they leverage modern web tools/languages, it is easy peasy to get a PCF control deployed out into the application.
This feature is the focus of the next exam area we will look at in today\u0026rsquo;s post, with the expectation that candidates must demonstrate knowledge of the following subjects:\nCreate a PowerApps Component Framework (PCF) component\ninitialize a new PCF component configure a PCF component manifest implement the component interfaces package, deploy, and consume the component use Web API device capabilities and other component framework services The development technologies involved as part of PCF components differ significantly from the areas that traditional Dynamics CRM developers will likely know. Therefore, let\u0026rsquo;s jump in to review this in further detail and, similar to last week\u0026rsquo;s post, focus on demonstrating how to deploy a simple PCF control out into the platform.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam. I would also recommend that you have a good understanding of JavaScript, HTML/CSS, PowerShell, and working with Visual Studio before you attempt any of the examples provided in this post.\nPCF Controls Overview Stepping back into the world of Dynamics CRM for a few seconds, and we can see that our options for modifying the out of the box forms (now a central component as part of model-driven apps) are somewhat limited. Although developers have a high degree of control over the general layout of a form, our ability to modify how a text field is displayed remains virtually non-existent. Instead, developers would have to revert to building out HTML Web Resource files and then embedding these within a form instead. No doubt, alongside this, there would be a whole heap of JavaScript interacting with the applications Web API to provide the illusion of a custom interface. And while it is still possible to implement solutions of this nature, it\u0026rsquo;s interesting to note that this entire subject does not even get a single mention as part of the MB-400 exam specification. I believe this should tell you all you need to know and also the potential direction for this feature in future.\nTo help address the traditional needs of Web Resources and, additionally, allow us to overhaul the appearance within model-driven Power Apps completely, Power Apps Component Framework (PCF) controls are now here and ready to be used. They were introduced relatively recently and are currently in general availability for model-driven Power Apps. Because they take advantage of a whole heap of existing tools, developers can quickly bundle together re-usable custom components and deploy them multiple times across a model-driven app. These can not only alter the behaviour of the elements described earlier (fields \u0026amp; views), but also support additional capabilities, such as the ability to access the applications Web API. Some of the advantages they have over traditional Web Resources include:\nThe ability to bundle all aspects of your project together into a single solution file. Because the application renders custom controls at the same time as the form itself, users get a completely seamless experience when using a model-driven app. They are developed using modern web technologies, such as NPM, Node.js and TypeScript. 
(Eventual) support for all types of Power Apps, both model and canvas. Now, PCF controls are great but are not necessarily the most accessible tool to implement. Developers should evaluate the requirements of the overall solution and, where possible, guide an organisation towards introducing a canvas Power App instead, if that meets the requirements. To best illustrate how to make this decision, let\u0026rsquo;s consider the following scenarios:\nThe organisation requires a highly tailored area within the Account form to create new Contacts that will then be related to the parent record: Given there are numerous controls to implement, an embedded canvas app is the most straightforward solution here. You\u0026rsquo;re managing a multi-lingual instance and need to ensure that all controls display in the user\u0026rsquo;s chosen language: Although this is possible to do via a canvas app, a PCF control is a more logical approach, given it can hook into the application\u0026rsquo;s localisation settings. The organisation has an existing Node.js/Angular web component on their website that they wish to replicate across into Dynamics 365: In this situation, a PCF control makes logical sense, as we can straightforwardly port this across to the application with minimal effort. Your solution needs to immediately trigger and return feedback from actions triggered via Power Automate flows: As it stands, only canvas apps support this capability. A good Dynamics 365 / Power Platform developer can build out a PCF control without issue; the sign of an excellent developer is identifying the appropriate usage cases for them, and not necessarily resorting to them as a first-preference solution, if other options can fill the gap.\nTechnology Areas Involved PCF controls are an anomaly, to a degree, when you start looking at the technology areas that make them up. I would forgive you for assuming that they rely solely on .NET focused programming languages. Instead, they utilise some more recently developed languages targeted towards rapid web development. This circumstance can be a potential blocker when learning about PCF controls for the first time, but it is a hump that is worth surmounting. The below list covers all of the languages and tools that help bring PCF controls together. It is impossible to provide a detailed overview of the inner workings of each of these as part of a single blog post, so I encourage you to perform separate revision in these areas; what follows, therefore, is a basic overview:\nHTML: HyperText Markup Language (HTML) makes up everything to do with the web - including the very page you are reading now. It provides a structured, open standard that bears some similarities to eXtensible Markup Language (XML). It will typically make use of other components, such as CSS or JavaScript, to provide enhanced styling or functional capabilities to an individual web page. From a PCF perspective, you would reference and work with common HTML constructs, such as divs, inputs and options, when building out your components. CSS: Whereas HTML can best be thought of as the \u0026ldquo;nuts and bolts\u0026rdquo; of the internet, Cascading Style Sheets (CSS) is the bit that makes it all look pretty and beautiful. Conforming to a JavaScript Object Notation (JSON)-like structure, it allows you to apply a variety of different style rules to individual components that get loaded onto a webpage. For example, you can define whether a button changes colour when a user hovers their mouse over a control.
CSS is used within PCF controls to help style and make your controls \u0026ldquo;pretty\u0026rdquo; for users within the application and may be necessary depending on your requirements. For example, you might need to ensure that your component mirrors an organisation\u0026rsquo;s branding guidelines. TypeScript: TypeScript is an open-source language developed by Microsoft, with a lot of similarities to JavaScript. The main benefit it provides, hinted at by its name, is that it is a strongly typed language. This means it has full IntelliSense support within Visual Studio/Visual Studio Code, allowing you to identify and fix issues more quickly. From a compiler standpoint, all TypeScript code ultimately compiles down to, and executes as, JavaScript on a webpage. All of the logic and core functionality of a PCF control will be contained within one or several different TypeScript files. NPM: Node Package Manager (NPM) is used within PCF controls to retrieve common, pre-built libraries/packages of functionality so that you do not necessarily have to reinvent the wheel when developing your code component for the first time. For example, rather than building a calendar control from scratch, you could instead use FullCalendar by downloading its appropriate NPM packages and then referencing the components you need within TypeScript. A traditional Dynamics CRM Developer will perhaps be most familiar with the first two languages on this list. It is, therefore, essential to spend some time learning about TypeScript and NPM, to ensure that you can effectively build out a practical PCF component. Thankfully, TypeScript is very similar to JavaScript - another language that developers will need to know well for this exam - so I believe developers of this inclination will have little difficulty making the jump across to TypeScript.\nDevelopment Pre-Requisites Before you can start writing a single line of code, you need to ensure you have ticked a few boxes on your machine:\nYou must have Node.js installed on your computer, which includes NPM as standard. During installation, be sure to tick the option to download additional tools, one of which installs Visual Studio 2017 Build Tools. After this has installed, be sure to go into the Visual Studio 2017 installer and add on the individual component NuGet targets \u0026amp; Build Tasks, an additional pre-requisite. You must also have the .NET Framework 4.6.2 Developer Pack installed on your machine. Finally, a specific CLI for developing PCF controls must be installed. The CLI will provide a range of commands to build, test and deploy your components as you build them out. Further details on all these setup steps can be found on the Microsoft Docs website.\nYou must also make an important decision regarding which integrated development environment (IDE) you wish to use to develop your control - either Visual Studio 2017 (or later) or Visual Studio Code. Although the latter option does require some tinkering, I recommend it over traditional Visual Studio if at all possible, as it provides a far more streamlined development experience. Also, note that you will need to install the .NET Core 3.1 SDK if using Visual Studio Code.\nOnce you have installed all pre-requisite components, you can then initialise and create your first PCF component project. You can follow the steps outlined in this article to get started.
Alternatively, check out the video below to see this, and all other steps mentioned so far in this section, in action:\nReviewing the PCF Component Manifest Just as solutions have high-level properties/metadata that summarises your particular piece of functionality, a PCF component has a manifest which defines general properties relating to your component. Comprised of a single file, called ControlManifest.Input.xml, the manifest looks like this when creating a new component targeting a field:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34; ?\u0026gt; \u0026lt;manifest\u0026gt; \u0026lt;control namespace=\u0026#34;jjg\u0026#34; constructor=\u0026#34;MB400Sample\u0026#34; version=\u0026#34;0.0.1\u0026#34; display-name-key=\u0026#34;MB400Sample\u0026#34; description-key=\u0026#34;MB400Sample description\u0026#34; control-type=\u0026#34;standard\u0026#34;\u0026gt; \u0026lt;!-- property node identifies a specific, configurable piece of data that the control expects from CDS --\u0026gt; \u0026lt;property name=\u0026#34;sampleProperty\u0026#34; display-name-key=\u0026#34;Property_Display_Key\u0026#34; description-key=\u0026#34;Property_Desc_Key\u0026#34; of-type=\u0026#34;SingleLine.Text\u0026#34; usage=\u0026#34;bound\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;!-- Property node\u0026#39;s of-type attribute can be of-type-group attribute. Example: \u0026lt;type-group name=\u0026#34;numbers\u0026#34;\u0026gt; \u0026lt;type\u0026gt;Whole.None\u0026lt;/type\u0026gt; \u0026lt;type\u0026gt;Currency\u0026lt;/type\u0026gt; \u0026lt;type\u0026gt;FP\u0026lt;/type\u0026gt; \u0026lt;type\u0026gt;Decimal\u0026lt;/type\u0026gt; \u0026lt;/type-group\u0026gt; \u0026lt;property name=\u0026#34;sampleProperty\u0026#34; display-name-key=\u0026#34;Property_Display_Key\u0026#34; description-key=\u0026#34;Property_Desc_Key\u0026#34; of-type-group=\u0026#34;numbers\u0026#34; usage=\u0026#34;bound\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; --\u0026gt; \u0026lt;resources\u0026gt; \u0026lt;code path=\u0026#34;index.ts\u0026#34; order=\u0026#34;1\u0026#34;/\u0026gt; \u0026lt;!-- UNCOMMENT TO ADD MORE RESOURCES \u0026lt;css path=\u0026#34;css/MB400Sample.css\u0026#34; order=\u0026#34;1\u0026#34; /\u0026gt; \u0026lt;resx path=\u0026#34;strings/MB400Sample.1033.resx\u0026#34; version=\u0026#34;1.0.0\u0026#34; /\u0026gt; --\u0026gt; \u0026lt;/resources\u0026gt; \u0026lt;!-- UNCOMMENT TO ENABLE THE SPECIFIED API \u0026lt;feature-usage\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.captureAudio\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.captureImage\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.captureVideo\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.getBarcodeValue\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.getCurrentPosition\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Device.pickFile\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;Utility\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;uses-feature name=\u0026#34;WebAPI\u0026#34; required=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;/feature-usage\u0026gt; --\u0026gt; \u0026lt;/control\u0026gt; \u0026lt;/manifest\u0026gt; The properties in control node are specified when creating 
your project for the first time using the pac pcf init command. Typically, the only values in this node you would change are the version, display-name-key and description-key values. The version value should be updated each time you deploy an updated version of your component.\nThe manifest includes some additional nodes, which are worth reviewing in further detail:\nproperty: Here, you define the various input properties that the component relies on and will be ultimately bound to when utilised. You can only tie PCF controls to a limited range of field types within the application, so it\u0026rsquo;s essential to know the limitations relating to this. For example, at the time of writing this post, it is not possible to bind a control to a lookup, Status or Status Reason field. You have additional options here to specify whether a property requires a value and whether the component should utilise a default value if none is specified. A PCF component can have multiple property nodes defined for it. resources: Within this node, you must list all files that the component relies on to function correctly - whether this is additional HTML, CSS, or even other file types, such as images. Only one TypeScript file can be listed here, but you can have as many of the other file types as your component needs to function correctly. feature-usage: Here, you can enable/disable additional features that your component can utilise within a model-driven app. By default, all of these are disabled, so you will need to uncomment the specific ones you need. Note, in particular, that you must explicitly enable the WebApi for use in your component before you can start calling its appropriate list of methods. Just as you need to spend some time updating the properties of your solution each time you release, you should anticipate working in the manifest often. For the exam, having a good awareness of the manifest schema and how to enable/disable various properties within your code component should be sufficient.\nEnd to End Development Cycle: Building, Debugging and Deploying a PCF Control Microsoft provides a great online tutorial that talks you through the steps involved to build your very first PCF Control, so there is no point in me repeating this verbatim. However, I sometimes find the best way to understand a process is to see it in action. With this in mind, check out the video below, where I will guide you through all the steps involved to deploy out your very first PCF control into a model-driven app. The video will also show you how to debug your code component using the Power Apps Component Framework Test Environment:\nWeb API As mentioned earlier, PCF controls can interact with the Dynamics 365 / Common Data Service Web API to perform platform-level operations. Similar to JavaScript, we do not need to worry about authentication when accessing the Web API in this way, but the list of available methods (at the time of writing this post) is significantly smaller.
The current list of available operations is as follows and covers all basic CRUD operations targeting the application:\ncreateRecord deleteRecord retrieveMultipleRecords retrieveRecord updateRecord This state of affairs means that if, for example, you need to perform operations such as an Execute request, you would instead have to resort to a solution that uses client-side scripting.\nComing Soon: PCF Controls for Canvas Apps A final note, and one which Microsoft will (I assume) eventually cover within the MB-400 exam itself, is that PCF controls are supported within canvas Power Apps as well. This capability is currently in public preview but, once released, will allow developers to straightforwardly utilise an existing PCF control within their canvas apps, without needing to rewrite their code. To find out more about how to get started with this feature, check out the following Microsoft Docs article.\nFurther Tools / Resources I\u0026rsquo;ve linked to numerous Microsoft Docs articles in this post surrounding PCF control development, and these should always be your first port of call when getting started on this subject. I would also recommend that you check out the following resources too:\nPCF Gallery: A free community website operated by Guido Preite, this site is an excellent resource for finding free PCF controls that you can utilise or experiment further with. Dynamics Ninja Blog: Ivan Ficko has blogged and delivered many sessions relating to PCF controls, and also has numerous examples that he\u0026rsquo;s created. If you\u0026rsquo;re looking to take the next step with your PCF development, he is your man! 🙂 Todd Baginski\u0026rsquo;s YouTube Channel: Todd has also done videos on how to develop a PCF control and debug your components, which I would recommend watching. PCF controls are an exciting new area within Dynamics 365 and the Power Platform. However, they are also a feature that is constantly changing. Be sure to refer to the latest documentation, as Microsoft releases new capabilities for PCF controls all the time. In next week\u0026rsquo;s post, we round off our discussion on how to extend user interfaces, by looking at how to set up a custom command button within a model-driven Power App.\n","date":"2020-04-19T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-introduction-to-power-apps-component-framework-pcf-controls/","title":"Exam MB-400 Revision Notes: Introduction to Power Apps Component Framework (PCF) Controls"},{"content":"Welcome to the eighth post in my series, focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. For those following the series, apologies for the small hiatus. Last time around, we saw how to leverage Business Process Flows within a model-driven app, as part of focusing on the Configure business process automation area of the exam specification. In today\u0026rsquo;s post, we jump across into our first code-focused exam area, as we review ways in which we can Extend the user experience. There are many ways in which we can use custom code (C#, VB.NET and more) to extend Dynamics 365 and the Power Platform. The most common approach, and the focus for this post, is when we look to enhance a model-driven app form using JavaScript or TypeScript.
This topic assesses the following skill areas within the exam:\nApply business logic using client scripting\nconfigure supporting components create JavaScript or Typescript code register an event handler use the Web API from client scripting Learning how to write code using JavaScript or TypeScript, or even to cover off every single method/function exposed within a model-driven app\u0026rsquo;s Web API, would be impossible as part of an individual blog post. Therefore, we will instead focus on the fundamental aspects unique to Dynamics 365 and the Power Platform, with specific reference to the steps involved in deploying a form function successfully. I, therefore, recommend you have a good general awareness of the fundamental principles behind JavaScript or TypeScript before reading this post any further. And, as with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nForm-Side Scripting: Why and When We Need It In an earlier post in the series, we discussed the usage cases and features available as part of Business Rules. It\u0026rsquo;s important to reference back to this for two reasons. First of all (and did you know), underneath the hood, Business Rules implement many of the form-side scripting features that are available to developers using JavaScript or TypeScript. Therefore, much of the functionality they are capable of achieving - showing/hiding fields, changing their business requirement etc. - is also available to developers writing form scripts. Secondly, you should always fully utilise and exhaust the capabilities of Business Rules, before contemplating writing a single line of code. They provide you with a fully supported and far more straightforward way of accommodating basic requirements relating to form presentation and automating basic tasks.\nAmazing though they are, there will be occasions where a Business Rule is just not going to cut it. Consider the following requirements:\nYou need to dynamically change the display label of a field, based on whether a particular value exists on the current record. The general structure of the form needs to be modified, depending on what type of form is loaded (i.e. a new record form, a read-only form, etc.). You need to execute a Web API query to obtain details from another record in the system. When a user saves a record, you need to perform some additional validation and, if required, prevent the save action from occurring and present an error to the user. For these \u0026ldquo;advanced\u0026rdquo; scenarios, it\u0026rsquo;s impossible to utilise a Business Rule effectively to meet the requirements, meaning we must instead resort to writing custom code. Again, and I cannot stress this enough, your typical development workflow when evaluating what the business is asking for is first to review and confirm, without a shadow of a doubt, that you cannot address the requirement via a Business Rule; once you have done this, you then have my (and indeed Microsoft\u0026rsquo;s) blessing to start typing code 🙂\nWhat is the Web API?
To help with automating key operations when working within the application, and also for communicating into the application from an external system, Microsoft provides us with an OData version 4 compliant endpoint, through which developers can execute a variety of HTTP requests against. As well as exposing key CRUD (Create, Read, Update and Delete) operations, developers can also use the Web API to execute batch operations, impersonate another user or call functions or actions. Developers can use any language of their choice to interact with the Web API when being called externally from the application. An example of a request to create a new Account record, provided courtesy of Microsoft, can be seen below:\nPOST [Organization URI]/api/data/v9.0/accounts HTTP/1.1 Content-Type: application/json; charset=utf-8 OData-MaxVersion: 4.0 OData-Version: 4.0 Accept: application/json { \u0026#34;name\u0026#34;: \u0026#34;Sample Account\u0026#34;, \u0026#34;creditonhold\u0026#34;: false, \u0026#34;address1_latitude\u0026#34;: 47.639583, \u0026#34;description\u0026#34;: \u0026#34;This is the description of the sample account\u0026#34;, \u0026#34;revenue\u0026#34;: 5000000, \u0026#34;accountcategorycode\u0026#34;: 1 } Typically, a developer will use tools such as Postman when building out their sample requests, as this provides some useful options to ease you along.\nWithin the context of developing client-side scripts, Microsoft provides a shorthand mechanism of working with the Web API to carry out everyday functions. Although the methods exposed here are not extensive compared with dealing with the Web API directly, developers do not need to worry about authentication when working with the Web API in this way. Therefore, you should arguably be able to accommodate most requirements using Xrm.WebApi.\nPutting together these types of requests can be tedious and take some time to build each time. Fortunately, there is a great community tool available from Jason Lattimer, called the CRM REST Builder. The tool provides a graphical interface which you can use to build your code snippets each time and then test them within the browser. You can see a screenshot of it below, where I\u0026rsquo;ve built out a sample request to query some Contact records and how the resulting code snippet looks when generated:\nXrm.WebApi.online.retrieveMultipleRecords(\u0026#34;contact\u0026#34;, \u0026#34;?$select=fullname\u0026amp;$filter=fullname ne null\u0026amp;$orderby=fullname asc\u0026#34;).then( function success(results) { for (var i = 0; i \u0026lt; results.entities.length; i++) { var fullname = results.entities[i][\u0026#34;fullname\u0026#34;]; } }, function(error) { Xrm.Utility.alertDialog(error.message); } ); For the exam, having a good awareness of the various operations you can perform against the Web API, the format of requests, how responses back are formatted and, finally, how to write OData queries targeting the Web API endpoint will hold you in good stead.\nexecutionContext: Attention Dynamics CRM Developers! Those with a previous background developing for on-premise Dynamics CRM deployments should take particular note here. In earlier versions of this application, developers would be most familiar working with the various Xrm methods to perform common actions. 
For example, the following form function would previously allow you to change the display labels on a composite address control, using the Xrm.Page.getControl method:\nfunction changeAddressLabels() { Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); } In early 2020, Microsoft announced that the Xrm methods, including the one referenced above, are now deprecated. Also, attempting to use these methods within the new Unified Interface (UI) model-driven apps will likely cause errors. To get around this, developers can now take advantage of the Client API form context object, accessible from within any form. By using this, therefore, we can rewrite the above code to something that will be fully supported moving forward:\nfunction changeAddressLabels(executionContext) { //Get formContext var formContext = executionContext.getFormContext(); //Check to see if the control is on the form and, if so, rename it accordingly. 
if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line1\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line2\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line3\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_city\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_stateorprovince\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_postalcode\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_country\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line1\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line2\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line3\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_city\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_stateorprovince\u0026#34;))\tformContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_postalcode\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_country\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); } Microsoft has not yet announced 
a timeline for eventually removing the Xrm methods permanently from the application. Still, you should not be using them as part of any new projects or work moving forward.\nExposed Event Handlers Understanding the various event handlers we can \u0026ldquo;hook\u0026rdquo; into when developing client-side scripts is crucial. Event handlers summarise a particular action a user carries out against a form - saving a record, changing the value of a field or opening/expanding a form tab. As a user carries out each of these actions, developers can then execute their desired logic within their code. At the time of writing this post, the following event handlers are made available for use:\nOnChange: Each attribute (field) that exists on the form supports this event handler, which triggers as soon as a user changes a field value and clicks on another control on the form. Although it\u0026rsquo;s possible to call a function with your logic directly using the OnChange event, Microsoft advises that you make use of the addOnChange or removeOnChange methods to add or remove form functions respectively. OnLoad: Rather self-explanatory perhaps, but as soon as a form or its data has fully loaded for a user, you can then call function(s) to perform your desired logic. It\u0026rsquo;s worth noting that there are technically two separate OnLoad events - one that fires as soon as the form itself loads and a second once all underlying data loads for the first time or is refreshed/updated. Again, Microsoft provides specific functions, such as formContext.data.addOnLoad(), to allow you to bolt on or remove your logic accordingly. OnSave: Again, no prizes for guessing what this does 🙂 Developers can bolt on their custom logic, using the addOnSave and removeOnSave methods, whenever a user or some custom code attempts to save the record. It\u0026rsquo;s worth thoroughly reading and understanding all of the potential situations under which this can occur, including ones automatically determined by the system\u0026rsquo;s auto-save functionality. A robust capability with this event handler is the ability to prevent the save action from completing when a user violates a particular business condition. PreSearch: This event handler is limited to lookup field controls only, and you can only use this in conjunction with a single method - addCustomFilter - to dynamically alter the list of records a user can select. OnResultOpened: Limited for use within the knowledge management area of Dynamics 365, you can use this event handler to execute functions when a user opens a knowledge article from the search box or via a pop-out action. Similar to all other event handlers, you are provided with a function to add and remove functions to execute. OnSelection: Again, this is a specific knowledge management event handler for when a user selects a knowledge base article, with the appropriate methods to add and remove functions too. PostSearch: Finally, and once again, specifically concerning knowledge management capabilities, you can use this event handler to execute custom logic as soon as results return via a knowledge article search. And - you guessed it - the addOnPostSearch and removeOnPostSearch methods allow you to control when and where your custom functions execute. As you can see, all event handlers support the ability to add/remove one or multiple functions, which the platform will then execute accordingly.
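To make this registration pattern a little more concrete, the snippet below is a minimal, illustrative TypeScript sketch of my own (the field name and function names are entirely hypothetical, and the typings assume the community @types/xrm definitions are installed). It registers an OnChange handler from within a form OnLoad function, using the formContext object discussed earlier:
// Register this function against the form OnLoad event, remembering to tick
// "Pass execution context as first parameter" in the handler properties.
function onFormLoad(executionContext: Xrm.Events.EventContext): void {
  const formContext = executionContext.getFormContext();

  // "name" is a hypothetical field here - attach additional logic to its OnChange event.
  const nameAttribute = formContext.getAttribute("name");
  if (nameAttribute) {
    nameAttribute.addOnChange(onNameChange);
  }
}

function onNameChange(executionContext?: Xrm.Events.EventContext): void {
  if (!executionContext) { return; }
  const formContext = executionContext.getFormContext();
  const nameAttribute = formContext.getAttribute("name");

  // Surface a form notification when the field is cleared, and remove it otherwise.
  if (nameAttribute && !nameAttribute.getValue()) {
    formContext.ui.setFormNotification("The name field should not be left empty.", "WARNING", "name_warning");
  } else {
    formContext.ui.clearFormNotification("name_warning");
  }
}
The same add/remove pattern applies to the other events listed above, such as addOnSave and addOnLoad.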
This should be your preferred mechanism to use at all times.\nDevelopers are free to add as many as 50 event handlers for each event that occurs on a form; however, I would caution against any solution that utilises so much custom code on one form. If you find yourself in this situation, then I\u0026rsquo;d encourage you to instead look at other options, such as a canvas Power App or a Power Apps Component Framework (PCF) control. More on this subject in the next post in this series 🙂\nTo find out more about event handlers and how they work, consult the following Microsoft Docs article.\nDeploying a Form Script So knowing how to write JavaScript form functions and having a good awareness of the various event handlers that are exposed gets you pretty much there and ready to start building out your first form script. However, you need to first understand the importance of Web Resources as part of all this. For a long time now, Web Resources have provided developers with the mechanism to deploy out several different types of custom components - whether they be images, HTML files or, as you might expect, JavaScript files. A typical deployment process for a new JavaScript file will involve the following steps:\nNavigate to your target solution within the Power Apps portal. Select New -\u0026gt; Other -\u0026gt; Web Resource. The New Web Resource window will then load within the \u0026ldquo;classic\u0026rdquo; interface. Provide a useful name and description value for the new Web Resource, and ensure the Type is set to Script (JScript). The Text Editor button should appear. After pressing the Text Editor button, type in or copy/paste your JavaScript into the window and press OK. Save and then publish the Web Resource. Although you have now successfully uploaded your JavaScript file into the application, it will not be doing anything at all currently. We must next navigate to the entity form where we would like it to be triggered from and set up the appropriate event handlers that will cause it to fire. Currently, you must do this from within the \u0026ldquo;classic\u0026rdquo; interface, within the Form Properties dialog window:\nUsing the Add button underneath the Form Libraries section, we then add on the Web Resource uploaded earlier. Next, we select the control/event to which we wish to attach our event handler and then press the Add button to load the Handler Properties dialog window. Here, we must specify several options:\nThe name of the library, i.e. the Web Resource. The name of the function to call. If the function is consuming the execution context, then the Pass execution context as first parameter option must be ticked. If any additional, static parameter values need specifying for the form function, you can also define these here as a comma-separated list. If we want to enforce dependencies between the function and the fields it relies on, these can also be specified here. Doing so will prevent other users from accidentally removing these fields from the form. After publishing the form with all the latest changes, the newly created form function will then start triggering when the appropriate event handler occurs - nice!\nCommon Form Functions It is impossible to go into detail regarding every single client-side function that you can utilise. Instead, what I wanted to do was highlight some of the ones that you may find yourself using most often.
I\u0026rsquo;ve deliberately chosen to exclude any function(s) that can be accomplished via a Business Rule instead, for the reasons I\u0026rsquo;ve already alluded to earlier in this post.\nformContext.data refresh: As well as allowing you to refresh all data currently loaded onto a form, you can also optionally save the current record as part of the same action. For this reason, it is far more versatile and a preferred option when compared with save. formContext.data.entity getEntityReference: Allows you to capture a lookup (array) object, containing details of the currently loaded record. This function is useful if you wish to store details about the current record locally so that you can then populate this as part of a lookup field later on. getId: Returns the Globally Unique Identifier (GUID) value of the currently loaded record. formContext.data.process setActiveProcess: Allows you to change the currently selected Business Process Flow (BPF) to a different one, provided the user has access to it. moveNext / movePrevious: Both of these functions allow you to move the user forwards or backwards on a BPF forcibly. formContext.ui getFormType: Returns a value indicating the type of form the user is currently on. For example, you can determine with this whether a user is creating a record, updating an existing one or viewing a record that exists in a read-only state. setFormNotification: Lets you display an informational, warning or error message to a user. Use of this function is generally preferred as opposed to using alert(), particularly given that it comes with some nice options. formContext.ui.formSelector: Contains three functions - getId, getLabel and navigate - which, when used in conjunction, allow you to change which form is presented to an end-user dynamically. formContext.ui.process setVisible: Allows you to toggle the visibility of a BPF on a form. formContext.ui.tabs setDisplayState: Allows you to toggle whether a tab is shown or collapsed on the form. formContext.ui.sections setVisible: Using this, you can determine whether a form section remains visible to a user or not. Xrm.WebApi retrieveRecord: In situations where you need to validate information on a related record within the application, you can use this function to return details regarding this record, using an OData system query. To do well in the exam, you need to have a broad understanding of all potential form scripting capabilities, so I would urge you to study the complete list of available functions in greater detail and experiment further with their usage.\nDemo: Deploying a Basic JavaScript Form Function In the video below, we\u0026rsquo;ll take a pre-authored JavaScript form function and demonstrate how this can be deployed out and debugged within the application:\nThe sign of an excellent Dynamics 365 / Power Platform developer is when they use JavaScript / TypeScript form functions appropriately, after exhausting all other available options, such as Business Rules or Power Automate. Take care not to always resort to a code-first solution when building on top of Dynamics 365 or the Power Platform.
In next week\u0026rsquo;s post, we\u0026rsquo;ll see how we can use the Power Apps Component Framework (PCF) to further extend our model-driven app forms, in ways that were previously impossible to do in Dynamics CRM.\n","date":"2020-04-12T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-implementing-client-side-scripting-on-model-driven-power-apps/","title":"Exam MB-400 Revision Notes: Implementing Client-Side Scripting on Model Driven Power Apps"},{"content":"Have you heard of the Microsoft.Xrm.Data.PowerShell module? If you have aspirations involving or are doing work with Dynamics 365/the Common Data Service and DevOps, then the functionality that this provides could speed you along. Utilising the capabilities found within the Dynamics 365 SDK, the module allows you to:\nRetrieve details regarding your Dynamics 365 organisation and the metadata that exists within. Add new language packs to your instance. Enable new email addresses provisioned onto the instance. Create, update, delete or retrieve records. In short, if you need to automate any boring or repetitive tasks, there is bound to be a set of cmdlets within this module that will meet your requirements.\nOne particularly useful feature of the module is its ability to work with solutions and to export out entire solution files in either an unmanaged or managed state. When used in conjunction with the Adoxio.Dynamics.DevOps module and, specifically, the Expand-CrmSolution cmdlet, it\u0026rsquo;s possible to compile together a script similar to the below to export out multiple solutions from a Dynamics 365 environment, and then extract out the raw definition files within the .zip file. The script outlined below can then be easily called as part of an Azure DevOps pipeline or similar, or configured to execute on a build agent of your choosing:\nparam ( [Parameter(Mandatory=$True)][ValidateNotNull()][array]$solutionsName, [string]$username = \u0026#34;johndoe@test.com\u0026#34;, [string]$password = \u0026#34;password123\u0026#34;, [string]$exportLocation = \u0026#34;C:\\DEV\\projectname\u0026#34;, [string]$instance = \u0026#34;https://org1234.crm3.dynamics.com/\u0026#34;, [string]$region = \u0026#34;CAN\u0026#34;, [string]$type = \u0026#34;Office365\u0026#34; ) # Installs package provider and required PS modules, then imports to current session Install-PackageProvider -Name NuGet -Force -Scope CurrentUser Install-Module -Name Microsoft.Xrm.Data.Powershell -Force -Verbose -Scope CurrentUser Install-Module -Name Adoxio.Dynamics.DevOps -Force -Verbose -Scope CurrentUser Import-Module Microsoft.Xrm.Data.Powershell, Adoxio.Dynamics.DevOps #Install SDK Set-Location $exportLocation $sourceNugetExe = \u0026#34;https://dist.nuget.org/win-x86-commandline/latest/nuget.exe\u0026#34; $targetNugetExe = \u0026#34;.\\nuget.exe\u0026#34; Remove-Item .\\Tools -Force -Recurse -ErrorAction Ignore Invoke-WebRequest $sourceNugetExe -OutFile $targetNugetExe Set-Alias nuget $targetNugetExe -Scope Global -Verbose ## ##Download Plugin Registration Tool ## ./nuget install Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool -O .\\Tools mkdir .\\Tools\\PluginRegistration $prtFolder = Get-ChildItem ./Tools | Where-Object {$_.Name -match \u0026#39;Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool.\u0026#39;} Move-Item .\\Tools\\$prtFolder\\tools\\*.* .\\Tools\\PluginRegistration Remove-Item .\\Tools\\$prtFolder -Force -Recurse ## ##Download CoreTools ## ./nuget install Microsoft.CrmSdk.CoreTools -O .\\Tools mkdir .\\Tools\\CoreTools 
$coreToolsFolder = Get-ChildItem ./Tools | Where-Object {$_.Name -match \u0026#39;Microsoft.CrmSdk.CoreTools.\u0026#39;} Move-Item .\\Tools\\$coreToolsFolder\\content\\bin\\coretools\\*.* .\\Tools\\CoreTools Remove-Item .\\Tools\\$coreToolsFolder -Force -Recurse ## ##Download Configuration Migration ## ./nuget install Microsoft.CrmSdk.XrmTooling.ConfigurationMigration.Wpf -O .\\Tools mkdir .\\Tools\\ConfigurationMigration $configMigFolder = Get-ChildItem ./Tools | Where-Object {$_.Name -match \u0026#39;Microsoft.CrmSdk.XrmTooling.ConfigurationMigration.Wpf.\u0026#39;} Move-Item .\\Tools\\$configMigFolder\\tools\\*.* .\\Tools\\ConfigurationMigration Remove-Item .\\Tools\\$configMigFolder -Force -Recurse ## ##Download Package Deployer ## ./nuget install Microsoft.CrmSdk.XrmTooling.PackageDeployment.WPF -O .\\Tools mkdir .\\Tools\\PackageDeployment $pdFolder = Get-ChildItem ./Tools | Where-Object {$_.Name -match \u0026#39;Microsoft.CrmSdk.XrmTooling.PackageDeployment.Wpf.\u0026#39;} Move-Item .\\Tools\\$pdFolder\\tools\\*.* .\\Tools\\PackageDeployment Remove-Item .\\Tools\\$pdFolder -Force -Recurse ## ##Download Package Deployer PowerShell module ## ./nuget install Microsoft.CrmSdk.XrmTooling.PackageDeployment.PowerShell -O .\\Tools $pdPoshFolder = Get-ChildItem ./Tools | Where-Object {$_.Name -match \u0026#39;Microsoft.CrmSdk.XrmTooling.PackageDeployment.PowerShell.\u0026#39;} Move-Item .\\Tools\\$pdPoshFolder\\tools\\*.* .\\Tools\\PackageDeployment.PowerShell Remove-Item .\\Tools\\$pdPoshFolder -Force -Recurse ## ##Remove NuGet.exe ## Remove-Item nuget.exe #Set environment variable for SDK path [Environment]::SetEnvironmentVariable(\u0026#34;CRM_SDK_PATH\u0026#34;, ($exportLocation + \u0026#34;\\Tools\u0026#34;), \u0026#34;User\u0026#34;) # Connect to Dynamics 365/CDS $securePassword = ConvertTo-SecureString $password -AsPlainText -Force $credentials = New-Object System.Management.Automation.PSCredential ($username, $securePassword) $CRMConn = Connect-CrmOnline -credential $credentials -ServerUrl $instance #Iterate through Solution name list, export and then expand each solution in an unmanaged state foreach($solution in $solutionsName) { #Export unmanaged solution Write-Host \u0026#34;Exporting $solution...\u0026#34; Export-CrmSolution $solution $exportLocation -SolutionZipFileName $solution\u0026#34;_Unmanaged.zip\u0026#34; -conn $CRMConn Write-Host \u0026#34;$solution exported successfully!\u0026#34; #Unpack solution Write-Host \u0026#34;Expanding $solution...\u0026#34; Expand-CrmSolution -ZipFile $solution\u0026#34;_Unmanaged.zip\u0026#34; -PackageType Unmanaged -Folder $solution Write-Host \u0026#34;$solution expanded successfully!\u0026#34; } Credit for helping to compile the above script should go to Nick Doelman, who has published an excellent blog post showing you how to achieve a simple application lifecycle management (ALM) process involving Dynamics 365 solutions. I\u0026rsquo;d highly recommend reading this blog post and subscribe if you haven\u0026rsquo;t already. The above script differs slightly, in that it accepts a comma-separated list of solution files to extract, which are then iterated through to export and then expand out the raw solution files. Why do we expand out the raw files? So that Git can more easily detect changes, highlighting, where necessary, when and where an individual component (e.g. the metadata for the Account entity) was modified.\nThe above script will, in most cases, work fine and dandy. 
But occasionally, you might hit errors like this, especially when working with larger Dynamics 365 solutions:\nExport-CrmSolution : System.Management.Automation.RuntimeException: ************ TimeoutException - ExportSolution : User Defined |=\u0026gt; The request channel timed out while waiting for a reply after 00:02:00. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.\nBy default, we must contend with a two-minute timeout for all connections into CRM using this module. If exceeded at all, the script will immediately throw the above error. The answer to getting around this should be fairly obvious - increase the timeout when using the Export-CrmSolution or Connect-CrmOnline cmdlets. However, none of these cmdlets contains an appropriate parameter we can call to increase this. Upon further inspection of the list of available cmdlets within the module, the answer becomes apparent when we review details about the Set-CrmConnectionTimeout cmdlet:\nThe solution becomes pretty self-explanatory after reviewing this - add in the following cmdlet before iterating through the list of solutions:\n# Increase timeout for all connections to 20 minutes Set-CrmConnectionTimeout -conn $CRMConn -TimeoutInSeconds 1200 You can adjust the timeout to suit your particular needs, but 20 minutes should be more than sufficient for even the largest solution files.\nAs usual, the Dynamics 365 wider community delivers continual value, with some fantastic tools released for free. The Microsoft.Xrm.Data.PowerShell and Adoxio.Dynamics.DevOps modules surely sit amongst the likes of the XrmToolBox, CRM Rest Builder and more. They are an essential addition to your toolbox when attempting to apply DevOps within Dynamics 365/the Common Data Service.\n","date":"2020-04-05T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/resolving-powershell-timeoutexceptions-when-using-export-crmsolution-dynamics-365-common-data-service/","title":"Resolving PowerShell TimeoutExceptions when using Export-CrmSolution (Dynamics 365/Common Data Service)"},{"content":"Canvas Power Apps have come a long way since I last seriously looked at them. Although from an interface standpoint, there is not much that\u0026rsquo;s changed, there has been a plethora of new features added, including:\nThe ability to execute expressions/functions when your application loads the first time. The inclusion of enhanced options, designed to detect performance or accessibility issues within apps. Introduction of new controls, such as AI Builder components, the ability to add Power Apps Component Framework (PCF) controls into your app and, last but not least, form controls. The last of these new features will be of particular interest to those more traditionally used to working with Dynamics 365 or model-driven apps, where form customisation is a central task when building out solutions leveraging this application. From a canvas app standpoint, they provide a streamlined mechanism to allow users to quickly enter and submit data back into your desired data source. Where possible, the form automatically detects the characteristics of your data source and models your form accordingly. For example, if your Common Data Service (CDS) field is marked as Business Required, the form will obey this and only allow users to submit data upon receipt of a valid value.
What\u0026rsquo;s more, the controls generated within your form will also be rendered to best match your underlying data types, meaning that:\nDate fields will automatically provide a date/time picker control. Lookup, option or multi-option set fields from CDS will render a combo-box, containing a list of all values the user can select. Multi-line or text area single line of text fields from CDS will expand to larger fields, thereby indicating that the attribute supports longer text values. In short, if you need to very quickly provide data entry capabilities within your canvas app, then forms can be a significant boon and speed you along immensely.\nA common requirement when building out a data entry form within a model-driven app is to automatically populate values on a form, based on some kind of logic. You may hear this action commonly referred to as \u0026ldquo;providing default values\u0026rdquo;. The reasons for doing this should be reasonably clear. In essence, we should embrace wholly any activity we can complete to make peoples lives more comfortable when using a system, and this practice fits incredibly well with this objective. It might also be necessary to provide a default value because the CDS system entity always requires it, but not necessarily because your business does. For example, the Bookable Resource entity requires several fields to be populated when creating a record, such as the Display On Schedule Board and Time Zone fields. Rather than worry users, and the business, with having to populate these fields, we can instead choose to provide a default value each time. Within the model-driven app world, we would achieve this type of requirement by using a Business Rule or, on strictly allowed occasions only, JavaScript form functions. Such options do not natively exist within the canvas app world. Furthermore, it is not immediately apparent, based on the documentation available, how you can, for example, set the default value for an Option Set field.\nLet\u0026rsquo;s take a look at an example in practice. Within my CDS environment, I have made some adjustments to the Account entity to convert the Category (accountcategorycode) field to a mandatory one instead:\nNext, when I jump across into Power Apps, I add a form object into my app and link it up to my Account entity, making sure also to set the Default mode setting to New:\nNotice that Power Apps has not automatically added on the Category field when generating the form.\nNext, we add a button onto the app, that submits the form when pressed:\nWith everything ready, we can input a value into the Account Name field and submit the form. However, as you might expect, we get the following error:\nSo naturally, the simple fix is to add the field onto the form. This action would then allow the user to provide a value to it before submitting. But let\u0026rsquo;s assume, as outlined earlier, that we wish to provide a default value for this field each time the form is submitted - in this case, the value Standard. Also, we want to hide this field, to remove any temptation from users to overwrite the default value we specify. First of all, we add the field to our form:\nNext, we need to expand the control within our Tree view and select the appropriate DataCard component:\nThis control has a specific property, Default, which we can use to supply our preferred value. However, the component is locked automatically, meaning we can\u0026rsquo;t make this change. 
Navigate to the Advanced tab of the control and press the padlock button to unlock all properties:\nNow we can update the Default property. To set the default value correctly, we first need to understand how Option Sets behave within a canvas app, as they work a little differently from standard fields. Rather than, as you may assume, specifying the underlying option set value (i.e. the unique integer field), you instead specify the name of the Option Set field, followed by a period and then the display value you wish to use. So, for example, to access and store the details of the Standard option, we would need to use the following formula:\n\u0026lsquo;Category (Accounts)\u0026rsquo;.Standard\nIf we were to save this within a variable, we can see it has a data type of OptionSetValue, with its appropriate display value rendered for viewing:\nThe field anticipates the user to provide data of this type when completing the form, therefore meaning that the formula to use for the Default property is as indicated above:\nNow, all that remains is to hide the component (by setting its Visible property to false) and et voilà! Your form will now successfully submit each time the button is pressed, with the specified default value carrying through into CDS.\nOut of all the things involving Power Apps and CDS, this is by far the trickiest issue I\u0026rsquo;ve had to grapple with. I hope, therefore, that this post gets you out of a similar jam. While it\u0026rsquo;s undoubtedly preferential that we don\u0026rsquo;t need to use integer values when working with our CDS option set fields, it\u0026rsquo;s not made immediately clear from the outset how Option Sets work when you first get started with canvas apps. However, it\u0026rsquo;s pleasing to ultimately realise how easy it is to achieve a requirement that, by comparison with model-driven apps, does take additional time and effort to implement via a Business Rule. In this, we can see how useful canvas Power Apps can be to power-users, by making it dead easy to do amazing things 🙂\n","date":"2020-03-29T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/setting-default-option-set-values-within-canvas-power-app-forms/","title":"Setting Default Option Set Values within Canvas Power App Forms"},{"content":"Without question, Dynamics 365 / Power Platform developers should always be leveraging Business Rules as opposed to implementing client-side scripting, using JavaScript or TypeScript. I\u0026rsquo;ve gone into detail previously on why this is such a crucial approach to follow, and many of these justifications do (in my humble opinion) stand the test of time well. To summarise, any solution built on top of Dynamics 365 or the Power Platform becomes far more maintainable when you are utilising functional solutions as opposed to ones written in code. This argument becomes even more potent when it is blindingly obvious that a non-technical approach will meet your particular requirements. The sign of an excellent technical developer is in recognising and fully utilising functional tools to the best of your ability, to the extent that you are actively minimising the amount of code you write. Microsoft seems to agree with this approach, which I think is part of the reason why they have weighted the specification for exam MB-400 (AKA the developer\u0026rsquo;s exam) to include areas typically reserved for a functional specialist of the application. 
Therefore, you should always, as a budding or experienced Dynamics 365 / Power Platform developer, keep yourself fully abreast and utilise all functional aspects of the platform where-ever possible.\nWith all this said and done, there are still occasions where we must, unfortunately, fall back to using JavaScript form functions. This state of affairs can become borderline maddening when you make a naive assumption that a Business Rule can do something but, upon closer inspection, this is not the case. Consider the following scenario: we have a multi-select option set field defined for our Account entity, whose properties resemble the below:\nSo far, so good. Let\u0026rsquo;s now assume we want to auto-hide this field on all of our Account forms, based on a pre-defined condition. We go to create our Business Rule, define our condition and then add on a Set Visibility action. Now we need to look at selecting the field above within the supplied drop-down box; however, in doing so, we notice a glaring omission:\nThe multi-select Option Set field - and indeed, any attribute of this type - is not supported for use alongside Business Rules. A quick Publish all customizations confirms that this is not a quirk of the application, but rather a specific limitation with Business Rules. Furthermore, fields of this type cannot be evaluated as part of a Business Rule condition or used within any other available action. How strange!\nFortunately, where there\u0026rsquo;s a will and a slight knowledge of coding, there\u0026rsquo;s a way. Because we have successfully exhausted the capabilities within Business Rule, we can now safely put together the following JavaScript form function:\nfunction toggleMultiSelectVisibility (executionContext) { //Currently, it is not possible to use Multi-Select Option fields alongside Business Rules //This function, therefore, is used to show/hide the \u0026#39;My Multi-Select Option Set\u0026#39; field var formContext = executionContext.getFormContext(); // get formContext /* The function hides the field in all occassions. It may be necessary to add in an if...else statement to get the functionality you desire e.g. if (formContext.getAttribute(\u0026#39;myfieldname\u0026#39;).getValue() === \u0026#34;value\u0026#34; {\tformContext.getControl(\u0026#34;jjg_testmultiselectoptionset\u0026#34;).setVisible(false); }\telse { formContext.getControl(\u0026#34;jjg_testmultiselectoptionset\u0026#34;).setVisible(true);\t} When accessing/setting multi-select option set fields, they are stored as an array of the underlying integer values selected or to be set e.g. [1, 2, 3, 4] */ formContext.getControl(\u0026#34;jjg_testmultiselectoptionset\u0026#34;).setVisible(false); } No doubt, you will need to tweak this form function to suit your particular requirements. Once ready, add it onto your form as part of the OnLoad or OnChange event of a specific field, and you\u0026rsquo;ll be able to hide the field successfully - hooray!\nIt is rather frustrating that, for a requirement that you think a Business Rule would address, we are let down and have to resort to writing code. Multi-select option set fields are relatively new in the history of Dynamics CRM / Dynamics 365, which might explain why Microsoft have not yet added them into Business Rules as a supported component. At the moment, there is an open idea for this feature, which I would urge you to vote for as well if you think having this functionality would be useful. 
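If the visibility needs to depend on the value of another column, rather than hiding the field outright, the commented-out branch in the function above could be fleshed out along the following lines. The jjg_accountcategory trigger field and the 100000001 option value are purely hypothetical - substitute your own schema names and values:

function toggleMultiSelectVisibility(executionContext) {
    var formContext = executionContext.getFormContext(); // get formContext
    // Hypothetical single-value option set that drives the visibility rule
    var category = formContext.getAttribute("jjg_accountcategory").getValue();
    // Reading a multi-select option set returns an array of integers, e.g. [1, 2, 3], or null
    var selectedOptions = formContext.getAttribute("jjg_testmultiselectoptionset").getValue();
    console.log("Currently selected options: " + JSON.stringify(selectedOptions));
    // Show the multi-select field only when a specific category value is chosen
    formContext.getControl("jjg_testmultiselectoptionset").setVisible(category === 100000001);
}

Registering this against both the OnLoad event of the form and the OnChange event of the trigger field ensures the visibility is re-evaluated whenever the value changes.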
For now, though, we can be reasonably content that the amount of code involved is minimal and that, most importantly, we have fully exhausted the native capabilities of Dynamics 365 / the Power Platform before even contemplating writing a single line of code 🙂\n","date":"2020-03-22T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/hiding-multi-select-option-set-fields-on-a-model-driven-app-form-power-apps-dynamics-365/","title":"Hiding Multi-Select Option Set Fields on a Model Driven App Form (Power Apps / Dynamics 365)"},{"content":"Canvas app development in Power Apps is pretty fun. Having the ability to not only fine-tune your interface but also connect to a wide variety of data sources makes it an essential tool to turn to these days. Indeed, if you still find yourself working with Access databases, I would push you to at least look at what canvas Power Apps can do for you and your organisation. I\u0026rsquo;d also encourage longstanding Dynamics CRM / Dynamics 365 developers to familiarise themselves with canvas apps although, as the example in this post will demonstrate, attempting to figure out some of their\u0026hellip;unique oddities can sometimes be an exasperating process.\nThere may be occasions within a canvas Power App form control where any newly created record should be displayed back to the user, using the same form control. Unfortunately, by default, it\u0026rsquo;s not possible to quickly implement this behaviour. When, for example, the user presses a button that calls the SubmitForm function, any data entered will be successfully saved into your target system, and the user will see a blank screen instead of the new record. This behaviour occurs because, as outlined in the documentation, the control\u0026rsquo;s FormMode property updates automatically to Edit as opposed to New when you call the SubmitForm function. While this all sounds well and good, unless you have specifically told the form control which record you would like to render, we default instead to the behaviour outlined earlier. Not ideal! ☹\nTo fix this issue, there are a few things we can do, all of which involve the creation of some additional expressions. It also requires us to become familiar with a specific function type called UpdateContext, which is your gateway into using context variables. These are a particular kind of variable that is scoped to the current screen only and is automatically disposed of when the user navigates away. In the circumstances relating to the current requirement, therefore, context variables make the most logical sense to use. And, when used in conjunction with the form control\u0026rsquo;s LastSubmit property, they allow us to capture and re-use details regarding the last successfully submitted record elsewhere on our Power Apps screen.
We will use both of these features as part of the first step of the solution.\nNavigate to the OnSuccess property of your form control, and add in the following expression - replace the fm_MyForm portion with the actual name of your form:\nUpdateContext({varLastSubmit: fm_MyForm.LastSubmit});\nNow, we can store details of each successfully created record including, most importantly, its identifier.\nNext, on the same form control, navigate to the Item property and enter the following value:\nvarLastSubmit\nAnd that\u0026rsquo;s it - whenever you now call the SubmitForm button, the newly created record will render correctly, due to the following conditions now being met:\nThe form mode equals Edit An applicable Item record value exists for the form control. Just always make sure that the form mode is changed to New by default whenever a user navigates to the screen containing the form for the first time. Using the following expression(s) will sort this all out for you, navigating the user to the FormScreen screen and updating the forms mode accordingly:\nNavigate(FormScreen);NewForm(fm_MyForm);\nThis particular behaviour is something that, for first time Power Apps developers, can take a little bit of time to grasp fully. Once you fully understand (i.e. RTFM) the behaviour of form controls, you can start to understand why these extra steps are required. Hopefully, if you\u0026rsquo;ve been scratching your head over this particular quirk, then this post will help you along 🙂\n","date":"2020-03-15T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/displaying-newly-created-record-after-saving-in-a-canvas-power-app-form-control/","title":"Displaying Newly Created Record After Saving in a Canvas Power App Form Control"},{"content":"Recently, as our Gold Partner consultancy practice has gone live with enforcing security default policies on Azure Active Directory (AAD), we\u0026rsquo;ve had to grapple with some interesting \u0026ldquo;gotchas\u0026rdquo; that arose. Therefore, I thought it might be useful to do a post where I discuss and highlight some of the issues you might face when you enable security defaults for the first time on your tenant.\nSo, first of all, what are they? With the rise of phishing attacks, the proliferation of breached lists containing, in some cases, millions of user name/passwords and general concerns that a cloud-first IT strategy naturally brings to the table, security becomes a real challenge. Ideally, organisations wish to ensure that they\u0026rsquo;re enforcing a \u0026ldquo;basic\u0026rdquo; set of options for each cloud identity, to reduce the risk that any attack may pose. Typically, the steps involved here would be distinct and laborious - enable and deploy multi-factor authentication (MFA) here, run some scripts to allow a secure mode of authentication over there etc. In short, trying to do the right thing becomes more of a chore and, as expected, is something that never gets addressed adequately.\nThat\u0026rsquo;s why Microsoft has introduced security defaults, a simple option that, once enabled on your AAD tenant, does the following:\nEnforces MFA for every user account on the tenant. Blocks users from using legacy authentication options when accessing Exchange Online, via older versions of Microsoft Outlook or using protocols such as POP3/IMAP. 
Requires that all actions targeting administrative API\u0026rsquo;s in Office 365 and Azure demand an additional MFA prompt If you are an end-user of Microsoft products, there\u0026rsquo;s a good chance you wouldn\u0026rsquo;t have heard of them before, mainly if you are a new Microsoft customer. If you created your Office 365 / AAD tenant after October 2019, then these policies are already enabled by default. However, if you are a Microsoft Partner transacting within the Cloud Solutions Provider (CSP) programme, then you\u0026rsquo;ve probably already been told to apply these already. As is rightly the case, CSP partners will have full administrative access to their customer tenants so that they can manage services on Microsoft\u0026rsquo;s behalf. It\u0026rsquo;s, therefore, only right and fair that partners take necessary steps to protect customer data and prevent any reputational damage to Microsoft.\nDeployment Teething Issues Typically with any IT change, regardless of how innocuous it may seem, it\u0026rsquo;s always essential to test this in a separate environment before potentially disrupting any colleagues or partners across the business. In perhaps one of the more cavalier moments in my IT career, I enabled the policy on our tenant and assumed there would be no further issues\u0026hellip;how wrong was I! In our case, we had to contend with two problems:\nMFA was not already enabled across the tenant, meaning that users immediately received prompts to set it up when logging into Office 365. All Outlook 2016 Desktop clients broke, prompting users to enter a password via a Basic authentication prompt; Outlook would reject this each time. Users were, therefore, unable to send/receive email messages. As such, we had to look at quickly resolving both issues, to ensure colleagues could continue with their work unabated.\nThe Easy Bit: Enabling MFA Anyone who has spent time managing Office 365 should be reasonably comfortable with this process and, if not, there are well-documented steps available to guide you through the process - both from an end-user perspective and for an administrator enabling it for the first time. Where possible, I recommend using the Authenticator app, available on Android and iOS - the main benefit of it being that it supports other MFA providers as well, such as Twitter or GitHub. Ensure that every user on your tenant has completed this step first before enabling security defaults for the first time.\nThe Tricky Bit: Suppressing Basic Authentication Prompts This issue was the one that took considerably longer to figure out. We initially tried a few things to resolve this:\nRestarting the computer (this is an IT problem, after all, so you never know 🙂 ) Removing the existing account details from Windows Authentication. Enabling a specific registry key to force Modern Authentication within Outlook 2016. Finally, the only thing that worked for us was to enable Modern Authentication on our Exchange Online tenant via the following PowerShell script, as outlined in the following Docs article:\nSet-OrganizationConfig -OAuth2ClientProfileEnabled $true (Note, as part of this, it was necessary to download the Exchange Online Remote PowerShell Module, given that MFA was now in place. 
If you are also using Edge Chromium, you might need to enable ClickOnce Support by navigating to this link in your browser and setting the correct option: edge://flags/#edge-click-once)\nThen, we had to complete the following steps on each machine, as otherwise the Basic Authentication prompts kept appearing:\nClose down Outlook 2016 completely. Navigate to the Control Panel -\u0026gt; Mail (Microsoft Outlook 2016). Select Data Files. Depending on the number of mailboxes involved, you may see one or several in the list that appears. Select each one, press the Open File Location\u0026hellip; button, and then take a copy of each .ost file. Rename the copied file to append a _BACKUP to the end of its file name. Click Close on the Account Settings window. Select Show Profiles. There may be multiple profiles defined in the list. Select each one and press the Remove button. When a warning box appears, press Yes to dismiss it. Press OK to close the window. Open Outlook 2016 again. When prompted, enter the username for each email account that needs adding back on. You should now receive a \u0026ldquo;modern\u0026rdquo; authentication prompt (i.e. the Office 365 login window) as opposed to the Basic prompt. Once authentication completes successfully, your email account should download again without further issue. Keep or remove the backup files taken in step 4, depending on your preference. When You Lose, Don\u0026rsquo;t Lose the Lesson The above quote from the Dalai Lama seems particularly apt for this situation. The critical lesson here and - one which I should not have forgotten - is to always test your changes before applying them to any core business system. This simple step would have avoided much head-scratching and disruption to people in the process. Hopefully, if you\u0026rsquo;re in the same boat and considering enabling this policy on your tenant, this post gives you a flavour of what to prepare for. That way, you can ensure that your change request goes a hell of a lot better than mine\u0026hellip; 😳🤣\n","date":"2020-03-08T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/enabling-security-defaults-on-azure-active-directory-things-to-consider/","title":"Enabling Security Defaults on Azure Active Directory: Things to Consider"},{"content":"When you\u0026rsquo;re looking at any sort of IT integration project, that involves getting two separate systems \u0026ldquo;talking\u0026rdquo; to each other, a common challenge to address is how to achieve this as securely as possible. The traditional approach often involves the implementation of a dedicated service account, with a distinct user name and password. You would then grant this the correct level of privileges to perform its core functions - such as full Create, Read, Update and Delete (CRUD) permissions within a specific area of the system. In the world of Dynamics 365 / the Common Data Service (CDS), we have a unique user account type that has, traditionally, been well suited to these purposes - non-interactive user accounts.\nThe Problem with Non-Interactive User Accounts Accounts of this type, in some respects, meet the needs outlined at the start of this post well. However, they do have a few issues:\nYou are limited to a maximum of seven non-interactive user accounts for each online instance of Dynamics 365 / CDS. It\u0026rsquo;s not inconceivable that, for specific organisations, this limit could be breached, due to the number of potential integrations involved. 
The setup process for these accounts is ridiculously convoluted, involving many stages within both Office 365 and the system itself. In some cases, it may even be impossible for a Dynamics 365 / CDS administrator to complete all of them in a single sitting. The setup process is also contingent on you having the necessary licenses available, to ensure the new account lands in your system in the first place. These days, I would argue that any integration that involves the retaining of a user account and password is one that could be open to abuse; even if you have appropriately scoped the privileges of the new non-interactive user account. Fortunately, thanks to some of the more recent developments across the platform, there is a new, potentially better, way of doing things. This solution will enable you to not only authenticate into Dynamics 365 / CDS, but also ensure that the setup steps involved remain condensed and, most importantly, familiar enough for any administrator of the system to work with quickly.\nIntroducing Application User Accounts Application user accounts work on a similar basis to non-interactive user accounts but rely instead on Server to Server (S2S) authentication for applications that leverage Microsoft Azure Active Directory (AAD) behind the scenes. Administrators create a dedicated account that your application impersonates as part of any requests made into Dynamics 365 / CDS. From a setup perspective, you need only then specify a unique Application ID and secret value; your application will then execute all operations targeting Dynamics 365 / CDS in the context of this dedicated account. Application user accounts address some of the concerns highlighted earlier with non-interactive user accounts:\nThere is no limit to the number of application users you can have within a Dynamics 365 / CDS instance. Management and creation of these account types do not involve the whole license assignment/removal process in Office 365 à la non-interactive user accounts. However, their setup does require some configuration in Microsoft Azure (more on this shortly). While application user accounts involve the storing of a sensitive key/value pair, in much the same manner as a user name/password, they\u0026rsquo;re ultimately a preferable option. This is because you can configure them to expire after a set period. Also, you can view detailed information for all authentication requests made within Microsoft Azure (provided you have a paid license). Therefore, I would argue, this makes them the lesser of two evils compared with non-interactive user accounts. To get started with using application user accounts, it will be necessary first to jump into Microsoft Azure to carry out some pre-requisite setup.\nSetting up the App Registration After navigating to the Azure portal, go to Azure Active Directory -\u0026gt; App Registrations and click New Registration. Provide a descriptive name for the new registration (ideally indicating its business function area) and ensure that the Accounts in this organizational directory only (Single tenant) option is selected. The redirect URI will depend on your specific scenario but, in most cases, populating a value of http://localhost should be sufficient. You can refer to the following Microsoft Docs article for further assistance/guidance on how to set this all up.\nOnce provisioned, you then need to grant the App Registration the appropriate privileges to connect to Dynamics 365 / CDS.
Steps 6 onwards of the following walkthrough will show you how to do this. If configured correctly and, most importantly, granted admin consent, the permissions should resemble the below screenshot:\nNext, you will need to generate a client secret value for the App Registration. Again, the Microsoft Docs site provides excellent instructions on how to do this. I\u0026rsquo;d recommend setting an expiry for any secret value. Finally, before closing down the Azure portal, make a note of the Application (client) ID, which should be a globally unique identifier (GUID) value.\nAccount Configuration in Dynamics 365 Now we need to navigate to the Users area of Dynamics 365 and, with the Application Users view selected, press the New button. Instead of navigating you across to the Office 365 portal, you should instead see a form that resembles the below:\nPopulate the form with these details and press Save \u0026amp; Close to create the new application user:\nUser Name: This should be a unique account name, that ties across to a domain on your Office 365 tenant. For example, you could use myapplicationuser@mytenant.onmicrosoft.com or myapplicationuser@contoso.com. Application ID: Enter the Application (client) ID value recorded from Azure earlier. Full Name: Provide a descriptive First / Last name value for the account, to make it clear what its purpose is. Primary Value: The value in this field should match the User Name field. Using your new Application User Account The next steps from here will depend on your particular application or integration requirement but, as a minimum, you will need both the Application (client) ID and secret values generated from the Azure portal earlier. For example, within Azure Data Factory, you would authenticate using the AAD Service Principal type, and supply these values as indicated below:\nClosing Thoughts Application user accounts are, I would argue, the preferred way of authenticating into Dynamics 365 / the CDS when you have already tightly integrated your application alongside Microsoft Azure. The setup steps involved are trivial in comparison to non-interactive user accounts and, ultimately, provide a more exceptional range of options to manage and curtail access, should the need arise. Hopefully, this post has provided enough detail for you to go away and start working with them yourself. 🙂\n","date":"2020-03-01T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/an-overview-of-dynamics-365-common-data-service-application-user-accounts/","title":"An Overview of Dynamics 365 / Common Data Service Application User Accounts"},{"content":"I\u0026rsquo;ve blogged somewhat frequently in the past about using Azure Data Factory (ADF) V2 alongside Dynamics 365 / the Common Data Service (CDS). The main focus with these posts has been in pointing out the \u0026ldquo;gotchas\u0026rdquo; to consider when using it. For example, we\u0026rsquo;ve reviewed the list of data type limitations when importing data into Dynamics 365 / CDS. ADF has also been the focus of several talks I delivered last year and, at the risk of making a plug too many, one that I will also present next week at the Scottish Summit (can\u0026rsquo;t wait!). 
Anyway, to get back on topic\u0026hellip;it is enough to say that ADF, while an incredibly powerful and \u0026ldquo;better\u0026rdquo; tool compared to the likes of SSIS, is one that you need to understand the limitations for before you actively embrace it as a solution.\nAn example of a current limitation is no (native) way to execute or work with the Dynamics 365 / CDS Web API in any meaningful way. For example, if you wanted to run a Bulk Delete job after importing records into the application\u0026rsquo;s database, you have no way of quickly triggering this within your pipeline activity or data flow. That\u0026rsquo;s not to say that it is entirely impossible to achieve and, with a bit of effort, setup and careful consideration of any related security concerns, it is possible to work with the Web API within any stage of an ADF pipeline. This circumstance, therefore, allows you to extend ADF further to leverage additional functionality within Dynamics 365 / CDS Web API and, more crucially, carry out actions such as Delete requests. In this post today, I\u0026rsquo;ll show you how you can go about setting up your pipelines to achieve this requirement, reviewing all of the required setup needed and also assessing the potential security implications of using the solution in the first place.\nPre-Requisites The solution outlined in this post relies on you first creating an Azure Active Directory app registration, with the relevant permissions to impersonate user accounts within Dynamics 365 / CDS. Also, it will be necessary to grant admin consent for these privileges. The screenshot below illustrates how these permissions should look if configured correctly:\nAt this point, make a note of the Application (client) ID and Directory (tenant) ID values on the Overview tab. You will also need to generate a secret value, from the Certificates \u0026amp; secrets tab.\nNext, you need to create a specific type of user account within Dynamics 365 / CDS - an application user. Note that, for those familiar with working with non-interactive user accounts, these are not the same thing and, thankfully, the configuration involves considerably less effort to implement. The setup steps in this article outline what\u0026rsquo;s involved, in far better detail than I could ever hope to replicate. Once set up, your new application user account should resemble the below:\nMake sure this account has been granted a security role with sufficient privilege to authenticate to the application and carry out any required tasks.\nNow we can jump into ADF and create a pipeline. As the final piece in the setup/configuration, define the following string variables at the pipeline level:\nTenantID: This needs to contain the Directory (tenant) ID noted down earlier. AuthRequestBody: To generate the access token for authentication, it will be necessary to provide some information to the OAuth2 endpoint so that it can reply with the correct information. The value in this field will be a concatenation of the following values: client_id: grant_type: The value of this should always be client_credentials redirect_uri: This should match the Redirect URL defined within the AAD App Registration. In this case, we will use the value https://adf.azure.com resource: This is the URL for your Dynamics 365 / CDS instance, e.g. https://mycrm.crm11.dynamics.com client_secret: The secret value generated earlier. It will also be necessary to encode the string, given that it contains some illegal characters within the URL portions. 
Therefore, an example of how this string should look is as follows: client_id=8366439c-a94d-4a67-962c-0a9758405675\u0026amp;grant_type=client_credentials\u0026amp;redirect_uri=https%3A%2F%2Fadf.azure.com%2F\u0026amp;resource=https%3A%2F%2Fmycrm.crm11.dynamics.com%2F\u0026amp;client_secret=mysecretvalue D365URL: This should contain the URL for your Dynamics 365 / CDS instance, e.g. https://mycrm.crm11.dynamics.com/ The raw JSON definition for your pipeline should resemble the below at this stage:\n{ \u0026#34;name\u0026#34;: \u0026#34;D365WebAPITest\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;activities\u0026#34;: [], \u0026#34;variables\u0026#34;: { \u0026#34;TenantID\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;defaultValue\u0026#34;: \u0026#34;a113d749-5571-47f4-9441-ad04a8ccbd08\u0026#34; }, \u0026#34;AuthRequestBody\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;defaultValue\u0026#34;: \u0026#34;client_id=8366439c-a94d-4a67-962c-0a9758405675\u0026amp;grant_type=client_credentials\u0026amp;redirect_uri=https%3A%2F%2Fadf.azure.com%2F\u0026amp;resource=https%3A%2F%2Fmycrm.crm11.dynamics.com%2F\u0026amp;client_secret=mysecretvalue\u0026#34; }, \u0026#34;D365URL\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;defaultValue\u0026#34;: \u0026#34;https://mycrm.crm11.dynamics.com/\u0026#34; } }, \u0026#34;annotations\u0026#34;: [] }, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.DataFactory/factories/pipelines\u0026#34; } Generating the Access Token Connections made to the Dynamics 365 / CDS Web API must always contain a valid access token as part of the Authorization header. The application will refuse any request that does not contain this token, as you may expect. Therefore, we must make a separate request to the OAuth2 endpoint for the AAD tenant in question to obtain this value. Within your ADF pipeline, add a Web activity, rename it to GetD365AccessToken and populate the Settings tab with the following details:\nURL: This should be populated with a dynamic expression value, that builds the correct URL from the TenantID variable specified earlier. 
The code for this is as follows: @concat(\u0026lsquo;https://login.microsoftonline.com/', variables(\u0026lsquo;TenantID\u0026rsquo;), \u0026lsquo;/oauth2/token\u0026rsquo;) Method: POST Headers: A single header value should be supplied, with the following key/value pair: Content-Type: application/x-www-form-urlencoded Body: This should contain the value from the AuthRequestBody variable specified earlier, meaning the formula for this should resemble the following: @variables(\u0026lsquo;AuthRequestBody\u0026rsquo;) The screenshot / JSON definitions below should indicate whether these have been added correctly or not:\n{ \u0026#34;name\u0026#34;: \u0026#34;GetD365AccessToken\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;WebActivity\u0026#34;, \u0026#34;policy\u0026#34;: { \u0026#34;timeout\u0026#34;: \u0026#34;7.00:00:00\u0026#34;, \u0026#34;retry\u0026#34;: 0, \u0026#34;retryIntervalInSeconds\u0026#34;: 30, \u0026#34;secureOutput\u0026#34;: false, \u0026#34;secureInput\u0026#34;: false }, \u0026#34;typeProperties\u0026#34;: { \u0026#34;url\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;@concat(\u0026#39;https://login.microsoftonline.com/\u0026#39;, variables(\u0026#39;TenantID\u0026#39;), \u0026#39;/oauth2/token\u0026#39;)\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Expression\u0026#34; }, \u0026#34;method\u0026#34;: \u0026#34;POST\u0026#34;, \u0026#34;headers\u0026#34;: { \u0026#34;Content-Type\u0026#34;: \u0026#34;application/x-www-form-urlencoded\u0026#34; }, \u0026#34;body\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;@variables(\u0026#39;AuthRequestBody\u0026#39;)\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Expression\u0026#34; }, \u0026#34;linkedServices\u0026#34;: [], \u0026#34;datasets\u0026#34;: [] } } Making the Request With the ability to generate and now pass through an access token, we have everything we need to build out a second, Web activity to perform an example request. In this situation, we will perform a simple WhoAmIRequest to return the details of the user accessing the Web API; which, in this case, would be the ADF Service Account created earlier. Create a new activity of this type, drag the green arrow from the previously created activity to the new one and then configure the new Web activity with the following properties:\nURL: This needs to contain a concatenated version of the URL variable supplied earlier, alongside the appropriate URL path to perform the WhoAmIRequest. Again, you can use the following expression to achieve this: @concat(variables(\u0026lsquo;D365URL\u0026rsquo;), \u0026lsquo;api/data/v9.1/WhoAmI\u0026rsquo;) Method: GET Headers: Because ADF does not support access token authorisation as part of the available list of authentication options, we must instead pass the access token as a Header value. 
Configure a single header, with the following key/value pair, for this activity: Authorization: @concat(\u0026lsquo;Bearer \u0026lsquo;, activity(\u0026lsquo;GetD365AccessToken\u0026rsquo;).output.access_token) The settings screen and JSON definition for your pipeline should reflect the below if done correctly:\n{ \u0026#34;name\u0026#34;: \u0026#34;WhoAmIRequest\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;WebActivity\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ { \u0026#34;activity\u0026#34;: \u0026#34;GetD365AccessToken\u0026#34;, \u0026#34;dependencyConditions\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] } ], \u0026#34;policy\u0026#34;: { \u0026#34;timeout\u0026#34;: \u0026#34;7.00:00:00\u0026#34;, \u0026#34;retry\u0026#34;: 0, \u0026#34;retryIntervalInSeconds\u0026#34;: 30, \u0026#34;secureOutput\u0026#34;: false, \u0026#34;secureInput\u0026#34;: false }, \u0026#34;userProperties\u0026#34;: [], \u0026#34;typeProperties\u0026#34;: { \u0026#34;url\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;@concat(variables(\u0026#39;D365URL\u0026#39;), \u0026#39;api/data/v9.1/WhoAmI\u0026#39;)\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Expression\u0026#34; }, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34;, \u0026#34;headers\u0026#34;: { \u0026#34;Authorization\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;@concat(\u0026#39;Bearer \u0026#39;, activity(\u0026#39;GetD365AccessToken\u0026#39;).output.access_token)\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Expression\u0026#34; } }, \u0026#34;linkedServices\u0026#34;: [], \u0026#34;datasets\u0026#34;: [] } } Seeing it all in action With everything configured, you can now run the Debug command and verify that each activity completes and, most importantly, we get the details back from the WhoAmIRequest:\n{ \u0026#34;@odata.context\u0026#34;: \u0026#34;https://mycrm.crm11.dynamics.com/api/data/v9.1/$metadata#Microsoft.Dynamics.CRM.WhoAmIResponse\u0026#34;, \u0026#34;BusinessUnitId\u0026#34;: \u0026#34;9941c9c6-f71e-ea11-a812-00224801bc51\u0026#34;, \u0026#34;UserId\u0026#34;: \u0026#34;fa36d220-e748-ea11-a812-000d3a0bad17\u0026#34;, \u0026#34;OrganizationId\u0026#34;: \u0026#34;4827d81f-172e-4d9d-b9b2-6db4e7c12490\u0026#34;, \u0026#34;ADFWebActivityResponseHeaders\u0026#34;: { ... 
}, \u0026#34;effectiveIntegrationRuntime\u0026#34;: \u0026#34;DefaultIntegrationRuntime (UK South)\u0026#34;, \u0026#34;executionDuration\u0026#34;: 0, \u0026#34;durationInQueue\u0026#34;: { \u0026#34;integrationRuntimeQueue\u0026#34;: 1 }, \u0026#34;billingReference\u0026#34;: { \u0026#34;activityType\u0026#34;: \u0026#34;ExternalActivity\u0026#34;, \u0026#34;billableDuration\u0026#34;: { \u0026#34;Managed\u0026#34;: 0.016666666666666668 } } } The full JSON definition for the entire, working version of the pipeline is below:\n{ \u0026#34;name\u0026#34;: \u0026#34;D365WebAPITest\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;activities\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;GetD365AccessToken\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;WebActivity\u0026#34;, \u0026#34;dependsOn\u0026#34;: [], \u0026#34;policy\u0026#34;: { \u0026#34;timeout\u0026#34;: \u0026#34;7.00:00:00\u0026#34;, \u0026#34;retry\u0026#34;: 0, \u0026#34;retryIntervalInSeconds\u0026#34;: 30, \u0026#34;secureOutput\u0026#34;: false, \u0026#34;secureInput\u0026#34;: false }, \u0026#34;userProperties\u0026#34;: [], \u0026#34;typeProperties\u0026#34;: { \u0026#34;url\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;@concat(\u0026#39;https://login.microsoftonline.com/\u0026#39;, variables(\u0026#39;TenantID\u0026#39;), \u0026#39;/oauth2/token\u0026#39;)\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Expression\u0026#34; }, \u0026#34;method\u0026#34;: \u0026#34;POST\u0026#34;, \u0026#34;headers\u0026#34;: { \u0026#34;Content-Type\u0026#34;: \u0026#34;application/x-www-form-urlencoded\u0026#34; }, \u0026#34;body\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;@variables(\u0026#39;AuthRequestBody\u0026#39;)\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Expression\u0026#34; } } }, { \u0026#34;name\u0026#34;: \u0026#34;WhoAmIRequest\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;WebActivity\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ { \u0026#34;activity\u0026#34;: \u0026#34;GetD365AccessToken\u0026#34;, \u0026#34;dependencyConditions\u0026#34;: [ \u0026#34;Succeeded\u0026#34; ] } ], \u0026#34;policy\u0026#34;: { \u0026#34;timeout\u0026#34;: \u0026#34;7.00:00:00\u0026#34;, \u0026#34;retry\u0026#34;: 0, \u0026#34;retryIntervalInSeconds\u0026#34;: 30, \u0026#34;secureOutput\u0026#34;: false, \u0026#34;secureInput\u0026#34;: false }, \u0026#34;userProperties\u0026#34;: [], \u0026#34;typeProperties\u0026#34;: { \u0026#34;url\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;@concat(variables(\u0026#39;D365URL\u0026#39;), \u0026#39;api/data/v9.1/WhoAmI\u0026#39;)\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Expression\u0026#34; }, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34;, \u0026#34;headers\u0026#34;: { \u0026#34;Authorization\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;@concat(\u0026#39;Bearer \u0026#39;, activity(\u0026#39;GetD365AccessToken\u0026#39;).output.access_token)\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Expression\u0026#34; } }, \u0026#34;body\u0026#34;: \u0026#34;\u0026#34; } } ], \u0026#34;variables\u0026#34;: { \u0026#34;TenantID\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;defaultValue\u0026#34;: \u0026#34;a113d749-5571-47f4-9441-ad04a8ccbd08\u0026#34; }, \u0026#34;AuthRequestBody\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;defaultValue\u0026#34;: 
\u0026#34;client_id=8366439c-a94d-4a67-962c-0a9758405675\u0026amp;grant_type=client_credentials\u0026amp;redirect_uri=https%3A%2F%2Fadf.azure.com%2F\u0026amp;resource=https%3A%2F%2Fmycrm.crm11.dynamics.com%2F\u0026amp;client_secret=mysecretvalue\u0026#34; }, \u0026#34;D365URL\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;defaultValue\u0026#34;: \u0026#34;https://mycrm.crm11.dynamics.com/\u0026#34; } }, \u0026#34;annotations\u0026#34;: [] }, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.DataFactory/factories/pipelines\u0026#34; } Security It is worth highlighting that aspects of the outlined solution involve the storing of secret values in plain text, which could present a security concern, depending on your requirements and the privileges granted within Dynamics 365 / CDS. In its current form, the configuration outlined in this post is suitable only for non-production development or for testing purposes only; therefore, use it at your own risk. To harden it further, consider implementing an Azure Key Vault resource, that you can then use to retrieve and securely store values within a variable each time the pipeline is run. Regardless of how you store your secret value, you should appropriately scope any privileges granted to the application user in Dynamics 365 / CDS; do not, for example, grant this account System Administrator privileges.\nWith the full ability to interact with the Dynamics 365 / Common Data Service Web API, it becomes possible to extend your Azure Data Factory solution to perform a variety of operations not supported by default. However, be sure to take care and ensure you implement appropriate security hardening before deploying it out into any production environments.\n","date":"2020-02-23T00:00:00Z","image":"/images/ADF-FI.png","permalink":"/interacting-with-the-dynamics-365-common-data-service-web-api-from-azure-data-factory/","title":"Interacting with the Dynamics 365 / Common Data Service Web API from Azure Data Factory"},{"content":"Welcome to the seventh post in my series, focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. After a small hiatus to the series, it\u0026rsquo;s great to resume things again this week. Last time around, we saw how to leverage Business Rules within a model-driven app, as part of focusing on the following exam assessment area:\nConfigure Microsoft Flow\nconfigure a Flow configure actions to use CDS connectors develop complex expressions Implement processes\ncreate and configure business process flows create and configure business rules You can class all of the above tools as automation features within Dynamics 365 and the Power Platform, and today\u0026rsquo;s discussion around Business Process Flows aim to demonstrate how powerful this feature is in achieving this objective. We\u0026rsquo;ll do this by discussing the various capabilities available within Business Process Flows, and how you can go about using them with ease.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nBusiness Process Flows Overview In our discussion around model-driven Power Apps, we highlighted that apps of this type are, in effect, data-driven applications. 
As such, following an approved business process within them becomes necessary. Achieving this objective ensures you can guide end-users towards the correct outcome for, let\u0026rsquo;s say, a sales process and guarantee the population of accurate data to progress to the next stage. Consider another scenario involving a case management process; a service desk manager needs to ensure that a case proceeds according to any agreed SLA\u0026rsquo;s with an end customer and, where appropriate, they must monitor how long a case has resided within a specific stage. Without the ability to comfortably accommodate these scenarios, it becomes incredibly challenging to meet the expectations of the business and end customers.\nBusiness Process Flow\u0026rsquo;s (or BPF\u0026rsquo;s) aim to address these concerns, by allowing developers to model out and enforce a business process effectively, which you can then apply at the entity record level within a model-driven app. As part of this, we can tailor many aspects of either an existing or new BPF and, also, integrate the tool alongside other features within the Power Platform. By default, when installing one of the 1st party apps from Microsoft (Sales, Service etc.), several out of the box BPF\u0026rsquo;s are installed automatically into your system. For example, the screenshot below illustrates a BPF called the Lead to Opportunity Sales Process within the system, which is associated with the Lead entity:\nUsers have access to the following features when using a BPF, indicated by the numbers above:\nHere, the user can view details about the name of the BPF and also how long the current process has been active for; in this case, for around five months. A BPF is structured by stages, which the user can click on to view the further details required to proceed to the next step. In this case, having expanded the Qualify stage, the user is prompted to provide additional information, such as Estimated Budget and details of any existing Contact or Account record. Users can expand any active/inactive stage to review the details required. The stage which is coloured is the current, active stage in the process, which, in this case, would be the Qualify stage. We can also see the amount of time the BPF has resided in this particular stage. Users have several options underneath the Process dropdown field on the ribbon. You can choose to switch to another available process or even abandon the current BPF entirely. Abandoned BPF\u0026rsquo;s will be marked clearly within the application, as indicated in the screenshot below: The arrows at either end of the BPF allow you to toggle the current, focused stage. Note that they do not move the process to the next step, but will enable you to preview the details needed as part of the next/previous stage. Some other, useful features regarding BPF\u0026rsquo;s are worth highlighting at this stage:\nBPF\u0026rsquo;s can span multiple entities if required. In the example shown earlier, the BPF is designed to \u0026ldquo;crossover\u0026rdquo; from the Lead to the Opportunity entity, as you progress through each stage. Because BPF\u0026rsquo;s integrate alongside security roles, it is possible to dictate which process applies for a subset of users within the application. Most system entities are enabled for use alongside BPF\u0026rsquo;s, with some exceptions. Consult the documentation for further details. There are no restrictions on their usage for custom entities. 
BPF\u0026rsquo;s have some specific limitations, namely: An entity can only have ten active BPF\u0026rsquo;s at any one time. Consider deactivating any BPF\u0026rsquo;s that are no longer in use, should you hit this limit. A BPF is limited to a maximum of 30 stages. Although BPF\u0026rsquo;s have full multi-entity support, you are limited to using a maximum of 5. All in all, BPF\u0026rsquo;s are incredibly easy to use and, as we will see next, we can also create them with startling ease too.\nBusiness Process Flow Designer Similar to Business Rules and the model-driven app designer, we have an interactive editor available to create or modify a BPF, illustrated below:\nAgain, I have numbered each relevant section and provided a description below regarding its purpose:\nExpanding the arrow here will allow you to modify the name and description of the BPF, alongside details regarding some of its fundamental properties (owner, Primary Entity etc.) The icons on here let you quickly add on an additional component to your BPF or perform standard operations, such as cut, copy, paste and delete. On these, it is worth highlighting that the familiar keyboard shortcuts for each action will also let you perform it as well. Finally, you can use the Snapshot button to download a .png image file of the BPF, which you can then include as part of any documentation or training materials. From here (in order left to right), you can save your BPF, validate its structure, save a new copy of it, activate it, modify its display order for users, grant/deny access to it or access some of the available help articles for Dynamics 365. Expanding the ellipses will also allow you to share an email link for the BPF, show any associated dependencies or view specific properties relating to it. These options let you zoom in/out and also adjust the view to fit the canvas of the visual editor. As discussed earlier, a BPF can contain several stages, represented like this on the canvas view. By default, a new BPF will always include a single stage. Clicking on it will allow you to modify its name, category and, for the second stage onwards, its associated entity. A stage can contain two subcategories of additional components, the first of which is\u0026hellip; \u0026hellip;the data steps, or fields to be populated. You can add multiple of these to each stage, and you have the ability also to define their order. The fields that are available for selection must exist on the entity associated with the current stage (i.e. it is not possible to reference related entity fields). Finally, you also can define a name for the step, that can differ from the display name of the field. The next type of component that you can add to a stage is\u0026hellip; \u0026hellip;a workflow step. We\u0026rsquo;ll discuss this component type in further detail shortly. Here, you can see a visual representation of your current BPF. You can hide this window by using the button on the top right. Finally, the right-hand pane allows you to access the Components toolbox or the Properties for the currently selected component. In this example, because we have the 1st stage selected, the Properties pane surfaces details that we can edit relating to it. 
Developers who are familiar with creating Business Rules should have little difficulty starting to build out a BPF for the first time.\nComponent Overview We\u0026rsquo;ve alluded to this topic already, but it is worth discussing in detail the full list of different component types that can be utilised within a BPF, as outlined below:\nClicking or dragging each component will add it into the appropriate area on the canvas designer. Let\u0026rsquo;s dive in and discuss each one:\nFlow: Stage: Requires no further explanation, I think. 🙂 Condition: With this, you can specify branching rules that will modify how the Process proceeds for the user, based on the conditions you define. Evaluation of conditions is performed on a similar basis to Business Rules, in that you can select one or multiple fields on the current entity to evaluate, as well as implementing some basic AND/OR evaluation logic. Once a condition is defined, you must then populate details for each subsequent stage, for when the condition evaluates to both true and false. Through the correct use of conditions, you can potentially consolidate several BPF\u0026rsquo;s into one and, ultimately, achieve the same outcome. Composition: Data Step: A concept mentioned earlier, data steps are the fields that the user must populate for a given stage. Workflow: Via this feature, it is possible to automatically trigger a workflow to execute, either when the stage commences or finishes. This feature could be useful if, for example, you would like to send an email out to a sales manager after a user successfully qualifies a Lead. Workflows can only be used with a BPF if they have been activated and configured as on-demand, for the same entity that the current stage is targeting. Action Step: Operating on a similar basis to the Workflow component, this allows you to trigger an Action instead. Flow Step (Preview): Finally and - again - similar to the previous two component types, it is also possible to trigger a Power Automate flow within a BPF stage. As a feature that is (at the time of writing) still in preview, it is generally not recommended for use within a production environment. As such, BPF\u0026rsquo;s can be incredibly powerful when integrated alongside tools such as Workflows or Power Automate flows, allowing you to automate substantial aspects of a business process as a user is working through each appropriate stage. The emphasis here, from a developer\u0026rsquo;s standpoint, is to utilise these tools wherever possible and only resort to custom code if you are unable to achieve the business requirement natively within the platform.\nTask Flows Although not explicitly mentioned within the MB-400 exam specification (for reasons that will shortly become clear), it\u0026rsquo;s worth briefly touching upon the second type of BPF - a mobile task flow. These are created in the same manner as a BPF, except on creation, you specify the option indicated below:\nAs the name suggests, mobile task flows let you define a process that the user has to follow to completion within the Dynamics 365 mobile application. For example, you could build a Lead capture task flow that includes all fields required for population by salespeople at a trade event. Traditionally, they provided the only mechanism to include a guided, wizard-type experience within the mobile app.\nMobile task flows are missing from the MB-400 specification for a simple reason - they are now deprecated and, as such, will be removed in a future version of the application. 
To meet the same functionality within the mobile apps, a standard BPF will now provide a fully immersive experience, that can meet the same needs as mobile task flows and more besides.\nThe Hidden Entity The final thing to note with all BPF\u0026rsquo;s is that, upon creation, the system will automatically create a new entity. This entity will have the same name as the BPF in question and, for each active process that users create in the system, the system will create a corresponding record within this entity. The entity has several custom fields that capture a range of useful properties, several of which we can access easily within the application. For example:\nActive Stage (activestageid): The name/details of the current, active stage on the BPF instance. Active Stage Started On (activestagestartedon): The date on which a user selected the current, active stage for the BPF instance. Completed On (completedon): The date on which a user completed the BPF instance. Duration (duration): This field indicates the time between the start and completion date of the BPF instance. Also, the system will create lookup fields to each appropriate entity record associated with the BPF.\nDevelopers can freely utilise this entity as part of any bespoke solution they develop and also, if required, create additional fields, views etc. with no restrictions. I recommended that you always package up any BPF entity with the same solution where your BPF exists so that you can ensure the successful deployment of all applicable customisations.\nDemo: Designing and Interacting with a Business Process Flow To help familiarise yourself with some of the concepts discussed in this post, take a look at the below video, where I will show you how to create a BPF and interact with it as part of a model-driven app:\nWe\u0026rsquo;ve rounded up now our discussion of the functional Dynamics 365 and Power Platform components that you\u0026rsquo;ll need to have an awareness of for the MB-400 exam. In the next post in the series, we\u0026rsquo;re going to take a look at our first code-related topic in the series, as we evaluate client-side scripting options, involving JScript, TypeScript and the Web API.\n","date":"2020-02-16T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-mapping-a-business-process-with-business-process-flows/","title":"Exam MB-400 Revision Notes: Mapping a Business Process with Business Process Flows"},{"content":"Often, when developing a database as a data-tier application package (more commonly known as DACPAC), you will need to execute regular deployments onto a local SQL Server instance, to verify that you haven\u0026rsquo;t accidentally broken anything when changing your database. Where this can get potentially complicated is when you are also managing security objects (contained user accounts, database roles, etc.) within the DACPAC. Consider the following scenarios:\nYou are deploying your database to an Azure SQL database. You wish to use Azure Active Directory (AAD) authentication into your database, with user accounts mapped to user principal name (UPN\u0026rsquo;s) on the AAD tenant. Attempting to deploy a database with these defined to an on-premise SQL Server database will cause errors, as this feature is only available on Azure SQL databases. You are developing a database that will reside within a different Active Directory (AD) domain than the one your user account currently exists in. 
Any attempt to create users bound to the other AD tenant will cause the deployment to fail, for pretty much the same reasons as the scenario outlined earlier. In both situations, finding a mechanism to allow you to skip the creation of these objects becomes highly desirable. By doing so, it becomes possible to deploy locally much faster. Then, as part of more centralised build cycles or formal pull requests, deployments can then target infrastructure that has been configured with the correct AD / AAD associations and, therefore, will allow the creation of any object without issue. Of course, there is an argument to be made to completely exclude user account management from within the structure of your DACPAC\u0026hellip;but that\u0026rsquo;s a topic for a whole blog post itself ;)\nThe solution to this problem lies in defining an appropriate publish profile for your DACPAC, which then instructs your chosen deployment tool - SQLPackage.exe, Visual Studio, or Azure DevOps - on how to carry out the deployment. For example, you can force the deployment to error if any potential data loss is detected, remove any objects that exist in target, but not in source, or even specifically exclude objects entirely from being deployed out. The profile is defined as an XML file. It is incredibly useful to use if you plan to automate your SQL database deployments using Azure Pipelines, as you can then re-use this profile or even modify it at runtime. Out of all the available options listed in the earlier example, it is the last set that most interest us in this situation, as this is what allows us to make use of the following properties to disable the deployment of any security-related object to our database:\nExcludeUsers ExcludeLogins ExcludeDatabaseRoles By setting these options to True within our publish profile, creation or modification of these objects will be skipped entirely during any database deployment. An example publish.xml structure, with these settings enabled, can be viewed below:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34;?\u0026gt; \u0026lt;Project ToolsVersion=\u0026#34;Current\u0026#34; xmlns=\u0026#34;http://schemas.microsoft.com/developer/msbuild/2003\u0026#34;\u0026gt; \u0026lt;PropertyGroup\u0026gt; \u0026lt;IncludeCompositeObjects\u0026gt;True\u0026lt;/IncludeCompositeObjects\u0026gt; \u0026lt;TargetDatabaseName\u0026gt;MyDatabase\u0026lt;/TargetDatabaseName\u0026gt; \u0026lt;DeployScriptFileName\u0026gt;SQL.MyDatabase.sql\u0026lt;/DeployScriptFileName\u0026gt; \u0026lt;ExcludeUsers\u0026gt;True\u0026lt;/ExcludeUsers\u0026gt; \u0026lt;TargetConnectionString\u0026gt;Data Source=localhost;Integrated Security=True\u0026lt;/TargetConnectionString\u0026gt; \u0026lt;ProfileVersionNumber\u0026gt;1\u0026lt;/ProfileVersionNumber\u0026gt; \u0026lt;ExcludeLogins\u0026gt;True\u0026lt;/ExcludeLogins\u0026gt; \u0026lt;ExcludeDatabaseRoles\u0026gt;True\u0026lt;/ExcludeDatabaseRoles\u0026gt; \u0026lt;/PropertyGroup\u0026gt; \u0026lt;/Project\u0026gt; Simply reference this as part of your chosen DACPAC deployment tool, and everything should work as expected 🙂\nKeeping developers productive is an important concern if you want to ensure they deliver regular business value as part of their daily work. While you can make the argument that the inclusion of user account objects within a database causes more hassle than help, it may still be desirable to ensure these are captured in source and are - therefore - documented as part of the structure of your database. 
To ensure that you can meet this objective, the use of a publish profile fits the bill rather nicely. Publish profiles not only let developers tailor how to perform their local deployments, so that any blocking elements are excluded entirely, but also become useful in defining an \u0026ldquo;agreed\u0026rdquo; template for how to deploy out a database, depending on the environment being targeted. Enforcing this level of conformance may sound disheartening, but if it ultimately results in more successful builds and regular releases, then that can only be a good thing.\n","date":"2020-02-09T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/using-publish-profiles-to-deploy-a-dacpac-database-without-user-accounts/","title":"Using Publish Profiles to Deploy a DACPAC Database Without User Accounts"},{"content":"A common requirement as part of any cloud IT project may involve surfacing data from on-premise systems into, for example, a cloud Customer Relationship Management (CRM) application such as Dynamics 365. Meeting this requirement is often easier said than done. Often - and quite rightly so - on-premise database systems will be segregated away from any internet-facing endpoints, protected from access via complex firewall rules. While this can serve the organisation well in security audits or as part of a penetration test, it can often cripple any endeavour to start embracing cloud solutions. As such, efforts to adopt exciting new tools like the Power Platform become virtually impossible, with on-premise data often remaining in its silos and not actively utilised by the business.\nRecognising this need early on, Microsoft has made available several tools that can significantly simplify the process involved in putting on-premise data to work more effectively, in the new cloud-first world. At the time of writing this post, there are three ways in which you can achieve this requirement, all of which involve the installation of an application of some kind within your on-premise network and minimal configuration on your Office 365 / Azure Active Directory (AAD) tenant. This state of affairs is all well and good, but how do you go about figuring out the correct tool to use? What are their benefits and disadvantages? And what are some of the things to watch out for when working with them for the first time? Today, I wanted to address some of these questions, as we take a more in-depth look at the features and usage cases behind the following on-premise integration applications - the On-Premise Gateway, the Self-Hosted Integration Runtime and, finally, Hybrid Connections.\nOn-Premise Gateway Overview Perhaps best referred to as the Power Platform gateway, given that it almost exclusively targets the services within this \u0026ldquo;family\u0026rdquo;, this will be the gateway that Power BI developers will have the most familiarity with. Available as an online download, once installed, you can use it to target your on-premise data sources as if they were cloud-based, by merely specifying the connection properties you would use when working with them on-premise. Data is then transferred through the gateway via a secure tunnel, often requiring minimal changes to network or firewalls (you can run a network test using the tool at any time to determine whether a change is needed). You can deploy the on-premise gateway in one of two ways:\nThe recommended mode, which is selected by default during installation. 
This deployment mode makes the gateway instantly available to everyone on your Office 365 / AAD tenant and gives you access to the full range of features available. In personal mode, the gateway is scoped and usable only by the account you authenticate with during setup. No other user can interact with the data sources you create, and you are responsible for managing it from within the Power BI portal. This gateway mode is useful for when you need to set up a personal development environment, targeting Power BI development only. I would not recommend to use it for any other purpose, as it can become challenging to maintain and lead to performance issues when deployed out to the user\u0026rsquo;s machine. This mode also does not support Direct Query mode in Power BI. Other than that, there\u0026rsquo;s very little else to say about it, which is perhaps a testament to how easy it is to put in place and manage on an ongoing basis.\nAdvantages Compatible with multiple cloud applications, namely, Power BI, Power Apps, Power Automate, Logic Apps and Azure Analysis Services. Supports multiple data sources and, with some additional configuration, custom connectors as well. Lightweight installation client. Can be managed from Office 365. Disadvantages To use the gateway alongside Logic Apps or Azure Analysis Services, some additional setup is needed, requiring an Azure subscription. Does not support R scripts within Power BI, if deploying the gateway in personal mode. Some data sources are not explicitly supported - you can refer to the following article for further details. Cannot be used by any other Azure service, other than Logic Apps and Azure Analysis Services. Conclusions The on-premise gateway is the natural solution to turn to if your needs are focused solely within the Power Platform range of apps. It provides a fast, scalable and - ultimately - secure mechanism to expose out your on-premise data for multiple purposes. This includes for analysis within Power BI/Azure Analysis Services, utilisation as part of a Power Apps or to help automate a complex business process via Power Automate flows or a Logic Apps. Where the solution begins to lose some of its potential usefulness is if you are wanting to implement a genuine Extract, Transform \u0026amp; Load (ETL) process involving your on-premise data sources. In this situation, the next solution will almost certainly tick your boxes\u0026hellip;\nSelf-Hosted Integration Runtime Overview The self-hosted Integration Runtime (IUR) is a specific application bound tightly to Azure Data Factory (ADF), a solution that you can best think of as the natural, cloud-based evolution of SQL Server Integration Services (SSIS). As well as providing a mechanism for organisations to execute their ADF pipelines within their environment, they also allow you to make connections to any supported on-premise resources that the IUR machine can access. Installed via a similar client to the on-premise gateway, administrators register their self-hosted IUR to a specific ADF resource on Azure; this can then be shared to others on the same subscription if required. Managed in a similar way to the on-premise gateway, from within the ADF interface, they provide the quickest and securest means of processing on-premise data stores as part of your ETL processes.\nAdvantages Lets you manage and scale performance of your ADF pipeline runs. Supports a far greater list of data sources compared to the on-premise gateway - full details can be viewed on this Microsoft Docs article. 
Easy to install and fully manageable from within Azure Data Factory. For example, you can automatically push updates to your integration runtime without even needing to log on to the machine in question. Multiple IUR\u0026rsquo;s can be installed on a single machine, targeting different resources, and you can also share an IUR across one or several ADF resources. Disadvantages Although possible in practice, Microsoft recommends against installing the IUR on the same machine as an on-premise gateway; in this scenario, it would be necessary to have two separate Windows machines available, which can lead to additional complexity/cost. Does not currently support the ability to execute a data flow on-premise. The IUR is limited to use within a Copy Data Activity only. Cannot be used with other Microsoft Azure / Power Platform services. Conclusions Targeted towards a specific usage case, the self-hosted IUR achieves its purpose remarkably well. It significantly reduces the barrier of adoption for ADF, following the same principles as the on-premise gateway when it comes to its deployment mechanism. As such, it really can be remarkably effective when building out a genuinely cloud-based ETL solution, which can leverage the additional benefits ADF can deliver - a subject which I have hammered on about on the blog plenty of times previously. However, its greatest strength is also its greatest weakness - as a solution tied so closely together with ADF, it is impossible to use with other Azure or Power Platform resources, such as Logic Apps or Azure Functions. This circumstance limits the runtime in several ways, and also seems somewhat baffling, given my understanding that both the on-premise gateway and the IUR share almost the same code base. Merging both of these tools in the future would, in my view, help organisations to more readily consider ADF as a possible solution and reduce the complexity of any solution involving elements of the Power Platform alongside ADF.\nHybrid Connections Overview The final type of gateway is technically not a gateway but is instead termed a connection. Despite this, it still involves the installation of an on-premise application, which you then register to Microsoft Azure in a similar way to the other tools discussed so far. Compatible with Azure App Service and Function apps, they provide a mechanism to access local network resources, targeting any potential environment. For example, a developer can connect to an on-premise SQL Server database using Hybrid Connections. Once configured and exposed, via the appropriate network address and port, the web app\u0026rsquo;s connection string can then reference the local server name and port number as if it were being connected to locally, from the Hybrid Connection Manager machine. Due to its configuration options, developers have a high degree of control over which network resources they can interact with; provided that the appropriate destination and port number are contactable through Hybrid Connection Manager, it is a valid endpoint for connections. As such, it represents the most powerful of all the gateway applications discussed so far but designed for meeting very bespoke requirements.\nAdvantages Lightweight and straightforward client installation. Provides highly granular control over the on-premise network resources to expose out. As a network-level solution, it is agnostic towards your chosen language or app technology, thereby increasing its potential usability. 
As a metered service, you only pay for what you use\u0026hellip; Disadvantages \u0026hellip;but if you are transferring terabytes of data per month, it could cost you anywhere in the region of £700+/month to maintain. It is limited in scope to only Azure App Services and Azure Functions. Requires specific configuration to expose the ports/local addresses for access online. Hybrid Connections do not support Windows authentication for data sources, such as SQL Server. Conclusions Hybrid Connections is the gateway of choice for bespoke application developers. Unlike all previous connectors, which work within the confines of existing solutions firmly targeting core Microsoft products, Hybrid Connections don\u0026rsquo;t care what your app is doing or even which language/framework it is written in. Instead, it provides a highly configurable, simplistic means of accessing any on-premise resource, to achieve almost any conceivable task - whether it\u0026rsquo;s updating a database table, obtaining the contents of a local file or firing an HTTP request to an internal web application. However, since it doesn\u0026rsquo;t support Windows authentication, it could prove challenging to implement if you are targeting services such as SQL Server. Also, its granular approach to configuration could make it challenging to implement, without some trial and error involved.\nFinal Thoughts Hopefully, this post has clarified and dispelled any confusion you may have regarding options for getting your on-premise data working within the Microsoft cloud. As always, you should evaluate and determine the precise nature of your business requirement, and align yourself towards the tool that is going to be the easiest to deploy and maintain. In most cases, the answer to this will be to use the on-premise gateway, given that it is the tool with the highest \u0026ldquo;spread\u0026rdquo; of potential compatible applications. The other connectors are more evidently tailored to situations where a degree of bespoke development is required for your solution and, although equally straightforward to configure, come with the additional baggage and technical complexity involved whenever you consider building a bespoke solution.\n","date":"2020-02-02T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/on-premise-in-cloud-reviewing-microsoft-azure-power-platform-gateway-tools/","title":"On-Premise, In-Cloud: Reviewing Microsoft Azure \u0026 Power Platform Gateway Tools"},{"content":"Setting up and managing authentication can sometimes be a bit of a nightmare, even more so when it comes to creating a solution that integrates with an existing system. In these circumstances, the desire will often be to avoid the use of named accounts belonging to individuals or to ensure the configuration of any service accounts involves the granting of the minimum privileges needed to satisfy their function. Additional complications can also arise if you introduce components such as multi-factor authentication into the equation. Services, such as Office 365, allow you to get around this by using a feature known as App Passwords, but the simple presence of this \u0026ldquo;backdoor\u0026rdquo; login mechanism could raise serious information security concerns. 
Attempting to find the best way forward in this scenario and, ultimately, how to go about setting up a satisfactory (and secure) authentication solution can be a significant challenge.\nSuch concerns recently became apparent to me when I was doing some work involving the SQL Server Integration Services (SSIS) connector for the Common Data Service (CDS) / Dynamics 365 from KingswaySoft. I was encountering issues when testing my connection to the application. Specifically, having followed the instructions on how to configure an OAuth connection and set up the appropriate Azure Active Directory (AAD) App Registration, the following error message appeared when performing a test connection:\nThe full text of this error is reproduced below:\nKingswaySoft.IntegrationToolkit.DynamicsCrm.WebAPI.WebApiServiceException: The remote server returned an error: (400) Bad Request. (Error Type / Reason: BadRequest, Detailed Message: {\u0026ldquo;error\u0026rdquo;:\u0026ldquo;invalid_grant\u0026rdquo;,\u0026ldquo;error_description\u0026rdquo;:\u0026ldquo;AADSTS65001: The user or administrator has not consented to use the application with ID \u0026lsquo;\u0026rsquo; named \u0026lsquo;\u0026rsquo;. Send an interactive authorization request for this user and resource.\\r\\nTrace ID: \\r\\nCorrelation ID: \\r\\nTimestamp: \u0026rdquo;,\u0026ldquo;error_codes\u0026rdquo;:[65001],\u0026ldquo;timestamp\u0026rdquo;:\u0026quot;\u0026quot;,\u0026ldquo;trace_id\u0026rdquo;:\u0026quot;\u0026quot;,\u0026ldquo;correlation_id\u0026rdquo;:\u0026quot;\u0026quot;,\u0026ldquo;suberror\u0026rdquo;:\u0026ldquo;consent_required\u0026rdquo;}) (SSIS Integration Toolkit for Microsoft Dynamics 365, v11.1.0.7311 - devenv, v16.4.29613.14)System.Net.WebException\nMy initial reading of the error suggested that admin consent had not been granted for the app registration, even though I\u0026rsquo;d already done this. After some further research, and with a full grasp of how OAuth authentication works for CDS / Dynamics 365, the solution began to become apparent. Although this type of authentication does require the full username and password for a licensed CDS / Dynamics 365 user, Microsoft handles the actual authentication into these services via the App Registration setup on the AAD tenant. This setup explains the need to include an Application ID and Client Secret when authenticating in this manner; after this initial hurdle, connections using the details supplied earlier will then be impersonated using the App Registration. All this is well and good but causes a potential security risk, as a single App Registration could inadvertently have full read/write access to your CDS / Dynamics 365 database. Microsoft deals with this via the user_impersonation permission, which you must grant to your App Registration following its creation to allow this action to take place. The API permission for this resembles the following within the Azure Portal:\nTaking this into account, therefore, you can deal with error AADSTS65001 by assigning the above permission to your app registration. Once complete, make sure to also grant Admin Consent using a global administrator account on the tenant, via the button circled below:\nWith these two actions complete, any test authentication should now complete successfully:\nWith this vital barrier overcome, you can then plough ahead with building out your solution. 🙂
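As an aside, and purely to illustrate what happens under the hood, the sketch below shows roughly what a token request of this kind looks like - it is a rough illustration, not the connector\u0026rsquo;s actual code, and every value in it is a placeholder. Note how the app registration\u0026rsquo;s Application ID and Client Secret travel alongside the licensed user\u0026rsquo;s credentials, which is precisely why the user_impersonation permission and admin consent matter:

```typescript
// Illustrative sketch only: an AAD (v1 endpoint) password-grant token request of the kind
// performed on your behalf. All values below are placeholders - substitute your own.
async function getDynamics365Token(): Promise<string> {
    const body = new URLSearchParams({
        grant_type: "password",
        client_id: "<application-id>",           // the AAD App Registration's Application ID
        client_secret: "<client-secret>",        // the App Registration's Client Secret
        username: "licensed.user@mytenant.com",  // a licensed CDS / Dynamics 365 user
        password: "<password>",
        resource: "https://mycrm.crm11.dynamics.com/",
    });

    const response = await fetch("https://login.microsoftonline.com/<tenant-id>/oauth2/token", {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body,
    });

    const token = await response.json();
    return token.access_token; // used as a Bearer token for subsequent Web API calls
}
```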
As this post hopefully demonstrates, OAuth authentication involving CDS / Dynamics 365 introduces an additional complexity layer into the deployment of your solution. You need not only some experience working within Microsoft Azure, but also elevated privileges to ensure you can assign any required permissions in the first place. No doubt the technical implementation outlined in this post ticks a few boxes from an information security standpoint. It is nevertheless a potential cause of frustrating blockers when trying to get a new solution put in place, particularly if you are used to a simpler authentication experience involving just a username and password. Keeping in mind the necessary setup steps involved here will allow you to implement and communicate the precise requirements more effectively, whenever you have to deal with this requirement now or in the future.\n","date":"2020-01-26T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/resolving-aadsts65001-common-data-service-dynamics-365-online-oauth-errors/","title":"Resolving AADSTS65001 Common Data Service / Dynamics 365 Online OAuth Errors"},{"content":"Welcome to post number six in my series, focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. In last week\u0026rsquo;s post, we looked at how you can use Power Automate as the ultimate workflow and automation tool within the Power Platform. Power Automate flows only scratch the surface of what is available from a process automation standpoint within Dynamics 365 and the Power Platform, a statement which the relevant exam assessment area makes all too obvious:\nConfigure Microsoft Flow\nconfigure a Flow configure actions to use CDS connectors develop complex expressions Implement processes\ncreate and configure business process flows create and configure business rules As we can see, we also have Business Rules and Business Process Flows included in this category. As components inextricably bound to the Common Data Service and model-driven Power Apps experience, Dynamics 365 / CRM developers will be most familiar with how they work; but that doesn\u0026rsquo;t mean a quick refresher wouldn\u0026rsquo;t go amiss! 🙂 In today\u0026rsquo;s post, we\u0026rsquo;ll first take a look at Business Rules and explain why they should always be a developer\u0026rsquo;s best friend when extending model-driven app forms.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nBusiness Rules Overview In days gone by, when there was a need to carry out more complex, logic-based actions directly on a model-driven Power App form, client-side scripting via JScript form functions would have been the only mechanism to satisfy this requirement. Examples of the types of things I mean by this include setting the value of a form\u0026rsquo;s field, based on when another field changes, showing/hiding fields or even displaying error messages when a user violates a business condition. 
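To give a flavour of the sort of specialist script this traditionally involved, here is a minimal, illustrative form function written in TypeScript against the client-side API (it assumes the @types/xrm definitions, and the field names are made up for the example):

```typescript
// Illustrative only: the kind of form function that declarative tooling can now replace.
// "new_estimatedbudget" and "new_discount" are hypothetical fields on the form.
function onEstimatedBudgetChange(executionContext: Xrm.Events.EventContext): void {
    const formContext = executionContext.getFormContext();
    const budget: number | null = formContext.getAttribute("new_estimatedbudget").getValue();

    if (budget === null || budget < 10000) {
        // Hide the discount field and warn the user when the budget is too low
        formContext.getControl<Xrm.Controls.StandardControl>("new_discount").setVisible(false);
        formContext.ui.setFormNotification("Estimated Budget must be at least 10,000.", "WARNING", "budget_check");
    } else {
        formContext.getControl<Xrm.Controls.StandardControl>("new_discount").setVisible(true);
        formContext.ui.clearFormNotification("budget_check");
    }
}
```

Nothing in it is particularly exotic, but it is still code that someone has to write, test and maintain.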
Given the specialist knowledge area involved, having to bring in someone with the required technical knowledge to achieve what I certainly see as basic requirements can introduce a degree of complexity and technical/monetary cost into a project or as part of its ongoing maintenance.\nFor this reason, from Dynamics CRM 2013 onwards, Microsoft has provided system customisers with the means to satisfy some of the common scenarios highlighted above, via the use of Business Rules. These offer a built-in and supported mechanism to implement client-level or even platform-level operations when certain conditions are satisfied. They are technically classed as a Process by the application and, like classic Workflows, must always be bound to a single Entity on creation. From an execution standpoint, they almost certainly interact and work with the same client-side API\u0026rsquo;s Microsoft expose to developers writing JScript form functions, but with a reduced risk of causing potential end-user errors when you implement them. They can also negate the need for prolonged testing or factoring in as part of a system upgrade, as they should \u0026ldquo;just work\u0026rdquo; in all circumstances. In short, an effective Dynamics 365 or Power Platform developer should use and push Business Rules to their absolute limit, before even considering writing a single line of JScript. This is a topic I have banged on about (or, rather, evangelised over) previously on the blog and it is worth repeating more than ever, particularly in the context of the MB-400 exam.\nBusiness Rule Visual Editor Business Rules are created in the Power Apps portal by navigating to your target Entity and selecting the Business Rules tab. From here, the visual editor will load, represented in the screenshot below:\nThe editor contains many buttons and areas that it\u0026rsquo;s worth getting more familiar with, so you can fully appreciate the capabilities of Business Rules. Each one has been numbered and explained in detail in the list that follows:\nExpanding the arrow here will allow you to define a name (mandatory) and a description for your Business Rule. I highly recommend always providing a useful description of any Business Rule, to assist any future users of the system or to even give your memory a jog down the road. 🙂 From this area, you can save your Business Rule, validate its current structure to highlight any warnings/errors, define the scope of the Business Rule (discussed in more detail shortly) or access some of the available help articles for Dynamics 365. The icons here let you quickly add an additional component to your Business Rule or perform standard operations, such as cut, copy, paste and delete. On these, it is worth highlighting that the familiar keyboard shortcuts will also let you perform each action. Finally, you can use the Snapshot button to download a .png image file of the Business Rule, which you can then include as part of any documentation or training materials. The central part of the visual editor allows you to select, drag \u0026amp; drop or re-order any added components via the use of your computer mouse. Selecting each component will also display its underlying properties within the area highlighted in 6 on the screenshot. For example, the properties tab for the Lock/Unlock action resembles the following: Here, you can toggle between the Components and Properties tab. The Components tab, when selected, will display a list of all possible components you can add to your Business Rule. 
The next section will discuss all of these in further detail. Traditionally, Business Rules would be specified and built out using a text-based view, defined within IF\u0026hellip;THEN\u0026hellip;ELSE logic. This experience is persisted as part of the new visual editor experience, to provide a precise and straightforward mechanism for you to validate the logic of your Business Rule. Finally, the options here let you zoom in/out and also adjust the view to fit the canvas of the visual editor. Available Components A Business Rule is made up of one, several or many components, depending on the complexity of the business logic that needs to occur. These are defined under the two categories of Flow and Actions. Within the visual editor, you will see a handy \u0026ldquo;toolbox\u0026rdquo; on the Components tab, that lists everything we can double click/drag and drop onto a Business Rule:\nA description of each of these is found below:\nConditions: As the central component within all Business Rules, at least one of these must exist. From there, you can define multiple rule sets; these represent the specific field on the current entity whose value you wish to evaluate. You have a range of operators at your disposal when performing evaluations, primarily dictated by the data type of the selected field. For example, a text field has operators such as Contains, Equals, Begins with etc. When multiple rule sets are defined, you must also indicate the type of grouping to apply to the evaluation - AND or OR. It is not possible to specify more granular grouping rules underneath this level. Recommendations: In certain situations, you may wish to guide users towards populating a form in a specific manner, based on other inputs that have been made to the record so far. This component type meets this requirement by allowing you to show custom text to the user, alongside a button. Once pressed, the Business Rule will then apply any number of updates to other fields on the form. It is then left up to the user to decide whether they accept the recommendation or to ignore it entirely. Lock/Unlock: With this, you can set or remove a fields read-only property on the form. Show Error Message: By selecting a field and specifying a custom message, it is possible to display any bespoke error message, that will bind itself to the field chosen. Set Field Value: Perhaps one of the most potent components at our disposal, this allows you to auto-populate other fields on the form when our stated conditions are met. You can configure this component in one of three ways, based on the Type value you specify - you can choose to provide a custom value (Value), set the field to match another on the form (Field) or even remove all data from a field (Clear). Set Default Value: This works on a similar premise as the Set Field Value component, with the exception that it does not include a Clear option and is instead designed to populate a field automatically when creating a new record. Set Visibility: Using this component type, you can toggle whether a field appears or is hidden from the user. As highlighted earlier, each component has its own set of distinct properties that you must specify for it to work correctly. It\u0026rsquo;s worth spending some time familiarising yourself with these properties, and also on how they behave on an entity form once published.\nScope An important concept to grasp relating to Business Rules is the circumstances around their application, which is dictated by its Scope. 
Depending on the value of this property, your Business Rules logic could be executed as expected or not at all. You should, therefore, consider which value to select with this property, to ensure your Business Rule always runs as expected. The list of possible values you can choose for this include:\nSpecific Form: A list of all available forms for the entity will display in the dropdown box, meaning it is possible to scope a Business Rule to a single form only. All Forms: Does exactly what it says on the tin 🙂 Entity: As we\u0026rsquo;ve seen so far, the majority of available components for a Business Rule relate strictly to a form itself. Having a Business Rule with a scope of Entity will ensure that the appropriate action occurs, regardless of whether the user updates the record via a form, Power Automate flow or SDK call. While potentially a powerful mechanism of enforcing business logic across an entire CDS instance, consider carefully the impact this action may have in conjunction with your broader solution. Demo: Creating a Business Rule In the video below, see how it is possible to create a simple Business Rule and how it then behaves within a model-driven app:\nI\u0026rsquo;ll be taking a brief hiatus in this series for the next few weeks, but keep your eyes peeled soon for the next post, where we\u0026rsquo;ll be finishing up our discussion on processes by taking a look at Business Process Flows.\n","date":"2020-01-19T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-implementing-business-rules/","title":"Exam MB-400 Revision Notes: Implementing Business Rules"},{"content":"Welcome to the fifth post in my series, focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. We\u0026rsquo;ve covered a lot of ground in the series so far, finishing up our discussion on model-driven apps and canvas apps over the past fortnight. While these tools are often great at getting your data into the system, they usually are not enough to satisfy a more comprehensive business process. In this scenario, the exam topic area Configure business process automation comes in to play, by covering the essential tools you\u0026rsquo;ll have to work with to ensure your business system adheres to a well-defined process. This exam area assesses candidates in the following topics:\nConfigure Microsoft Flow\nconfigure a Flow configure actions to use CDS connectors develop complex expressions Implement processes\ncreate and configure business process flows create and configure business rules In today\u0026rsquo;s post, we\u0026rsquo;ll look at the most powerful automation tool that we now have at our disposal as Power Platform / Dynamics 365 developers - Power Automate.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nPower Automate Overview Previously known as Microsoft Flow, Power Automate is the logical evolution of the traditional workflow experience within Dynamics CRM and - over time - we can anticipate Power Automate slowly replacing this functionality. 
As well as affording us near-total feature parity with workflows, Power Automate flows provide us with a modern and highly tailorable means to automate a business process, using the capabilities within Azure Logic Apps to power everything. Through the use of a visual editor, developers can build out an entire business process from start to finish. Also, via the implementation of traditional programming flow controls, it is possible to make incredibly diverse systems talk together in ways that you could not have imagined previously. Some of the benefits that Power Automate provides us with include:\nAccess to well over 315 connectors to a variety of different data sources and applications, including SQL Server, Salesforce, Twitter, MailChimp and SendGrid, to name a few. The ability to trigger flows based on specific trigger points or schedules. Through the implementation of an on-premise data gateway, data sources not exposed for access across the internet become surfaced for use within Power Automate. Integrates seamlessly with canvas Power Apps, thereby allowing you to trigger a flow based on a specific action point within a canvas app. Robust testing and debugging capabilities. Allowing you to implement simple or complex approval workflows. Full support for use alongside solutions, letting us quickly deploy Power Automate flows out to multiple environments and incorporate flow design as part of your Application Lifecycle Management (ALM) processes. When your solution needs to scale, you can very easily extract your flow and migrate it into Logic Apps, to take advantage of more enterprise-focused features. Power Automate flows, as mentioned, are designed to fill the gap for workflows. But they can go so much further than that and, often, negate the need to look at implementing a plug-in or custom workflow activity to achieve more complex integrations. Developers should always explore and attempt to use Power Automate flows wherever possible, before resorting to other solutions. Having full awareness of all the available connectors for Power Automate, and also of the product\u0026rsquo;s specific limitations from an execution standpoint, will best help you in making this judgement call.\nCreating a Power Automate Flow The Power Automate portal is your go-to destination for working with Power Automate. Accessible from within the Office 365 portal, here you can:\nManage any approvals assigned to you by other users. View, manage, edit or delete any flows created by you. Manage the various connectors set up for your account or within your organisation. Interact and work with the Common Data Service (CDS) database and solutions. View and utilise existing flow templates, covering common business scenarios. Access and leverage AI Builder capabilities. From a Power Automate flow creation standpoint, the main focus of your time will be within the visual editor, as indicated below:\nThis experience provides a fully immersive mechanism for developing your flows, without needing to worry about installing additional tools onto your machine. After completing and saving your flow for the first time, it will start executing, based on the predefined trigger action chosen for it. You can then do the following with a completed flow:\nShare it with other users within the organisation. Submit it as a template to Microsoft, for inclusion within the template gallery. Export its definition as a .zip file or a Logic Apps template. View analytics relating to it and its execution, e.g. 
total runs per day, number of errors raised etc. This feature utilises Power BI tiles, providing an intuitive experience when analysing the metrics for your flow. Toggle whether your flow is on or off. View a list of all previous runs for the flow, including the input/output information for each step. In short, whether you need to test, diagnose or figure out whether users in your organisation are still using a flow, there are tools here to help you along.\nPower Automate flows are built up of various core components, all of which you will need to be familiar with for the exam:\nConnectors These are the fundamental components that make Power Automate such a versatile solution, by giving you the ability to connect up to various services or solutions, with a range of corresponding actions then made available for use. Typically, you will need to authenticate with each service using a valid username and password; once created, the connection is then stored and available to use across multiple flows, if required. Connections will also be shared out to users automatically whenever you do the same for the flow itself. Consider carefully what impact this may have from a permissions or data protection perspective.\nAs mentioned earlier, the list of available connectors is vast and is growing all the time. Note also that, similar to Power Apps, Microsoft classifies some connectors as Premium. Connectors of this type will only be available to you if you have been assigned a paid Power Automate license and Microsoft mark these accordingly within the Power Automate portal:\nDevelopers can also build custom connectors to either use within their current tenant or publish for availability to Power Automate users the world over. This topic, which has its own specific exam area for MB-400, will be covered in further detail in a future post.\nIt is worth discussing the CDS connector in further detail at this stage - not only because it\u0026rsquo;s a topic for the exam, but also because it may lead to a degree of confusion when you first open Power Automate. The reason for this is that there are two CDS connectors available:\nThe first of these connectors is a solution-independent connector that has the following triggers/actions defined for it:\nThis connector allows you to connect to any CDS tenant, regardless of the Office 365 tenant it resides within. This connector is most useful when your flow will always target a single environment, there is no need to manage it formally within a solution, and you only need to perform basic CRUD operations targeting your CDS database.\nThe second connector - titled Common Data Service (current environment) - is the complete opposite of the above, and should be used when you need to include your flow as part of a solution. Doing so will ensure the flow detects the correct CDS environment to target after your solution has been imported successfully into your new environment. As such, there is a far greater list of available actions for this connector, and a single trigger action is provided that covers all potential scenarios within CDS:\nWherever possible, you should be using the Common Data Service (current environment) connector and managing your flows within a solution. 
This will significantly simplify the process for rolling flows out into different CDS environments, should the need arise.\nFor the exam, I would recommend that you read up on the standard and also the current environment connector so that you are familiar with the capabilities within each one.\nTriggers In a nutshell, these are the things that start a Power Automate flow. A flow must always have a single trigger, and there are, broadly, three different types available to us:\nAction: These will typically occur based on an event within an application or system - for example, whenever a user creates a new record. Power Automate will poll the data source frequently to detect when this action step occurs and then kick off the flow accordingly. In the screenshot below, we can see an example of the CDS trigger for when a new record has been created: Schedule: Flows of this type will run based on a pre-defined schedule. We have a high degree of control over the various settings here, including frequency, interval, time zone and start time. The example screenshot below illustrates a schedule that runs every day at 10 AM GMT: Manual Trigger: Finally, it is possible to execute a flow only when you need to. As part of this, it is possible to specify different user input parameters, which can then be further leveraged within the flow to modify its behaviour. The various types of user inputs available are illustrated in the example below: You can find out more about how to use this trigger type from the portal or via a mobile app on the Microsoft Docs website. Actions We\u0026rsquo;ve just discussed the component that starts your flow; actions lead on from this by dictating what your flow does. You must have at least one action in your flow, which leads on from your appropriate trigger. Depending on the connector you use, actions can range from performing simple CRUD-type operations to more advanced tasks, such as sending an e-mail, creating a file or lying dormant until an approval is received. In the example below, we can see we have an action that retrieves the top 10 Contacts from the system, ordered by the createdon field in descending order, whenever an Account record is updated:\nControl Although these are technically just another action type, it is worth studying the various control actions available to us, all of which allow us to implement programming-like logic flow into our Power Automate flows:\nCondition: These are akin to your traditional if programming tests, allowing you to perform different actions, based on whether the condition is met successfully or not. You can specify multiple conditions as part of this, using AND/OR logic, and also group conditions together if needed. An empty condition within Power Automate resembles the below: Apply to each: You will use this control type the most when processing multiple result sets; it works on a similar principle to your traditional foreach programming loops. By specifying an appropriate collection or list of items, you can then execute one or multiple actions affecting each record. This control type can be useful if, for example, you need to update a list of CDS Contact records in bulk whenever an Account record is updated. Do until: This is useful for when you need to carry out an action until a condition is true, and it works on a similar basis to a while loop. You specify the control to keep checking each time any sub-actions complete and, once the condition is met, execution will stop. 
By using the built-in expression language within flow in conjunction with this (more on this shortly), it is possible to construct more complex conditions to evaluate continually. Scope: These are more of a visual helper tool, as opposed to something that can be linked back to a programming concept. They provide a mechanism to group multiple, related actions so that they can be collapsed/opened more easily within the visual designer. They are a purely optional component and - to be honest - I had to look up exactly what they were as part of writing this blog post. 😂 Switch: Out of all of these, the name of this one very closely mirrors its equivalent C# programming feature. Using it, you can evaluate a specific value and then call action(s) based on this value. Finally, if it's not possible to determine a matching value, you can instead execute a default action. Depending on your scenario, a Switch could be a far more natural solution to use compared with Conditions. Terminate: Finally, this action lets you immediately stop the flow, with a high degree of control over how to end it. Its nearest programming equivalent would be a throw, but the main difference here is that there are three different statuses that you can terminate a flow with: Failed, Cancelled or Succeeded. You can also (optionally) display an error code or message as part of this. Terminate can act as a useful means of forcing bespoke errors within a flow if, for example, a business process or control has been violated by not including a specific field as part of creating a new record. Control actions can be an incredibly powerful tool, allowing a single flow to carry out widely different actions, based on the data that traverses through it.\nWorking with Expressions We saw in last week's post how canvas apps have an expression language, allowing us to programmatically trigger actions or alter how an app behaves, using an easy-to-use range of formulas. Power Automate works on a similar basis, by having a separate expression language, which is a useful tool when you need to perform a complex evaluation of conditions. The language derives from the Workflow Definition Language (WDL) used by Logic Apps, and it is possible to start working with expressions via a dedicated pop-up dialog window that appears:\nThis view will display a list of the most commonly used expressions, along with their appropriate syntax, which you can then include in your flow by clicking each one, modifying it within the expression language bar and then pressing the OK button.\nFor the MB-400 exam, it is impossible to discuss, learn and demonstrate every possible function that is available to us - as indicated by the online reference guide, the number of currently available functions is well into the hundreds. Instead, it's worth focusing on a few of the more commonly used ones in each category, so you can get a feel for what is broadly possible to achieve:\nconcat: One of the many possible string functions available to us, this allows you to combine two or more strings. For example, using concat('This ', 'is ', 'a ', 'test.') would return This is a test. Other functions in this category cover everyday situations such as converting a string to upper case or returning a selection of characters from a string. length: This will return a number, indicating the number of items/records within a collection that you specify.
Note that as a collection function, this can only be used with arrays or strings that conform to a JSON structure. Within this category of functions, you can also do things such as union joins or returning the very first item in a collection. and: As a logical comparison function, this will evaluate all of the boolean expressions supplied to it and return true only if every one of them is true; otherwise, false. The full range of logical comparison functions covers greater than, less than, or and even if statement evaluations, allowing us to fine-tune the conditional nature of our flows further. int: Allows you to convert a string into an integer value, provided it is a valid numeric value. There are conversion functions aplenty within Power Automate, of which int forms a part, and other functions of this type allow us to create a brand new array value, decode base64 strings or create a URI-encoded version of a web URL. min: Based on evaluating an array or list of numeric values, this function will return the lowest value detected. Power Automate has many different mathematical functions at our disposal, allowing us to perform simple additions, divisions or even generate random numbers. utcNow: This function returns the current date and time in UTC format. It's one of the many date and time functions available to us. Others include functions to add a specified number of days to a date, convert a date into any given timezone or return the day portion of a date. variables: Returns the value of a variable that you have specified within the flow, which can be useful for when you are performing conditional evaluations. This function is one of the workflow functions, with other functions available to return the individual results of actions within a scoped action, metadata relating to the flow's execution or the output of an action. To find out more about working with variables, you can check out this very useful video from Matt "The D365 Geek" Collins-Jones, where he will show you how to set a variable within a Power Automate flow. uriPath: For situations where you need to extract the path value of a URL (i.e. the bit after the domain portion, such as /mypage.html), this function will let you achieve this. There are other URI parsing functions available that do similar things, such as returning the host, port or query from a URL instead. removeProperty: Finally, this function will remove a specified property from a JSON object, returning the updated object for you to process further if required. This function is part of the small list of JSON/XML manipulation functions, with additional functions available to add or update the properties in a JSON object instead or execute simple XPath queries on an XML document. Experienced developers should have little difficulty grappling with the type of functionality available here, which should closely mirror what you will typically find in languages such as C#. This factor only emphasises further the importance of considering flow early on as part of designing your solution.\nDemo: Creating a Power Automate Flow To get a feel for how to work with Power Automate flows, check out the video below:\nPower Automate is a worthy topic for many blog posts and videos, and I wouldn't worry too much about becoming an expert for the MB-400 exam. Just take time to familiarise yourself with the use cases and broad capabilities flows bring to the table - and take a quick look at the worked expression example below.
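Before moving on, here is a minimal, hypothetical expression that ties a few of the functions above together. It assumes an array variable named ContactList has already been initialised earlier in the flow (the variable name is purely illustrative), and returns a short status message depending on whether the variable contains any items:
if(greater(length(variables('ContactList')), 0), concat('Processed ', string(length(variables('ContactList'))), ' contacts at ', utcNow()), 'No contacts to process')
Here, greater and length perform the logical test, while concat, string and utcNow build the message that is returned when the condition evaluates to true. Typing something along these lines into the expression bar is a good way of getting comfortable with how functions nest inside one another.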
In next week\u0026rsquo;s post, we\u0026rsquo;ll round up our discussion on processes by looking at Business Rules, an amazing tool that can help you get further mileage out of your model-driven Power App.\n","date":"2020-01-12T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-using-power-automate-flows/","title":"Exam MB-400 Revision Notes: Using Power Automate Flows"},{"content":"Welcome to the fourth post in my series, focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. Previous posts in the series have looked at areas such as creating a technical design using the Power Platform and working with the Common Data Service (CDS). Today\u0026rsquo;s post will look to complete last week\u0026rsquo;s discussion on model-driven Power Apps and the exam topic area Create and Configure Power Apps, by taking a look at canvas apps. Incidentally, it\u0026rsquo;s worth noting that this exam area measures the following skill areas:\nCreate model-driven apps\nconfigure a model-driven app configure forms configure views configure visualizations Create Canvas Apps\nconfigure a Canvas App develop complex expressions As with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nThat\u0026rsquo;s probably enough boilerplate text for now. 🙂 So let\u0026rsquo;s jump in and find out more about canvas apps!\nCanvas Apps Overview As we saw in last week\u0026rsquo;s post, a model-driven app fits its particular usage case very well. Namely, if you need to provide a highly focused, data-driven experience, without having to concern yourself with fine-tuning the client interface, then a model-driven app is the one for you. A traditional shortcoming with them has, however, always been on doing the opposite of their optimal areas. This includes situations for when you need to:\nDeploy a mobile optimised application that works natively on mobile/tablet devices: Although there is a Dynamics 365 mobile app and the new UI interface is mobile optimised, Microsoft has not designed model-driven apps for mobile users. From an access and usability standpoint, they can be severely limiting, which can hamper their potential usefulness for mobile or field workers. Quickly develop an interface, without resorting to code: Features such as the Power Apps Component Framework (PCF) and Web Resources are the only mechanism given to us to make any significant modifications to a model-driven apps interface, and only then via custom code. Connect to external data sources: Model-driven apps, by default, must surface data that resides solely within CDS. Again, it is possible to bring in other data sources using tools such as Virtual Entities, but this typically only surfaces read-only data and will involve a high level of configuration or custom code to deploy successfully. Integrate specific action prompts with automation tools within Power Automate: Although the CDS can trigger Power Automate flows after a particular database action, it is not possible to trigger their execution via a button press or similar. All of these scenarios, and more, can be tackled successfully using a canvas app. 
Within this framework, business users and developers alike can quickly build out bespoke applications, using a drag and drop interface, thereby enabling their apps to run on any type of device imaginable. Microsoft often compares canvas apps to tools such as Microsoft PowerPoint from a usability perspective, insofar as it is possible to develop a bespoke app along the same lines as a slide deck. I see canvas apps more as the logical evolution of Microsoft Access, as a cloud-first tool for building out bespoke business applications that can, much in the same way as Access, be self-contained within a database (in this case, the CDS) or instead be utilised to surface data from external sources, such as SQL Server. If you are contemplating a migration away from Access soon, then Power Apps is very much one of those tools I would urge you to consider.\nFrom a canvas app user's standpoint, a completed canvas app can be used not only from a web browser but also on Android and iOS devices, through a dedicated mobile app. Once deployed, users can access and work with multiple apps within a controlled experience. Any changes you make to a canvas app will be pushed out to users after being published. Also, provided that the app developer has built the app correctly, canvas apps can work within an offline context and be set up to automatically synchronise data back to their online source once a connection is re-established.\nAs well as addressing all of the scenarios highlighted in the list earlier, canvas apps also have several other features that can make them advantageous when compared with model-driven apps:\nSupport for a wide range of different input controls, ranging from text fields, sliders, galleries, custom images and even barcode scanning. A powerful expression-based language that you can use to trigger specific actions based on various events or situations that occur within your app. Instant playback capabilities, allowing you to test your app as you make changes. Native integration with several AI-focused features, powered by the capabilities within AI Builder. You will often see canvas apps billed as a tool that "citizen developers" can leverage to significant effect, to prevent the need to invest in developing costly business systems that - you guessed it - would require the services of a "proper" developer. While this concept may raise some eyebrows and concerns, I do believe that Power Apps can be leveraged effectively by traditional CRM/Power Platform developers, to make our lives a lot easier. For example, a canvas app could very easily substitute a scenario where you need to present a bespoke interface within a model-driven app that connects to an external data source. Previously, a Web Resource would be your only route towards achieving this, and that would not be without its own set of challenges to implement. In this context, the opportunity that canvas apps provide to make a developer's job a cakewalk cannot be overstated and, if they allow us to adapt to changing business circumstances more readily, all the better.\nCreating a Canvas App The Power Apps portal is your go-to destination when creating canvas apps. You can choose to either create a Power App in isolation, within your current CDS environment, or bundle it in as part of an existing solution.
I would recommend the latter wherever possible.\nWhen first creating the app, we have several options available to us, illustrated in the screenshot below:\nAs the possibilities demonstrate, we have the ability to quickly create an application based off an existing data source. For initial proof of concept or testing situations, this can be invaluable. Also, app developers can choose either to create from scratch using a Blank app or to select an existing App template, which presents some curated scenarios from Microsoft.\nYou must make an important design decision at this stage regarding the layout of the app and whether you wish to tailor it for a phone (i.e. portrait) or tablet (i.e. landscape) layout. It's not possible to override this setting post-app creation, so take care to evaluate what you think will be the potential usage scenarios for your app and select the appropriate setting.\nThe connectors available are, by and large, the same set given to us within Power Automate and Power BI. You should note in particular that specific connectors are marked as Premium. Only users with a paid-for Power Apps license assigned to them will be able to use these connectors.\nWith your chosen data source or app type ready, the canvas app designer window will open and resemble the below:\nI've numbered on here the most important areas to be aware of, described in detail below:\nThis main menu groups together the various actions you can carry out for your app, broken down as follows: File: Contains settings such as saving, publishing and sharing your app, and working with components such as collections, media or variables. Home: Here, you can add new screens to your app, customise its interface or modify the display/formatting settings of the currently selected component. Insert: This tab contains a list of all the potential component types you can add to your app, which can be quickly inserted by clicking the appropriate option. View: Opens up separate tabs where you can view the list of data sources associated with the app, media, collections, variables and advanced settings about the currently selected component. Action: Provides a mechanism to configure expressions for a component based on everyday scenarios. Within this area, you can (from left to right) analyse any issues the app checker has discovered on your app, undo/redo a specific action, test your app, share it with other users or access some of the Power Apps help options. The expression bar displays the formula for the selected setting on the current component you are working with. In this scenario, I have chosen the Fill property for the Screen component, which Power Apps has configured to use the RGB colour for white. By using the dropdown box, it is possible to view (and then modify) the property of any component, using the Power Apps expression language. We'll be taking a closer look at all of this shortly. Expanding this area allows you to access a tree view illustrating all components within your app, insert new ones, view the list of available data sources and access advanced tools such as the Monitor. The content in this area will be updated, based on whatever option you've selected within 4. In the example screenshot, we can see a tree view of the app, which shows it is a three-screen app, containing various components on each of its respective screens. Contained within this area is the visual editor, showing us exactly how our canvas app will look when deployed out.
Components can be selected, moved or deleted with ease from within this interface. Based on the component currently selected, this area will update to present a view of its respective properties. You can modify any of these properties, either by using the options suggested within the relevant dropdown fields or via an expression. Here, you can adjust the zoom settings of the app within the main window. As we can see, the canvas app designer provides a lot of useful tools and an excellent IDE for building apps from scratch. In the video later on in this post, we will see how this all works in practice.\nWorking with Expressions & Formulas We've touched upon expressions already in the previous section. They provide a pervasive and tailorable means of customising an app's behaviour to trigger specific business logic when certain conditions are met. For example, in the earlier screenshot, we could choose to write an expression that adjusts the fill colour, based on the time of day - if between 9 AM-5 PM, set it to white, otherwise set it to black. The formula to achieve this would look like this:\nIf(Value(Text(Now(), "[$-en-gb]hh")) >= 09 && Value(Text(Now(), "[$-en-gb]hh")) <= 17, RGBA(255, 255, 255, 1), RGBA(0, 0, 0, 1))\nIn this case, I'm using the Now() function to get the current time as a text value, which I then convert into a number. The output of this is then utilised twice as part of an If() function to determine the appropriate fill colour to apply. As part of this, we are using standard operators to perform our logical test. The syntax of the Power Apps expression language should present little difficulty to established developers, as it broadly conforms to most common programming languages. You can best think of the formulas as Excel formulas on steroids, which lends itself nicely to Excel power users who are exploring Power Apps for the very first time.\nIt is impossible to discuss every possible formula here, let alone learn them all for the exam. What follows is a list of some of the ones that you will use the most; I would urge you to review the full list and also experiment further with each one, at your leisure:\nCount: Returns a number indicating the total number of records within the table object supplied. Navigate: Lets you move a user to another screen within your app. As part of this, you can specify the type of transition to perform after triggering the action. Back: Working on the same basis as Navigate, this is the most straightforward function to use to move a user back to the last screen they were on. Patch: Lets you update or create a record in the data source specified. You should ensure that all required field values for your data source are defined here, to avoid any errors from occurring. SubmitForm: A special type of function available only via the Forms control, this works on the same basis as Patch, by allowing you to save/create data entered into the form. Update: A more limited version of Patch, this function can be used to update the values of an existing record in the data source you specify. Upper: Converts the supplied text value to uppercase letters, e.g. test would become TEST. There are equivalent functions available to convert text to lower case and also proper case too. Split: Accepts a string value and allows you to separate its value, based on the delimiter you specify.
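As a minimal sketch of how a couple of these work together - assuming a hypothetical data source named Contacts with FirstName and LastName columns, two text input controls named txtFirstName and txtLastName, and a screen named SuccessScreen, none of which exist by default - the OnSelect property of a save button could read:
Patch(Contacts, Defaults(Contacts), { FirstName: txtFirstName.Text, LastName: txtLastName.Text }); Navigate(SuccessScreen, ScreenTransition.Fade)
Here, Patch creates a new record using the values entered into the two text inputs, and Navigate then moves the user to a confirmation screen with a fade transition; the semicolon simply chains the two behaviour formulas together.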
There\u0026rsquo;s a lot of powerful functionality underneath the surface here, which will surely satisfy any class of developer working with canvas apps.\nDemo: Creating a basic Canvas App In the video below, take a look at the actual process of creating a canvas app, from start to finish:\nWith the capabilities available across both types of Power Apps, it is possible to meet any potential scenario from a business applications standpoint. In particular, canvas apps let you tailor the user interface of your app to suit any need. In next week\u0026rsquo;s post, we\u0026rsquo;ll move on from Power Apps to take a look at another product within the Power Platform family - Power Automate - and see how to use this to automate any potential business activity.\n","date":"2020-01-05T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-working-with-canvas-apps/","title":"Exam MB-400 Revision Notes: Working with Canvas Apps"},{"content":"Welcome to the third post in my series, focused on providing a set of revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. In last week\u0026rsquo;s post, we took a massive deep dive into some of the core customisation topics that you need to have a good grasp of when tackling the exam. This week, we\u0026rsquo;ll be taking a look at the equally huge topic area Create and Configure Power Apps, which measures the following skill areas:\nCreate model-driven apps\nconfigure a model-driven app configure forms configure views configure visualizations Create Canvas Apps\nconfigure a Canvas App develop complex expressions Although this area has reduced weighting compared to last week\u0026rsquo;s topic (10-15%), you will nevertheless need to have a good understanding of both flavours of Power Apps and their core differences. The focus of today\u0026rsquo;s post is on working with model-driven apps, specifically; next week\u0026rsquo;s post will dive deeper into working with canvas apps.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nWith the intro out of the way now, let\u0026rsquo;s dive straight in and take a look at model-driven apps.\nModel-Driven Apps Longstanding Dynamics 365 developers will be most familiar with the model-driven app experience, as it\u0026rsquo;s virtually identical to the inherited user experience previously provided by Dynamics CRM. Targeted towards data-driven applications, that feed directly off the Common Data Service (CDS), they offer a modern approach to creating a bespoke, targeted business application, that exposes the most useful components necessary for a user to complete their job. With a model-driven app, developers can bundle together the following components:\nSite Map: Whereas previously, developers would have to use the XrmToolBox Sitemap Editor to perform sitemap amends, we can now fully customise the sitemap within the application, using a drag and drop interface. And, unlike previous versions of Dynamics 365, multiple sitemaps can exist and scoped to a single model-driven app. This enhanced experience provides us with the capability to have bespoke sitemaps for every model-driven app deployed to a Dynamics 365 / CDS instance. 
To find out more about this new SiteMap customisation experience, the following tutorial article on the Microsoft Docs website shows you how to work with this feature in-depth. Dashboards: It is possible to configure one or several different dashboards that will display for a user when they first load a model-driven app. Business Process Flows: Like Dashboards, we can select one, none or several different Business Process Flows that users will have access to within the app. Entities: Developers can select as many or as few entities to expose within an app. Then, at a more granular level, it is possible to choose specific components from the following categories: Forms Views Charts Dashboards Also, you can configure model-driven apps with the following top-level properties:\nName Description Icon: Developers can use a default icon or upload a new image for the app, that will render when the user is selecting the app from the explorer bar. Welcome Page: If required, an HTML Web Resource document can be uploaded as a welcome page to users when they first launch the app. Enable Mobile Offline: You can enable model-driven apps for offline use by selecting an appropriate Mobile Offline Profile. The screenshot below shows an example of how the default Sales Hub model-driven app looks when deployed out:\nTypically, model-driven apps are best suited for back-office situations, when users are working off a fixed PC/laptop or have a need to interact with CDS data directly. However, the new unified interface (UI) provides a mobile responsive template that works effectively across any device type.\nMerely setting up a model-driven app or deploying one out will not be sufficient for ensuring that users can access it correctly. As outlined in this article, it will be necessary to not only grant security role access to all Entities within the app but also to the app itself.\nAs part of your development cycle for a model-driven app, the process of building the app would probably be last in your order of priority. First, after ensuring you have customised your required entities appropriately, you would proceed next to building out the forms, views, charts and dashboards for the entity. Let\u0026rsquo;s take a look now at how to do this from within the App Designer.\nWorking with Forms Forms are the layer that exposes data for access and modification within a model-driven application. As customisers, we can create four types of forms for each entity:\nMain: These are the standard forms that will be exposed via the web interface and also via the mobile Dynamics 365 app, with some limitations. They are also the most common type of forms to work with. Quick Create: It is sometimes desirable to allow users to quickly add new records to the system, without having to necessarily drill-down first into the entity in question. Quick Create forms meet this objective, by allowing users to select the + icon at the top of a model-driven app that then loads a specialised, condensed form, containing only the values that you need to specify on record creation. The example below shows how the Contact Quick Create form renders inside a model-driven app: Quick Create forms do have several limitations, though. For example, you cannot assign them to particular security roles to curtail access to them, and you can\u0026rsquo;t customise their Headers or Footers. They do, however, support the ability to add on custom event handlers via JScript form functions or Business Rules. 
Quick View: For situations where you need to expose several information points from a related entity, Quick View forms are the best tool to use. They allow you to specify and arrange several fields from an entity into a read-only control, that you can then add onto a related entity form as a reference point. Provided that the related entity\u0026rsquo;s lookup control has a valid value, the information will then load through; otherwise, the Quick View form will not render. Quick View forms have the same limitations as Quick Create forms and, because they present a read-only view, do not support custom event handlers or Business Rules. Card: A new feature for unified interface model-driven apps, these form types work in the same manner as Quick View forms, but are instead explicitly optimised for viewing within a mobile device. However, unlike Quick View forms, they are instead added to an existing subgrid form as a custom control. For example, in the screenshot below from the Account form editor, we can see that I\u0026rsquo;ve added the Contact Card form onto the Contact subgrid: This control will then render as follows within a UI model-driven app: To find out more about the different form types available, and how to work with them, the following Microsoft Docs article goes into great detail on this entire subject.\nThere are no restrictions over the number of forms we can define for an entity. Still, it is generally best practice to build forms to meet specific business areas and then distribute these out for access via a security role. However, take note that you must always have at least one main form for an entity defined as the fallback form, without any security role privileges associated with it. This step ensures that a user always has access to at least one form so that you do not impede them when they\u0026rsquo;re working with an entity.\nThe actual process of working and modifying forms will, in most cases, take place within the recently released Power Apps form designer, as indicated below:\nThis modern experience provides numerous benefits over the traditional, classic designer, primarily in:\nSurfacing a \u0026ldquo;what you see is what you get\u0026rdquo; editor, allowing you to drag around, re-size and manipulate around components and instantly see how your changes will look across multiple devices. Simplifying the process of adding new fields and components onto a form. Allowing form customisers to quickly \u0026ldquo;cut and paste\u0026rdquo; components or fields to new locations on the form. Developers can still customise forms using the classic editor if required. This step will be necessary if you are trying to achieve any of the following tasks:\nAssociate a Web Resources to a form or setup JScript event handlers. Insert specialised controls to a form, such as Web Resources, IFrames, Bing Maps etc. Configure any setting relating to the presentation or user experience within the classic interface. However, wherever possible, you should ensure that you use the new form editor experience as, over time, Microsoft may choose to remove the classic form editor.\nFrom a developers perspective, you will typically work with forms in several different contexts:\nWhen defining and associating a Business Rule to a form. In adding and tailoring event handlers for any custom form functions, which you would typically author in JScript or TypeScript. For when you need to bind a Power Apps Component Framework (PCF) control to a field or sub-grid. 
When rendering custom content via an IFrame or Web Resource. Therefore, having a good awareness of the various properties available as part of the classic form editor (for now at least) will be crucial concerning the exam.\nForm customisation is a broad topic that can take some time to understand fully. Be sure to fully read through the series of Microsoft Docs related to the new form designer, as well as gaining a full understanding of the classic form editor too.\nCreating Views Views are the primary mechanism through which multiple records are…well…viewed and interacted with as part of a model-driven application. Similar to forms, an entity can have numerous types of views defined for it and, as customisers, we can set up as many Public Views as we would like for an entity. It is also worth noting that there are several other view types, all of which are created by default when an entity is first created and can be modified further if required:\nAdvanced Find View: This defines the columns and sorting behaviour of data that is returned by default (i.e. if the user does not override any of these settings) via an Advanced Find search. Associated View: When rendering records from a related entity on a form, the application uses this view by default. Lookup View: The Lookup View appears whenever you search for records within a lookup field or control. Quick Find View: This is the default view that appears when searching for a record using the application's Quick Find functionality. Any column within this view that you or the system define as a Find column will also, I believe, be indexed within the database, thereby leading to faster searches when querying this field. Primarily, you will want to use the new view designer when working with views, which I've illustrated an example of in the screenshot below:\nWithin the view designer, you can:\nAdd on any column from the primary entity or a parent, related entity (e.g. add on the name of the Primary Contact from the linked Account record). Adjust the width of a column and its placement within the view. Define as many sorting rules as required for any data returned via the view. Build out the required list of filters to apply to the data before the application returns it to the end-user. Modify the name and description of the view. However, there may be situations where you have to revert to the classic view editor, which looks like this:\nWhen compared with the form editing experience, the potential use cases for the classic view editor are, at the time of writing this post, limited. Namely, you will only want to use it when adding custom icons to a view's column properties, via a Web Resource, or when adding a PCF control to a view. These, incidentally, will probably be the only situations where a developer needs to interact with and understand views. Some restrictions also exist within the classic editor. For example, you can only specify up to two Sort Order rules within the classic view editor. Therefore, there is no good reason not to use the new view designer wherever possible and, similar to forms, expect all missing functionality to be migrated across into the new view designer eventually.\nThe subject of views typically covers an entire module as part of a customisation course, so it's impossible to discuss them in-depth when it comes to this exam.
I would recommend thoroughly reading through all of the Microsoft Docs articles on the subject of views and, in particular, familiarising yourself with the two specific developer scenarios mentioned in the previous paragraph.\nDashboards & Charts The final piece of the model-driven app puzzle - and perhaps the most important aspect, from a senior business user's perspective - is bundling together additional visualisations and aggregate information into a single layer, ideally quickly accessible from within the model-driven app itself. We've already seen how to use views to assist in this regard - by presenting a tabular list of data, often sorted and filtered accordingly - but these only form a small part of the overall picture. Extending further from this, firstly, is the concept of charts, followed not long after by dashboards.\nCharts Charts allow customisers to quickly define a range of standard visualisations that can then be bound to any entity view within the application. Set at an entity level and accessible from within the new Power Apps portal, they are created in the classic interface, as indicated below:\nIn this example, the out of the box Account by Industry chart is displayed, which renders a simple Bar chart that counts up all Account records in the system, grouped by Industry.\nAs customisers, we can create the following chart types for a model-driven app:\nColumn (Standard, Stacked, 100% Stacked), Bar (Standard, Stacked, 100% Stacked), Area (Standard, Stacked, 100% Stacked), Line, Pie, Funnel, Tag and Doughnut. Once you've decided on the type of chart to use, it will then be necessary to define:\nLegend Entries (Series): As part of this, you will also need to define the aggregation type to use for the data. The following aggregation options are available: Avg, Count: All, Count: Non-empty, Max, Min and Sum. Horizontal (Category) Axis Labels. It is possible to specify multiple series/categories, depending on your requirements.\nPretty much all of the standard entities support charts, but it is worth reviewing the precise list of compatible entities before deciding whether to use them or not.\nOnce you've created a chart, it can be added onto a dashboard (discussed shortly) or rendered alongside an existing Public View, via the Show Chart button:\nAlso, for situations where you are unable to get the chart rendering or displaying precisely how you want, developers can choose to export the underlying XML definition for a Chart and modify it accordingly. This opens up a range of additional options, allowing you to tweak how the Chart ultimately renders within the application. Your best source for discovering the types of things achievable via this method is by consulting Ulrik "The CRM Chart Guy" Carlsson's blog, which is an absolute treasure trove on this subject.\nDashboards Having lots of brilliant views and charts within your model-driven app is all well and good, but useless if they remain inaccessible or scattered across the application. For this reason, dashboards are incredibly useful and, from a model-driven app perspective, the critical thing presented to users when they first open a model-driven app. They provide a blank canvas, allowing customisers to bring together the components mentioned so far, alongside others, in a highly focused view.
There are two types of dashboards we can create - the first, from an Entity itself, are called Interactive experience dashboards, and you can see the editor for this type of dashboard below:\nFor these types of dashboards, you can add on both Charts and Streams AKA views. Also, you can define the underlying entity view they are bound to and even the period on which to return records. In short, Microsoft has designed them to provide a more immediate view of data that may require priority action or review. A good usage case for interactive experience dashboards includes a support desk or emergency contact centre dealing with incoming caseloads. As a component that is tied strictly to a single entity, it is not possible to bring in charts or views from any related entity into these types of dashboards.\nIt is also possible to create standard dashboards untied to a specific entity. Dashboards of this type have a few more options available to them, as the editor for them attests to:\nAs well as supporting views and charts, we can also add the following components to these types of dashboards:\nWeb Resources: Any Web Resource deployed to the system can be rendered from within a dashboard. Timelines: This is a custom control type, available only within the UI, that lets you see a complete history of previous activities or other notable information relating to customer or contact. You can find out more about them on the Microsoft Docs website. IFrame: You can embed custom content from any web page into a dashboard via this control type. In most cases, standard dashboards will be your preferred choice, given that they support far more options and do not face the same restrictions from a timeframe perspective.\nDevelopers will typically work with dashboards whenever there is a need to pull in content from external sources, using the two-component types referenced earlier - Web Resources or IFrames. Unlike charts, it is not possible to export the definition of a dashboard and modify its visual appearance; therefore, in most cases, you will strictly work with them from within the web interface, and then bundle them up within a solution or model-driven app.\nDemo: Creating a Model-Driven App, Form, View, Chart \u0026amp; Dashboard The best way of seeing how all these components fit together is by building an app from scratch. Take a look at the below video, which will take you through all steps from start to finish:\nThere are a lot of powerful capabilities within model-driven apps, that can often negate the need to develop alternative solutions. Having a good awareness of these topics will help Dynamics 365 and Common Data Service developers in building an effective solution quickly, without necessarily resorting to code. Tune in next week, when we\u0026rsquo;ll be taking a closer look at the other type of Power Apps - canvas apps.\n","date":"2019-12-29T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-working-with-model-driven-apps/","title":"Exam MB-400 Revision Notes: Working with Model-Driven Apps"},{"content":"Welcome to the second post in my series, focused on providing revision notes for the MB-400: Microsoft Power Apps + Dynamics 365 Developer exam. In last week\u0026rsquo;s post, we looked at some of the fundamental concepts and theory around the Power Platform, in support of the first learning objective - Create a Technical Design. 
Next, we\u0026rsquo;ll focus towards some more practical focused content, as we dive into the Configure Common Data Service topic, which has the following learning objectives:\nConfigure security to support development\ntroubleshoot operational security issues create or update security roles and field-level security profiles Implement entities and fields\nconfigure entities configure fields configure relationships Create and maintain solutions\nconfigure solutions import and export solutions manage solution dependencies With a 15-20% total weighting on the exam, there will undoubtedly be many questions testing your knowledge of fundamental customisation topics - more so when compared with previous iterations of this exam.\nAs with all posts in this series, the aim is to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform if you want to do well in this exam.\nBut let\u0026rsquo;s stop beating about the bush and jump into the above topic area in closer detail 🙂\nHow Security Works in the Common Data Service The Common Data Service (CDS) / Dynamics 365 provides an incredible amount of features to help implement even the most stringent access requirements. As such, developers can leverage this existing functionality to help speed up development and focus on building a workable solution, rather than divert time/effort towards creating a sophisticated security model out of the box. At a high-level, the following features are made available to us from a security standpoint:\nBusiness Units: These act as hierarchical containers for your data, allowing you to restrict access to records, based on a logical structure. Typically, this may be mapped based on an organisational structure or geographic locations within a global organisation. All user accounts must exist within a single business unit, and there is a requirement that all CDS environments have at least one root business unit. For further details on this topic, please refer to the following Microsoft Docs article. Security Roles: The cornerstone of security within the application, Security Roles provide users with the permissions needed to interact with CDS data. We\u0026rsquo;ll be taking a much closer look at this feature in the next section. Teams: Although typically used to sub-categorise users within a Business Unit, Teams can be assigned Security Roles, to help you simplify assigning privileges in bulk. It is also possible to configure an Access Team template, allowing end-users to quickly grant permissions to a record, without the need for any further customisation. Find out more about how to manage Teams with the following Microsoft Docs article. To find out more about access-teams and how to set them up, please consult this article instead. Field-Level Security Profiles: Sometimes, it may be necessary not only to restrict a whole entity but also the contents of a specific field. An excellent example of this involves credit card numbers. Although it is desirable to grant users the ability to enter this information on record creation, restricting access beyond this point would be highly beneficial. Field-level Security Profiles can help meet this requirement and will be discussed in more detail shortly. 
Record Sharing: All users within the application, provided they've been assigned the relevant Share privilege at a Security Role level, can grant temporary or indefinite access to records. This feature can be useful for situations where someone is going away for a few weeks, and you need to provide temporary access to a colleague covering this absence. Developers have the flexibility to use one or several of these features when building out their solution. Typically though, Security Roles will be the one area where most attention is diverted to, particularly when you are creating new entities.\nDiscussing Security Roles Users accessing the CDS / Dynamics 365, either via a model-driven app, Power BI or another mechanism, must be assigned a security role. The Security Role defines which entities, features and other components within CDS the user can manage. By default, all CDS instances come equipped with several out of the box roles, indicated below:\nAt the time of writing this post, although it's possible to add Security Roles to a solution within the new Power Apps portal, creation and modification of them takes place within the classic interface.\nSecurity role permissions can be broken down into two broad categories - record-level and task-based privileges. Also, access-level privileges will apply to certain entities. The bullet points below provide a general summary of each of these topics:\nRecord-level Privileges: Available for all entities, regardless of their ownership type, these permissions define the specific access privileges that a user can achieve against an entity. These, by and large, follow a CRUD concept for persisted storage (Create, Read, Update and Delete). Specifically, the full list of privileges is: Create: Allows the user to create new records for this entity. Read: Allows you to read the record entirely, but make no changes. Write: Allows you to modify existing records of this entity. Delete: Lets you delete the entity record in question. Append: Allows you to associate the entity with another entity record. Append To: Allows other entity records to be related to this entity record. Assign: Allows you to re-assign the record to another user. Share: Lets you share the record with other users. Task-Based Privileges: These typically allow you to carry out a specific, non-entity bound action within the application. For example, you can grant permissions for users to Create Quick Campaigns or Assign manager for a user. Access-level Privileges: Going back to Business Units, the CDS security model lets you define whether users can access records within their Business Unit, in ones directly underneath them or across the entire organisation. These granular level privileges are only available if you have configured an entity with an ownership type of User or Team. Entities with an Organisation ownership type do not support this; instead, it is necessary to grant all or no permissions. The full list of levels available for selection within the application is: Organization: Users allowed this level of access to an entity have unrestricted access across the system, regardless of which business unit the record resides within. Parent: Child Business Units: With this privilege level, users can interact with all records of the entity in their current Business Unit or any child Business Unit, regardless of depth. Business Unit: At this level, users can only interact with records in their current Business Unit.
Depth access does not apply in this scenario. For example, if I'm granted Read Business Unit level privilege, I'd be unable to read any records within a child, grandchild etc. business unit. User: The most restrictive level - only records which I own, or which are shared with me, will be accessible if I'm granted this level. Altogether, Security Roles provide a robust, granular and flexible approach to locking down areas of the application or ensuring that users only get to interact with records they own, for example. To find out more about Security Roles, please refer to the following Microsoft Docs article. Security Roles is a massive topic in and of itself, and impossible to cover in-depth as part of a single blog post.\nField-Level Security Profiles We've already touched upon just what Field-Level Security profiles are, so let's dive into some of the finer points of their functionality.\nA system administrator/customiser must create a Field-Level Security Profile which, in most cases, will be included as part of a solution. The two-step setup process involves:\nEnabling the Field Security property on each field to secure: Developers must do this on every field that requires securing, and there are no settings that require enabling at the entity level first. Certain field types, such as the Primary Key field, are not supported for field security. Updating or creating a new Field Security profile, defining your required privilege level: A profile will expose all fields enabled in the system for field security, for which you must then specify the following settings: The users or teams that the profile applies to. The actual permissions to grant to the users or teams assigned to the profile. The following permission types are available; it is possible to mix and match these privileges, to suit your requirements: Read Create Update To find out more about how to set up field-level security and profiles, you can refer to the following Microsoft Docs article.\nTroubleshooting security issues Typically, the bane of any Dynamics 365 / CDS administrator's life will be in resolving access or security-related issues, which can be tricky to navigate and fall through the net during any UAT testing. The list below provides a flavour of some of the things to watch out for when working with some of the security features discussed so far:\nReview any generated error message carefully. These will almost always indicate the missing privilege and the affected entity, thereby allowing you to modify any assigned security role accordingly. Where possible, try and base any security role off an existing, out of the box role and tailor it accordingly. There is a myriad of minimum privileges required to open a model-driven app in the first place, which can prove challenging to figure out when creating a security role from scratch. A field enabled for field security will only be visible to system administrators within the application if no corresponding field security profile has been set up and assigned within a model-driven app. Consider the impact this may have when enabling this property for the first time. Demo: Setting up a Security Role and Field-Level Security Profile To get a flavour of what is involved in setting up a new security role or field-level security profile, check out the video below:\nEntities Overview Entities are the core objects within Dynamics 365 / CDS.
From a developers perspective, they are in effect tables within the backend SQL Server database of the platform, used to store individual record data for each of our different data types within the system. Many entities, covering common business scenarios and adhering to the Common Data Model, are given to us by default. For example, the Account entity contains all the essential information we may store regarding companies that your organisation works with daily. As developers or customisers of the system, we can go in and create new entities, to cover bespoke requirements. Also, we have the ability to:\nModify the properties for an entity. For example, we can enable an entity for SharePoint Online document management functionality. Add new custom fields to an entity, to record missing information required by our organisation. Setup or modify existing relationships between entities to, for example, ensure specific field values are mapped across automatically. In short, we have a range of features at our disposal to store any potential type of information within Dynamics 365 / CDS, allowing us to leverage additional built-in features, where required.\nEntity customisations is a topic area that is impossible to cover as part of a single blog post and is typically the focus of 2-3 day courses to fully grasp. The purpose of the next few sections is to focus attention on the core concepts that you will need to have in the back of your mind when tackling the exam.\nWorking with Entities When first contemplating whether to create a new entity or not, you must make some critical decisions regarding how to configure the entity, including:\nName attributes: All entities must have a Display Name, Plural display name and Name (i.e. logical Name) value specified for them. Primary Field Details: All new entities must have a text field defined for them, that represents the value shown for each created record within the interface. We have full flexibility to specify the display and logical name value for this field, but its underlying data type (Single Line of Text) cannot be changed. Attachments/Notes: Entities created can be set to work alongside attachments and notes within the application. Take a look at the following blog post to find out more about this feature. Description: You can provide a useful explanation of what the entity is here, to better inform others as to its purpose. Entity Type \u0026amp; Ownership: Here, two critical options can be specified: Entity Type: Specifies whether it is a Standard or an Activity entity. Typically, you should select Standard for your entity, unless you wish to use it to record a specific type of activity (e.g. Home Visit, WhatsApp message etc.) that must be visible within the Social Pane (i.e. the Notes control) in the application. Ownership: As alluded to earlier, this will affect whether the entity is subject to more granular access level controls, via the Business Unit hierarchy in the system, or not. In most cases, unless you are sure the entity needs to be accessed by everyone within the organisation, you should select the User or team option. Create and Update Settings: Within this area, an entity can be configured for use alongside Quick Create forms, enabled for duplicate detection or setup for change tracking. Dynamics 365 for Outlook Settings: Here, you can enable the entity for offline use within the Dynamics 365 for Outlook application. Note that this refers to the desktop application, not the online-only Dynamics 365 App for Outlook. 
Most of the standard options for entities will be visible within the new Power Apps portal, as indicated below:\nAny other setting not visible here will instead be visible in the classic portal.\nFor existing entities, for the most part, we can carry out the following actions:\nModify the display and plural display name values. Change the description. Enable or disable additional features, such as support for Queues or Feedback. Delete the entity (custom entities only). However, some system entities may behave differently or have certain features permanently disabled/enabled.\nAlthough not a mandatory requirement, I would highly recommend carrying out all entity customisations within a solution; we will discuss further details on this topic later.\nFor more information regarding entities, you can consult the following Microsoft Docs article.\nFields (AKA Attributes) Fields are the specific attributes used to record data. In SQL Server, we would typically refer to these as columns. We can modify or create new fields based on a wide range of different field types available to us. When creating one from scratch, we must specify the following details:\nDisplay & Logical Name Type: e.g. Single Line of Text, DateTime etc. Business Requirement: Here, you can specify whether users must always specify a value for this field before saving the record. This option defaults to Not Required. Searchable: Enabling this option will allow customisers or Advanced Find users to use this field when creating views or searching for data. My understanding is that the application adds any field marked as Searchable to indexes behind the scenes, thereby speeding up any searches performed; I'd, therefore, recommend enabling this property when you anticipate frequent querying of any data. Calculated or Rollup: Specifies whether the field should be set up as a calculated or rollup field type. These field types are typically most useful for generating aggregate information or for use within a reporting solution. You can find out how to work with these field types in more detail, by reading through the calculated fields and rollup fields Microsoft Docs articles. Description: Here, you can provide a useful summary of the purpose of the field. Any value saved within this field will then get displayed to users within a model-driven app, whenever they hover their mouse over the name of the field. For each field and its corresponding type, we can fine-tune additional details relating to it. The following article summarises some of these properties in further detail. It is impossible to cover all potential scenarios within this blog post, so I would encourage you to experiment with creating all possible field types within a test environment.\nComparing 1:N, N:1 and N:N Relationships When modelling a SQL database, it is desirable to create multiple tables, with any required links implemented via FOREIGN KEY relationships. This type of modelling not only allows you to create hierarchical relationships if desired but also to ensure that your solution remains scalable. Dynamics 365 / CDS leverages the built-in functionality within SQL Server, by allowing you to define several different types of relationships within the application:\n1:N & N:1: These relationships are effectively the same, and describe either a one-to-many (1:N) or many-to-one (N:1) relationship. Regardless of their configured direction, they allow you to have a single parent record, with many related records.
An example of a system 1:N relationship is the one between the Account and Contact entities; a single Account can have many associated Contacts. N:N: Many-to-many (N:N) relationships are a bit more unusual, primarily because of the way they can be customised. They describe a situation where you must have many entity records related to many other records. An excellent example of the type of scenario where this may be useful is if you have Event and Attendee entities set up within the system; many events can have many attendees. Therefore, this would be an appropriate use case for a many-to-many relationship. As mentioned earlier, these types of relationships are an oddity, as you can configure them in one of two different ways: Native N:N: This is where you let the system wholly manage the relationship and its setup for you. Behind the scenes, the system will create a hidden intersect entity to record all N:N relationship instances. This entity will remain inaccessible and cannot be customised further. This is the default and recommended option if you do not need to record additional properties relating to your N:N relationship. Manual N:N: In this scenario, you create the intersect entity yourself. Then, you set up 1:N relationships from your two different entities to the intersect entity. As you have full control over the intersect entity, you can customise this entity further to record additional attributes. This type of N:N relationship is most suitable for advanced scenarios only. Typically, you would navigate to the Relationships tab within the Power Apps portal to create these. However, you can also create a relationship by adding a field of type Lookup to your entity. To find out more about the different types of relationships and, specifically, native and manual N:N relationships, consult the following Microsoft Docs article.\nOnce a relationship has been created, you can also specify additional options concerning field mappings. Field mappings allow for data to be quickly copied across to new records when they are created from a primary record. For example, field mappings exist between the Lead and Opportunity entities, meaning that specific fields will automatically copy across to the Opportunity entity whenever a Lead is qualified. In most cases, you will need to ensure that the source and target fields are of the same type, and a target field cannot be subject to multiple source mappings. Further details on this feature and how to begin working with it can be found on the following Microsoft Docs article.\nDemo: Entity, Field and Relationship Creation The process of creating everything mentioned so far can take some time to complete. The video below aims to demonstrate how to achieve all of this, using the new Power Apps interface:\nSolutions Overview Solutions are almost certainly a mandatory requirement when building any customisations involving Dynamics 365 or CDS. As well as acting as a container for all of the custom components you have developed, they are also useful in:\nUniquely identifying your components compared to other developers, projects or external functionality, when used in conjunction with a Solution publisher/prefix. Providing a precise and controlled mechanism for deploying out changes across multiple environments, either in their entirety or via patches.
Enabling you to work with a subset of components. Let\u0026rsquo;s now dive into the central topics of solutions that developers need to grasp as a minimum.\nManaged versus Unmanaged Solutions There are two types of solutions that Dynamics 365 / CDS supports - managed and unmanaged. By default, all solutions exist in an unmanaged state upon creation; only by exporting a solution for the first time do you have the option of making it managed. The key differences between both types of solutions are:\nUnmanaged Solutions of this type are removable from an environment, but all underlying components/changes will remain; these will have to be removed or reverted manually. Developers/customisers can freely modify components within an unmanaged solution. Recommended for exporting solutions to other development/testing environments or for storing as part of a source control system (e.g. Git). Managed Solutions of this type are removable from an environment. This action will permanently delete all underlying components, including any entity data. Typically, other developers/customisers will be unable to modify components within a managed solution, unless this has been enabled via each component\u0026rsquo;s managed properties. When importing an updated managed solution to an environment, all existing components within the current managed solution are overwritten completely. Recommended when deploying a final, thoroughly tested solution into a production environment. Developers should always be mindful of how unmanaged/managed components expose themselves in the application as part of solution layering. The diagram on the following Microsoft Docs article provides an excellent summary and visualisation of how solutions are applied within an environment. You can also use the solution layering feature to inspect how components have been affected by any solutions applied to an environment.\nSolution Publishers \u0026amp; Prefixes When creating a solution, developers and customisers should always specify a solution publisher. This helps to identify components belonging to a particular organisation or business area and avoids the bad practice of customisations or components being prefixed using new_. It is possible to create a new publisher and prefix when creating a new solution. The following details are required at a minimum:\nDisplay Name Name Prefix Option Value Prefix As a rule of thumb, a single publisher/prefix is generally sufficient for one organisation.\nTo find out more about how to customise solution prefixes, you can consult the following Microsoft Docs article.\nSolution Patches In specific scenarios, there may be a business-critical change that needs to be rushed out, without needing to perform a full update of all components within a solution. Solution Patches provide a mechanism to deploy out small changes within an overall solution, targeting only the segments that require changing. This action creates a new, separate solution, containing only the underlying components that need pushing out. Later on, all patched solutions can be \u0026ldquo;rolled up\u0026rdquo; into the master solution as part of a regular update or upgrade.\nI\u0026rsquo;ve blogged previously on how to work with solution patches; although these steps do refer to the classic interface, they will be mostly the same within the new Power Apps experience.
The following Microsoft Docs article provides more up to date information regarding their functionality.\nUsing the Solution Checker Developers building out solutions must always be aware of not mistakenly putting in place solutions which do not follow best practice approaches or contain functionality that could be subject to deprecation in future. Unfortunately, it is not always straightforward to perform this kind of in-depth analysis. Recognising this fact, Microsoft has made available a Solution Checker tool, which can automatically scan any solution and highlight the types of issues mentioned earlier. You must first install the Solution Checker from AppSource before it can be used; once installed, you can then run it against any solutions within your CDS environment or whenever you export a solution from your environment. The Solution Checker can take up to 10 minutes to run, depending on the size and complexity of the components contained within. Once executed, you will then be able to view the results and export them from the Power Apps portal, as indicated in the screenshot below:\nTo find out more about the Solution Checker tool, you can refer to the Microsoft Docs article dedicated to the subject.\nDemo: Working with Solutions To find out how to create a solution and export it from Dynamics 365 / CDS, check out the video below:\nThe scale of functional topics covered by this exam emphasises more than ever the importance of being aware of what Dynamics 365 and the Common Data Service provides to developers out of the box and why these features should be leveraged wherever possible. Hopefully, this post has familiarised yourself with these topics sufficiently. In next week\u0026rsquo;s post, we\u0026rsquo;ll dive into Power Apps and how to create both model-driven and canvas apps.\n","date":"2019-12-22T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-configuring-the-common-data-service/","title":"Exam MB-400 Revision Notes: Configuring the Common Data Service"},{"content":"Recently, I blogged about an exciting new exam for Dynamics 365 and Power Platform developers. Off the back of this, I wanted to do another series of posts providing revision notes, to help support others who may be sitting the MB-400 exam in future. Similar to my previous series all around the Power BI Exam 70-778, each post will break down the list of skills measured, focusing on the essential details regarding each topic.\nThis post and all subsequent ones aim to provide a broad outline of the core areas to keep in mind when tackling the exam, linked to appropriate resources for more focused study. Your revision should, ideally, involve a high degree of hands-on testing and familiarity in working with the platform.\nWithout much further ado, let\u0026rsquo;s jump into the first topic area of the exam - how to Create a Technical Design, which has a 10-15% weighting and covers the following topics:\nValidate requirements and design technical architecture\ndesign and validate technical architecture design authentication and authorization strategy determine whether requirements can be met with out-of-the-box functionality determine when to use Logic Apps vs. Microsoft Flow determine when to use serverless computing vs. plug-ins determine when to build a virtual entity data source provider vs. 
when to use connectors Create a data model\ndesign a data model Power Platform Technical Architecture The application previously referred to as Dynamics CRM and now, broadly as Dynamics 365, has morphed and changed considerably from a customer relationship management system to a platform of independent, yet co-dependent applications, namely:\nPower Apps (previously known as PowerApps): These come in two flavours. Traditional Dynamics CRM developers will be most familiar with model-driven apps, which utilises much of the same customisation experience traditionally offered within Dynamics CRM. These type of apps are best suited for applications that are more data-driven and need to be run via a desktop web browser, although they do come with full support for mobile devices. Canvas apps, in comparison, are geared towards mobile-first scenarios, providing app developers with a high degree of freedom in designing their apps and deploying them to a wide variety of different devices or alongside other applications within the Power Platform. Power BI: A next-generation Business Intelligence (BI) tool, Power BI provides impressive data modelling, visualisation and deployment capabilities, that enable organisations to better understand data from their various business systems. Despite having its own set of tools and languages, traditional Excel power users should have little difficulty getting to grips with Power BI, thereby allowing them to migrate existing Excel-based reports across with ease. Power Automate (previously known as Microsoft Flow): As a tool designed to automate various business processes, Power Automate flows can trigger specific activities based on events from almost any application system. With near feature-parity with traditional Dynamics CRM workflows, Power Automate is a modern and flexible tool to meet various integration requirements. The Common Data Service (CDS): Adapted from the existing database platform utilised by Dynamics CRM, the CDS provides a \u0026ldquo;no-code\u0026rdquo; environment to create entities (i.e. Objects that store data records), develop forms, views and business rules, to name but a few. Within CDS, Microsoft has standardised the various entities within this to align with The Common Data Model, an open-source initiative that seeks to provide a standard definition of commonly used business entities, such as Account or Contact. The diagram below lazily stolen lovingly recycled from Microsoft illustrates all of these various applications and how they work together with other Microsoft services you may be familiar with:\nUnderstanding how these applications can work in tandem is critical when building out an all-encompassing business application. The examples below provide a flavour of how these applications can work together, but the full list would likely cover several pages:\nIncluding a Power Automate flow as part of a CDS solution, allowing you then deploy this out to multiple environments with ease. Being able to embed a Power BI tile or Dashboard with a personal dashboard setup in a model-driven app. Embedding a canvas-driven app into Power BI, allowing users to update data in real-time. Handling Security \u0026amp; Authentication Ensuring that critical business data is subject to reasonable and, where appropriate, elevated access privileges is typically an essential requirement as part of any central business system. 
The key benefit that the Power Platform brings to the table in this regard is that it uses one of the best identity management platforms available today - Azure Active Directory (AAD). Some of the benefits that AAD can bring to the table include:\nProviding a true single sign-on (SSO) experience across multiple 1st/3rd party applications, backed up by robust administrator controls and auditing capabilities. Allowing full support for user principal or security group level controls, via role-based access controls (RBAC). Access to a wide range of various security-minded features, such as Multi-Factor Authentication (MFA), risky sign-in controls and automatic password reset capabilities, should a user account or its associated password be detected as a potential risk. When it comes to managing security or access within your various Power Platform components, this will differ, based on which application you are working with:\nFor model-driven apps and CDS, you can leverage the existing capabilities within Dynamics CRM, such as Business Units or Security Roles, to provide a structured, hierarchical security model. This functionality can be extended further, via features such as field security profiles, thereby allowing you to secure specific entity fields in a variety of different ways. Security for canvas apps will typically be enforced via the data source you are connecting with - for example, users connecting to the Common Data Service will have any security role privileges applied automatically. For other applications, you may need to consult the relevant documentation and ensure, where possible, you are using SSO to simplify this process. Canvas app developers can share their applications to any other user on the tenant. Still, it\u0026rsquo;s important to emphasise what impact this will have with any associated app resources. Power Automate flows follow similar sharing principles to canvas apps, allowing you to create team flows that others in the organisation can interact with. Security for 3rd party applications is largely dictated in much the same manner as canvas apps. Finally, Power BI includes several features to help you manage access, such as Workspaces, Apps or by simply sharing your report/dashboard to another user. Most of these features are only available as part of a paid subscription. Again, the security/privileges of any underlying data source depends upon the account used to authenticate with the underlying data. For stringent scenarios, you may need to resort to Row-level security (RLS) to ensure data is restricted accordingly. Typically, a developer will want to design any application to use CDS as the underlying data source for the solution, as the security and record restriction features afforded here will more than likely be suitable for most situations.\nComparing Logic Apps to Microsoft Power Automate Flows Confusion can arise when figuring out what Azure Logic Apps are and how they relate to Power Automate. That\u0026rsquo;s because they are almost precisely the same; Power Automate uses Azure Logic Apps underneath the hood and, as such, contains most of the same functionality. Determining the best situation to use one over the other can, therefore, be a bit of a challenge. The list below summarises the pro/cons of each tool:\nAzure Logic Apps Enterprise-grade authoring, integration and development capabilities. Full support for Azure DevOps or Git source control integration. \u0026ldquo;Pay-as-you-go\u0026rdquo; - only pay for if and when your Logic App executes. 
Cannot be included in solutions. Must be managed separately in Microsoft Azure. Does not support Office 365 data loss prevention (DLP) policies Target Audience: Developers who are familiar with dissecting structured JSON definitions Power Automate Easy-to-use development experience Can be included within solutions and trigger based on specific events within CDS Supports the same connectors provided within Azure Logic Apps Difficult to configure alongside complex Application Lifecycle Management (ALM) processes. Fixed monthly subscription, with quotas/limits - may be more expensive compared to Logic Apps. Must be developed using the browser/mobile app - no option to modify underlying code definition. Target Audience: Office 365 power users or low/no-code developers In short, you should always start with Power Automate flows in the first instance. Consider migrating across to Logic Apps if your solution grows in complexity, your flow executes hundreds of time per hour, or you need to look at implementing more stringent ALM processes as part of your development cycles. Fortunately, Microsoft makes it really easy to migrate your Power Automate flows to a Logic Apps.\nComparing Serverless Computing to Plug-ins Serverless is one of those buzz words that gets thrown around a lot these days 🙂 But it is something worth considering, particularly in the context of Dynamics 365 / the Power Platform. With the recent changes around API limits as well, it also makes serverless computing - via the use of the Azure Service Bus - a potentially desirable option to reduce the number of API calls you are making within the application. The list below summarises the pro/cons of each route:\nServerless Compute Allows developers to build solutions using familiar tools, but leveraging the benefits of Azure. Not subject to any sandbox limitations for code execution. Not ideal when working with non-Azure based services/endpoints. Additional setup and code modifications required to implement. No guarantee of the order of execution for asynchronous plug-ins. Plug-ins Traditional, well-tested functionality, with excellent samples available. Works natively for both online/on-premise Dynamics 365 deployments. Reduces technical complexity of any solution, by ensuring it remains solely within Dynamics 365 / the CDS. Full exposure to any Dynamics 365/CDS transaction. Impractical for long-running transactions/code. Not scalable and subject to any platform performance/API restrictions. Restricts your ability to integrate with separate, line-of-business (LOB) applications. Comparing Virtual Entities to Connectors The core idea of having a system like Dynamics 365 or the CDS in place is to reduce the number of separate systems within an organisation and, therefore, any complex integration requirements. Unfortunately, this endeavour usually fails in practice and, as system developers, we must, therefore, contemplate two routes to bringing data into Dynamics 365/CDS:\nVirtual Entities: Available now for several years, this feature allows developers to \u0026ldquo;load\u0026rdquo; external data sources in their entirety and work with them as standard entities within a model-driven app. Provided that this external data source is accessible via an OData v4 endpoint, it can be hooked up to without issue. The critical restriction around this functionality is that all data will be in a read-only state once retrieved; it is, therefore, impossible to create, update or remove any records loaded within a virtual entity. 
Connectors (AKA Data Flows): A newer experience, available from within the PowerApps portal, this feature leverages the full capabilities provided by Power Query (M) to allow you to bring in data from a multitude of different sources. As part of this, developers can choose to create entities automatically, map data to existing entities and specify whether to import the data once or continually refresh it in the background. Because any records loaded are stored within a proper entity, there are no restrictions when it comes to working with the data. However, this route does require additional setup and familiarity with Power Query and is not bi-directional (i.e. any changes to records imported from SQL Server will not synchronise back across). Ultimately, the question you should ask yourself when determining which option to use is, Do I need the ability to create, update or delete records from my external system? If the answer is No, then consider using Virtual Entities.\nData Model Design Fundamentals A common pitfall for any software developer working with applications, like Dynamics 365, is to miss the bleeding obvious; namely, ignoring the wide variety of entities and features available to satisfy any required business logic, such as Business Rules or Power Automate flows. When designing and implementing any bespoke data model using the CDS, you should:\nCarefully review the entire list of entities listed as part of the Common Data Model. Determine, as part of this, whether an existing entity is available that captures all of the information types you need, can be customised to include additional, minor details or whether a brand new entity will be necessary instead. Consider the different types of Entity Ownership options and how this relates to your security model. For example, if you need the ability to restrict records to specific users or business unit, ensure that you configure the entity for User or team owned entity ownership. More details on these options can be found here. Review the differences between a standard and an Activity entity, and choose the correct option, based on your requirements. For example, when setting up an entity recording individual WhatsApp messages to customers, use the Activity entity type. Digest and fully understand the list of different field types available for creation. Ensure as part of this that you select the most appropriate data types for any new fields and factor in any potential reporting requirements as part of this. Understand the fundamental concepts around entity relationships. You should be able to tell the differences between 1:N and N:N relationships, including the differences between native and manual N:N relationships. Be familiar with using tools such as Microsoft Visio and, in particular, crow\u0026rsquo;s foot notation diagrams, to help plan and visualise your proposed data model. These tips provide just a flavour of some the things to consider when designing your Dynamics 365 / CDS data model. Future posts will dive into technical topics relating to this, which should ultimately factor back into your thinking when architecting a solution.\nHopefully, this first post has familiarised yourself with some of the core concepts around extending Dynamics 365 and Power Platform. 
I\u0026rsquo;ll be back soon with another post, which will take a look at some of the customisation topics you need to be aware of for Exam MB-400.\n","date":"2019-12-15T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/exam-mb-400-revision-notes-creating-a-technical-design-with-the-power-platform/","title":"Exam MB-400 Revision Notes: Creating a Technical Design with the Power Platform"},{"content":"The introduction of Power Apps Component Framework (PCF) controls provides developers with a potentially groundbreaking way of extending Dynamics 365\u0026rsquo;s model-driven app experience, in ways that are currently impossible to do straightforwardly via Web Resources or similar. Developed using Node Package Manager (NPM) and TypeScript, they allow us to replace existing fields and views within the application with bespoke components that can achieve a wide variety of purposes. For example, you could develop a custom control that loads a customised date picker, which enforces business logic within the application by ensuring that the date selected exists within the current fiscal year. Microsoft provides a wide variety of samples, and there is a whole gallery dedicated to showcasing examples the community has developed. In short, if you are still working with Web Resources and haven\u0026rsquo;t even taken a quick look at PCF controls, I would urge you to correct this oversight at the earliest possible opportunity. Just be prepared for an experience vastly different from the norm, using tools that traditional Dynamics 365/.NET Developers may have only heard about before on the grapevine.\nI\u0026rsquo;ve been doing some work with PCF controls recently. Well, I say work in the very broadest sense. I am fortunate to work alongside developers who are able to put amazing things together at lightning speed compared to me, with me then left to tinker around the peripheral output and, more often than not, break things even further in the process 🙂 However, as part of this, I\u0026rsquo;ve gained a good grasp of some of the features and limitations involved when working with the Web API within a PCF control. Currently, developers can access the Dynamics 365 / Common Data Service Web API straightforwardly enough to:\nCreate a brand new entity record, by supplying an object containing the required record field values. Delete a record, by supplying the entity name and its globally unique identifier (GUID). Retrieve single or multiple records and, as part of this, define an OData query to specify the fields to return, sorting rules and any filtering to apply. Update a single record, by supplying the entity name and its globally unique identifier (GUID). With the functionality listed above, pretty much any type of required business logic can be accommodated and, due to the tools involved, a good underlying knowledge of how to work with OData in the context of Dynamics 365 will be necessary.
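To give a flavour of what this looks like in practice, the snippet below is a rough, hypothetical sketch of a retrieval from within a PCF control. The entity, columns and member names used here are purely illustrative assumptions on my part - adapt them to whatever your own control is binding to:
// Hedged sketch only: retrieve the five most recently created active contacts for a given account.
private retrieveActiveContacts(accountId: string): void {
    const options = "?$select=fullname,emailaddress1" +
        "&$filter=statecode eq 0 and _parentcustomerid_value eq " + accountId +
        "&$orderby=createdon desc&$top=5";

    this.context.webAPI.retrieveMultipleRecords("contact", options).then(
        (result) => {
            // result.entities is an array of plain objects, keyed by field logical name
            result.entities.forEach((contact) => console.log(contact["fullname"]));
        },
        (error) => console.error(error.message)
    );
}
Even in a basic example like this, the $select, $filter and $orderby clauses are plain OData, which is why a decent grounding in the query syntax pays off quickly.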
Particularly if you want to avoid potential errors like the one below when attempting to update a decimal field using the updateRecord Web API function:\nAn error occurred while validating input parameters: Microsoft.OData.ODataException: Cannot convert a value to target type \u0026lsquo;Edm.Decimal\u0026rsquo; because of conflict between input format string/number and parameter \u0026lsquo;IEEE754Compatible\u0026rsquo; false/true.\nIn this example, I was attempting to update an entity record based on an onEvent click call, where the value of a calendar event on the form was being grabbed and passed back to the application to update an entity record. The function to control this was defined as follows:
private onEventClick(args: ClickEventArgs){
    // Render an editable title element for the selected calendar event
    var editableTitle = $(this.calendar.renderEditableEventTitle(args.element, args.resourceId));
    editableTitle.on("change", () => {
        // Build an update object and push the new value back via the Web API
        let newEntity : Entity = {};
        newEntity[this.varEffortCol.name] = editableTitle.val();
        this.context.webAPI.updateRecord(this.targetEntity, args.resourceId, newEntity);
        editableTitle.off("change");
    })
}
The issue came down to the data type specification within TypeScript - namely, the value was stored in a variable as a string, surrounded by double quotes, e.g. \u0026ldquo;90.0\u0026rdquo;. Although this is a valid number, it is still a string value, and the Web API therefore rejects the update. While this is handy from a data management perspective, it does mean that some additional effort is required to ensure that a) the user is only able to enter a valid number in the first place and b) the appropriate data conversion is performed within the code. The first of these requirements can be achieved by simply ensuring that the input element created on the control is configured only to accept numbers:
var editElem = jElement.find("input[type=number]");
if(editElem.length == 0){
    editElem = $(document.createElement("input"));
    editElem.attr("type", "number");
}
Then, it\u0026rsquo;s just a case of performing the appropriate data conversion at the right instance. So, we can rewrite the above function as follows:
private onEventClick(args: ClickEventArgs){
    var editableTitle = $(this.calendar.renderEditableEventTitle(args.element, args.resourceId));
    editableTitle.on("change", () => {
        let newEntity : Entity = {};
        // Convert the string value to a number before sending it to the Web API
        newEntity[this.varEffortCol.name] = Number(editableTitle.val());
        this.context.webAPI.updateRecord(this.targetEntity, args.resourceId, newEntity);
        editableTitle.off("change");
    })
}
Nice and easy - just the way I like it! 🙂\nPCF Controls provide an exciting new way of extending the interface of model-driven apps, often allowing you to make the application look completely unrecognisable from its out-of-the-box appearance. And, unlike Web Resources, they are bound directly to existing fields/views within the application, providing a much more tightly integrated experience. However, with some of the controversial changes recently put in place relating to API limits, there is a real question mark as to how far you can utilise them, without you ending up with a huge bill in the process. Certainly, if your PCF control is doing on average 4-5 unique Web API calls every time a record is accessed, and your Dynamics 365 online deployment has hundreds/thousands of seats, this will likely breach any limits in place.
Just because you can do something, doesn\u0026rsquo;t necessarily mean that you should ;) But, provided you can navigate through these waters successfully, the potential with PCF controls is truly astronomical and is only going to be improved further over time.\n","date":"2019-12-08T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dealing-with-cannot-convert-a-value-to-target-type-edm-decimal-error-in-power-apps-pcf-control/","title":"Dealing with 'Cannot convert a value to target type 'Edm.Decimal' Error in Power Apps PCF Control"},{"content":"Significant changes are happening with Microsoft Project at the moment. For a long time, this mammoth of a desktop application has seen very minimal modifications to the core experience. It\u0026rsquo;s one of those tools that project managers could use brilliantly. Or, if you are anything like me, sink hours into instead, as you figure out all of its nuances and unique features, compared to other products in the Microsoft Office suite. Now, in tandem with Microsoft Ignite earlier this year, Microsoft has announced that their new Project for the web experience is now generally available for customers worldwide. The primary selling point of this product is its entirely refreshed user interface and deep integration with the Power Platform, as outlined by Jared Spataro in the above announcement post:\nBuilt on the Microsoft Power Platform, Project enables you to quickly connect to the apps and services you already use, and to create custom desktop and mobile experiences to meet the specific needs of every project team. Easy to use tools make it simple to create automated workflow processes that streamlines compliance and increases efficiency.\nThe product also sets the groundwork for the eventual replacement of the current Project Service Automation (PSA) V3 experience with one identical to the new Project for the web experience, but deeply embedded within the Common Data Service (CDS) \u0026amp; Office 365 and fully extensible via tools such as Power Apps, Power Automate and Power BI.\nHaving had an opportunity to play about with the new app for a few weeks now, in this week\u0026rsquo;s blog post, I want to share my thoughts relating to it. In true wild west fashion, let\u0026rsquo;s dive in and see what\u0026rsquo;s good, bad and - frankly - borderline confusing and nonsensical with this new product.\nThe Good Microsoft has succeeded in taking a beast of a desktop application and making it simple and easy to use within a web browser. Project managers can, therefore, focus more on building out their project plans instead of faffing around with multiple views, settings and other beastly configuration properties that exist within the current desktop version of the app. The Roadmaps feature shows a lot of promise and, particularly if your organisation is managing your projects as part of programmes according to PRINCE2 principles, acts as a great way of handling this engagingly. A much-touted benefit of the new solution is its native integration alongside the Common Data Service (CDS). In practice, this means that all Project data is stored within a CDS database, utilising some of the entities available within PSA.
At the moment, Project for the web uses the following entities from PSA: Bookable Resource Organizational Unit Project Project Parameters Project Task Project Team Member Rating Model Rating Value Work template By leveraging existing functionality, it makes it a lot easier for those familiar with PSA to start working with the innards of Project for the web, with very little training required. The ability to connect to Azure DevOps projects within the tool is a significant boon for IT-focused organisations. To achieve this integration, users can get the app to deploy out a pre-built Power Automate flow. All you need to do is provide sign-in details, and the Project app will handle the rest for you! Alongside the Project app, Microsoft also provides a model-driven app that can be used to interact with the data, using an experience familiar to most Dynamics CRM / Dynamics 365 users. The functionality exposed here is relatively limited (more on this shortly), but it does provide a tried and tested way of working with Project data. The screenshot below provides an example of how this app looks when being used: The Bad Currently, although the new Project app is leveraging some PSA entities and features, there are some glaring omissions. For example, we are unable to configure skills or skill ratings for a resource at the time of writing this point. This deficiency is a minor irritant, which will likely disappear over time with updates/releases to the product. Resource management for the new app must take place within the model-driven app discussed above. It would be nice if we could do this from within the Project app itself. The transition route from working with the old-style Project Web App (powered by SharePoint) to the new experience appears, based on my research, to be a little bit unclear. For example, there is no precise mechanism to move any existing Projects across automatically so that they are stored in the CDS and exposed within the new Project app. Over time, we might see some tools released to help achieve this requirement but, unless I\u0026rsquo;m missing something, it looks like some manual effort is necessary to migrate your Projects across to the new experience. The ability to create Project groups - a collection of people involved in delivering your project - is a sensible idea in principle. However, as this functionality is reliant on Office 365 Groups under the hood, it could be open to abuse, leading to literally hundreds of different groups suddenly popping up on a tenant. Having the ability to control this will no doubt please Office 365 tenant administrators enormously. The Ugly Although the Azure DevOps integration is a nice touch, the fact that it creates a Power Automate flow within the default environment and that it is not subject to control via a solution could cause some difficulties when managing this in the long-term. It will also mean that a) any calls back to CDS will count against your daily/monthly API call limits and b) your monthly allocation of Power Automate flows. The Power Automate license type will determine the precise limits that will exist in practice. These facts could be better-signposted when setting up this integration for the first time. Perhaps I am just super stupid, but some features don\u0026rsquo;t appear to work for me at all currently. For example, I would expect that any Resources I create within the Project model-driven app to surface automatically within the Project for the web app. 
Based on my testing, this does not appear to be the case. Hopefully, there is just some property or configuration element that is causing this. Currently, the majority of support and training articles for the new Project experience are available solely on the Office 365 support website. Which is fine\u0026hellip;but considering that the majority of CDS and PSA documentation resides on the Microsoft Docs website, it does make it a little challenging to locate help and support for the product. The documentation also currently blends information from both the new and traditional experience, making it somewhat confusing. Hopefully, over time, the documentation will move across to the Microsoft Docs site, so it can benefit from features such as integration with GitHub to raise documentation issues directly with the product team. Although the new Project for the web sits on top of the CDS, you have no choice over which CDS environment that data is ultimately stored within. When configured, the application automatically installs the base solution to the individual (default) environment within your tenant. Users/administrators are not presented with an option (currently) to select one of their other environments to connect Project for the web up to. Data relating to all Projects, and their associated components, will, therefore, remain isolated in a separate database, with no apparent means of getting this data into your desired CDS database or to eventually migrate this all across in a straightforward way. For me, this represents one of the primary reasons why organisations currently using PSA or are interested in implementing it should stay far away from Project for the web, at least until the middle of next year. Verdict The new Project for the web application provides a unique window into the future evolution of Project and also PSA, as a modern, cloud-first application, that leverages the full benefits of the Power Platform underneath. However, I get the feeling that it\u0026rsquo;s not quite there yet. Perhaps there was a desire to get the product released as fast as possible, to ensure early feedback is received and factored into future development. There are no problems with this and, frankly, to see an organisation like Microsoft adopt this kind of \u0026ldquo;fail fast\u0026rdquo;, Agile approach is admirable. But, as mentioned already, there is a fundamental issue with the product if you sit in one of the following camps:\nYou are a current PSA user, setup with your own Dynamics 365 / CDS environment, interested in leveraging the new functionality available in Project for the Web. You are an organisation contemplating the adoption of PSA, but want to ensure you are using the new Project for the Web experience, to ensure you are operating a version that will be actively developed and supported in future. At this stage, my view is that it is impossible to recommend organisations fitting into these categories to start using Project for the Web. I believe the current setup of the solution from a CDS perspective is going to cause lots of issues from organisations who bite the bullet this early when we move into 2020. For now, by all means, spin up a trial and get a feel for the new experience, but don\u0026rsquo;t invest a lot of time just yet on starting to migrate fully across to it; particularly if you hope to fully leverage current PSA or other Power Platform functionality in depth.\nWhat are your thoughts on the new Project for the Web? 
Are you a non-stupid human, unlike myself and have been able to get some of the things mentioned in this post working successfully? Let me know your thoughts in the comments below!\n","date":"2019-12-01T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/the-new-project-for-the-web-the-good-the-bad-and-the-ugly/","title":"The New Project for the Web: The Good, the Bad and the Ugly"},{"content":"When implementing a business system within an enterprise organisation, there are typically several hoops that you have to jump through to assure stakeholders that the system meets all relevant Information Security (InfoSec) requirements. While this process can be often tortuous (I have many battle scars to prove this!), it is a necessary and worthwhile exercise to complete. By ensuring that any new system is developed using best practice approaches and has been configured prudently from a security standpoint will, ultimately, reduce business risk in the long-term. The critical battle around this is ensuring that security does not become an impediment to implementing a new, better business system; instead, you should ensure that all security concerns are addressed up-front as part of any architecture or design.\nWhile all of this sounds good for a bespoke developed system, challenges can emerge when implementing a public cloud system, such as Dynamics 365 Sales, Service etc. or its on-premise equivalent, Dynamics 365 Customer Engagement. When deploying these systems, we will have very little control and capability to put in place required security controls or to meet requirements such as performing an in-depth penetration test. Without the ability to meet these requirements or provide any relevant evidence, projects of this nature may fail to get through \u0026ldquo;the front door\u0026rdquo; initially and obtain formal sign-off as part of any change management procedure.\nMicrosoft, as a vendor, has typically been very proactive in ensuring that the platforms and business systems they offer via the public cloud can address some of the challenges raised so far. Perhaps less well-known is just where these resources can be found and precisely what resources are available to help validate that a system like Dynamics 365 is secure and compliant with various standards. As I discovered recently, the Service Trust Portal is the place to go for all of this, providing a cornucopia of documentation, certificates and reports to make any InfoSec professional squeal with joy. As part of this week\u0026rsquo;s blog post, I wanted to dive in and highlight some of the documents and resources available on this that may prove useful if you are trying to get a Dynamics 365 project off the ground.\nI Take No Credit for Finding This A few weeks ago, Rob Nightingale, a superstar Dynamics 365 community ninja, asked me whether I knew of any resources relating to penetration testing for Dynamics 365 Online. In typical CRM Chap fashion, I was utterly unable to advise. Rob pushed on and was able to find the Service Trust Portal and a lot of the links highlighted below. So all due credit goes to him for finding these resources - thanks, Rob!\nDiving Deeper into the Service Trust Portal\u0026hellip;Or Not My original intention with this post was to dive deep into some of the available documents on the portal, extrapolating some useful bite-size chunks to bat away any general InfoSec questions you may face around Dynamics 365. 
Unfortunately, though, access to any of the documents listed on this website is subject to accepting a Non-Disclosure Agreement (NDA). Anyone can browse the list of available materials on the site without agreeing to this, however. Therefore, to avoid any potential NDA entanglements, this post will provide a summarised list of the most pertinent documents that Dynamics 365 Customer Engagement professionals may be most interested in grabbing a copy of. All information and associated links are correct at the time of writing this post:\nD365 Security and Compliance Guide - This general guide provides an overview of how Dynamics 365 meets various security and compliance standards. This guide is perhaps your best first destination, before diving deeper into anything else. Dynamics 365 ISO 27001 Certificate - Validates that the product is compliant with the ISO 27001 Information Security Management standard. Dynamics 365 ISO 27018 Certificate - This certificate confirms that the product is compliant with the ISO 27018:2014 code of practice for protection of personally identifiable (PII) information types within a public cloud product. Dynamics 365 for Customer Engagement PCI DSS AoC - This Attestation of Compliance (AoC) document confirms that the product meets all the requirements concerning the Payment Card Industry Data Security Standard (PCI DSS), version 3.2.1 Dynamics 365 for Customer Engagement - Penetration Testing and Security Assessment 2019 - This document contains the results of Microsoft\u0026rsquo;s 2019 external penetration and security testing assessment for Dynamics 365 for Customer Engagement. Microsoft Azure and Dynamics ISO 22301 Certificate - Demonstrates that the product is compliant with the ISO 22301:2012 standard for business continuity management. Dynamics and, specifically, Microsoft Azure, also complies with various governmental security standards. Below is a sample list; I would urge you to check the website in closer detail if you are looking for something more catered to your locality or requirements:\nCyber Essentials + Certificate: This validates that Microsoft Azure has implemented the required cybersecurity controls to meet the needs of the UK governments Cyber Essentials + scheme. You can find out more about this scheme in my blog post on the subject. Microsoft Dynamics 365 CRM - ENS Certificate - This certificate confirms that the product is adequate when meeting the measures and controls defined within the Spanish National Security Framework (Esquema Nacional de Seguridad). Dynamics 365 (CRM Online) - IRAP Report on Compliance - This document confirms that the product is compliant with Australia\u0026rsquo;s Information Security Registered Assessors Program (IRAP) Plain English Translation: What does this all mean? With a lot of acronyms, numbers and complex terminology thrown about in this post, it can be difficult to translate all of this into something more understandable. At the risk of failing miserably, I will now try and provide some brief bullet points that summarise the security and compliance benefits of Dynamics 365 Online and Dynamics 365 Customer Engagement:\nAdequate controls and procedures are put in place to ensure the system appropriately protects PII types and sensitive cardholder details, with a range of features available in support of this objective, such as field security profiles. The system provides adequate safeguards and security, backed up by documented procedures that are subject to annual audits. 
As the system is penetration tested annually, organisations can satisfy any InfoSec-related concerns, as there is visible proof the system is routinely tested and any underlying vulnerabilities are addressed accordingly. The platform for Dynamics 365 Online (i.e. Azure) has the appropriate controls and procedures in place to ensure business continuity in the event of a disaster recovery scenario or similar. Regardless of the region or country where your Dynamics 365 online system resides, there is a high probability that Microsoft has adopted or implemented accreditations in line with any legislation or cybersecurity schemes within your locality. These steps, therefore, provide the necessary assurance that the system can be utilised the world over. All of the previous points can be validated and continually evaluated via an openly accessible platform, namely, the Service Trust Portal. Hopefully, this post has been useful in highlighting some valuable resources to ease any compliance pressures relating to Dynamics 365 online/Dynamics 365 Customer Engagement - ideally so that you can instead focus on implementing the system, to the benefit of the organisation in question.\n","date":"2019-11-24T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/the-service-trust-portal-and-dynamics-365-a-helping-hand-to-meet-security-compliance-needs/","title":"The Service Trust Portal and Dynamics 365: A Helping Hand to Meet Security \u0026 Compliance Needs"},{"content":"I\u0026rsquo;ve blogged quite frequently in the past about exams and certifications for Microsoft Dynamics CRM and its successor product Dynamics 365. There\u0026rsquo;s been a lot of change and, for the most part, improvements in this space. Back in the days of Dynamics CRM 2016, there were up to 4 exams that you could get your teeth into; these days, this number has trebled and now covers a wide array of differing functionality across Dynamics 365. A noted absence amongst this list has been a dedicated exam devoted towards extending and developing bespoke solutions using Dynamics 365. The last such exam for this topic was released for Dynamics CRM 2013 nearly seven years ago. Since then, there has been no mechanism for developers to certify and validate their skills in developing custom plug-ins, Actions, working with the SDK Web API and, perhaps most importantly, working with new features such as the PowerApps Component Framework (PCF). These are all essential skill areas to have an awareness of when scoping and implementing a complex Dynamics 365 deployment.\nTherefore, after what feels like a small eternity, I was pleased to hear that a new developer exam is currently out in beta release. Exam MB-400: Microsoft Power Apps + Dynamics 365 Developer covers all of the areas you would hope for in a developer exam and, with MB-200 under your belt, will also grant you a shiny new Microsoft Certified Associate certification - excellent! I hope to sit MB-400 very soon, but, in the meantime, let\u0026rsquo;s take a closer look at the areas covered by the new exam. I\u0026rsquo;ll also highlight and suggest some recommendations that may be of assistance if you are preparing to sit the exam yourself.\nReviewing the Skills Measured List As is typical for a Microsoft exam, a wide variety of subject areas are tested and are granted different weightings when totalling your final score.
For this exam, the skills measured list is as follows:\nCreate a Technical Design (10-15%) Validate requirements and design technical architecture\ndesign and validate technical architecture design authentication and authorization strategy determine whether requirements can be met with out-of-the-box functionality determine when to use Logic Apps vs. Microsoft Flow determine when to use serverless computing vs. plug-ins determine when to build a virtual entity data source provider vs. when to use connectors Create a data model\ndesign a data model Configure Common Data Service (CDS) (15-20%) Configure security to support development\ntroubleshoot operational security issues create or update security roles and field-level security profiles Implement entities and fields\nconfigure entities configure fields configure relationships Create and maintain solutions\nconfigure solutions import and export solutions manage solution dependencies Create and Configure PowerApps (10-15%) Create model-driven apps\nconfigure a model-driven app configure forms configure views configure visualizations Create Canvas Apps\nconfigure a Canvas App develop complex expressions Configure business process automation (10-15%) Configure Microsoft Flow\nconfigure a Flow configure actions to use CDS connectors develop complex expressions Implement processes\ncreate and configure business process flows create and configure business rules Extend the user experience (15-20%) Apply business logic using client scripting\nconfigure supporting components create JavaScript or Typescript code register an event handler use the Web API from client scripting Create a PowerApps Component Framework (PCF) component\ninitialize a new PCF component configure a PCF component manifest implement the component interfaces package, deploy, and consume the component use Web API device capabilities and other component framework services Create a command button function\ncreate the command function design command button triggers, rules, and actions edit the command bar using the Ribbon Workbench modify the form JavaScript library dependencies Extend the platform (15-20%) Create a plug-in\ndebug and troubleshoot a plug-in develop a plug-in use the Organization Service optimize plug-ins for performance register custom assemblies by using the Plug-in Registration Tool create custom actions Configure custom connectors for PowerApps and Flow\ncreate a definition for the API configure API security use policy templates Use platform APIs\ninteract with data and processes using the Web API optimize for performance, concurrency, transactions, and batching perform discovery using the Web API perform entity metadata operations with the Web API use OAuth with the platform APIs Develop Integrations (10-15%) Publish and consume events\npublish an event by using the API publish an event by using the Plug-in Registration Tool register a webhook create an Azure event listener application Implement data synchronization\nconfigure and use entity change tracking configure the data export service to integrate with Azure SQL Database create and use alternate keys So as you can see, a lot to consume and potentially learn about, even for the most seasoned of Dynamics CRM/365 professionals.\nGeneral Recommendations The exam is testing developers on their understanding of, what I would traditionally class as, core customisation topics - primarily when it comes to simple entity customisations, the systems security model and also working with solutions. 
It is imperative, therefore, not to neglect these areas as part of your learning. Also, because these topics are underneath the Configure Common Data Service (CDS) header, I would assume that familiarity with the new customisation experience will be mandatory in securing a passing grade. A general refresher in these areas would not go amiss. Both canvas and model-driven Power Apps are subject areas tested in this exam, with an additional focus on being familiar with the broad range of functions available within a canvas app. Likewise, Power Automate flows (AKA Microsoft Flow) are a subject area that you will need to dive into, alongside their distinct expression language. Understanding the differences and use cases between a Power Automate flow and Logic Apps will be crucial for this exam. As a general rule of thumb, Logic Apps caters for more complex integration needs, is billable based on actual usage, and allows you to leverage Software Development Lifecycle (SDLC) / Application Lifecycle Management (ALM) features much more straightforwardly. For the first time for a developer exam, knowledge of TypeScript is now a requirement; primarily in the context of developing form functions and also in creating a PCF control. While TypeScript is very much C#/JScript-like in its general structure, getting your head around the fundamentals and - more crucially - TypeScript\u0026rsquo;s differences from C#/JScript may be a challenge. This next one has to be a first for a Microsoft exam for me - it looks as if knowledge of community tools and, specifically, Scott Durow\u0026rsquo;s Ribbon Workbench is now a mandatory requirement to pass the exam! 🙂 I\u0026rsquo;m relatively sure most developers will have used this tool at some stage during their travels; if not, then going over the basics of this tool wouldn\u0026rsquo;t go amiss I\u0026rsquo;d say. A lot of the core focus areas for this exam are in well-versed areas for CRM developers, such as plug-ins, the Web API and custom actions. While this may reduce any revision pressures, I would caution against complete complacency. Things are changing all the time with the SDK, meaning that regular refreshers are always needed. For example, were you aware of the deprecation of the Xrm.Page object, used primarily within JScript form functions? What about the new API limits that are being gradually rolled out? Or, finally, how about the fact that the SDK is now available within NuGet exclusively? Complacency is never an acceptable excuse when working with a cloud application system, so make sure you put the appropriate steps in place to revise any changes made to the SDK within the past 18 months and to keep abreast of any new changes in future. Taking the previous point as read, one notable exception from the familiar list is Web Resources. While these are still readily available to use as a solution, their use is becoming less needed, as canvas Power Apps, for example, start to fill the void in their stead. Their omission from the above Skills Measured list, though, would suggest they are not a subject worthy of exam revision. The integrations area of the exam is very much testing your ability to bring Microsoft Azure into the equation. Somewhat surprisingly, amongst this is a requirement to know all about the Data Export Service too, a topic area that has been covered on the blog previously.
Again, this emphasises the importance of having broad familiarity with the overall Microsoft \u0026ldquo;stack\u0026rdquo; and how you can slot this into your Dynamics 365/Power Platform solution. Remember, if you are sitting the exam while it is still in beta, you will not get your results immediately after completing the exam. You will likely need to wait a few months until MB-400 has come out of beta before receiving your pass/fail notification. Ultimately, the exam is very much focused towards the Power Platform as an entire solution, as opposed to merely Dynamics 365 (as a model-driven app) and its corresponding database (CDS) as a solution in of itself. If you have so far not looked seriously at canvas Power Apps or Flow, this exam gives you an excellent opportunity to dive into areas that are only going to become more critical in the years ahead.\nAs mentioned earlier, I plan to sit this exam within the next few months, so be sure to keep an eye on the blog for any follow-up posts relating to MB-400. In the meantime, what are your thoughts on MB-400? Does it sound too harsh or a cakewalk compared to MB2-701? What\u0026rsquo;s your approach to studying for a new exam like this? I would love to hear your thoughts in the comments below!\n","date":"2019-11-17T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/the-return-of-the-developer-exam-evaluating-microsoft-exam-mb-400/","title":"The Return of the Developer Exam: Evaluating Microsoft Exam MB-400"},{"content":"Typically, the most costly - and frankly most misunderstood - part of deploying a website these days involves the setting up of a Secure Socket Layer/Transport Layer Security (SSL/TLS) certificate for your website. Previously, you would want to deploy these out if your site involves any form of secure authentication or the entering of personal data. This requirement is necessary because a certificate verifies that the website is real, has not been compromised or has a dubious origin. These days, for anyone chasing search engine rankings or wanting to get rid of any pesky Not Secure messages within your web browser window, getting a full grasp on procuring an SSL/TLS certificate becomes a mandatory requirement in ensuring your website receives appropriate attention via the globes largest search engine. The problem is, though, that these certificates cost money - anywhere up to hundreds of £\u0026rsquo;s in some cases. For situations where you need to suppress any Not Secure messages within your browser window, this seems like an unnecessary cost for something that should, in my view, come as standard when hosting your website.\nWith all this in mind, I was pleased to find out this week about a new preview feature for App Service plans - the ability to set up a managed TLS certificate, at no extra charge. Long overdue in many people\u0026rsquo;s minds, I am sure, this represents a positive step forward in allowing customers to reduce their costs and meet their limited objectives when maintaining a commercial website. It\u0026rsquo;s also effortless to set up, which ticks a massive box in my book.\nAs the above article provides excellent setup instructions for this new feature, I wanted the focus of today\u0026rsquo;s post to be on highlighting the key points relating to this feature, so you can quickly analyse whether this is something that you wish to put in place for your current Azure websites Any certificates generated will be valid for six months at the time of issue. 
Microsoft will automatically renew and apply a new version of the certificate when it is close to expiring.\nDigiCert issues all certificates, and they are secured using a SHA256 signature hash algorithm.\nMicrosoft provides no environment restrictions for its usage; this, therefore, means you can freely set a managed certificate up for your development, testing and production sites.\nThis feature does not support wildcard certificates or naked domains (e.g. www.mydomain.com or test.mydomain.com are allowed, but mydomain.com is not).\nMicrosoft does not automatically bind the certificate to your domain after it is generated; you must still do this manually.\nThe feature is not available on the Free or Shared tier plans.\nYou cannot export the certificate after it has been generated.\nAt the time of writing, there do not appear to be any PowerShell cmdlets or Resource Manager template samples available relating to this feature. Expect these to be available in due course.\nThe feature is only compatible with custom domains that have been configured using a CNAME record. Attempting to secure a domain set up via an A record will produce the following error message:\nRemember as part of all this that the feature is in preview. It may be subject to change or removal. Therefore, I would caution against its use within production environments.\nConclusions or Wot I Think Providing potential customers of your organisation with assurance that your website is secure when, for example, they are submitting their contact details to you is of paramount importance. Unfortunately, Google's and other web providers' enforcement of stricter requirements in this area, while laudable, has, I think, caught a lot of organisations out and also introduced unnecessary costs and confusion into the mix. Clearly, if you are a huge multi-national corporation processing financial transactions, then a more expensive SSL/TLS certificate will be desirable; but less so if you are a small company with a basic contact form on your home page. The fact that Microsoft is now following the lead of other hosting providers, in giving their customers a free, "no-strings-attached" TLS certificate, is a positive move forward and an important step towards securing all websites on the web today. Also, organisations can then leverage this for the benefit of their customers, internet search rankings and - perhaps most importantly - their yearly IT budgets. 🙂\n","date":"2019-11-10T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/overview-of-microsoft-azure-app-service-managed-certificates/","title":"Overview of Microsoft Azure App Service Managed Certificates"},{"content":"Persistent readers of the blog may recall I did a post recently, where I outlined the only available route to migrate Microsoft Azure Cloud Solutions Provider (CSP) subscriptions from one Azure Active Directory (AAD) tenant to another. The way described in the post is perhaps not the most ideal, mainly due to the number of steps involved, but it is nonetheless a viable route that "works" - I have the battle scars to prove it. 😅 This week, I wanted to expand upon this further by highlighting the sort of things to watch out for and plan for as part of a multi-tenant CSP subscription migration.\nNot all Azure resources support migration. Period.
This statement is one of the sad facts about any Azure resource migration, whether it\u0026rsquo;s to another subscription in the same tenant or to somewhere else further afield; some resources just can\u0026rsquo;t be moved at all. Microsoft maintains a full list of all resources, indicating their appropriate status, which is updated regularly. In my own experience, the main issue I had was around Metric Alerts (activitylogalerts) and Azure AD Domain Services resources. The first of these is more of a minor inconvenience, given how you can straightforwardly recreate these. However, the second example is perhaps far more debilitating, depending on the complexity of your network. Be sure to consult this list closely for each resource you are hoping to move, to flag up any issues along the way.\nPrepare for some Azure Key Vault Related Troubles Azure Key Vault is a fantastic solution to consider if you need to manage secure credentials across multiple environments and grant permissions to managed identity principles within your AAD tenant. It\u0026rsquo;s also a requirement when working with the Dynamics 365 Customer Engagement Data Export Service. For most migration scenarios, you\u0026rsquo;ll be pleased to hear that there are no barriers relating to this, thereby allowing you to move this resource and all related secrets, certificates etc. with no issues. However, you will need to run the following PowerShell script, using the Az module, after migration, to ensure that the Key Vault resource is bound to the new AAD tenant successfully:\n#Connect and login to Azure Connect-AzAccount #Update the below values for your own environment $kvName = \u0026#34;my-keyvault\u0026#34; #Name of your Key Vault resource $subscriptionID = \u0026#34;7779a36e-36bb-44d8-bfcc-24d54536999f\u0026#34; #GUID of the subscription where the Key Vault resource resides #Target the correct subscription Select-AzSubscription -SubscriptionId $subscriptionID # Get your key vault\u0026#39;s Resource ID $vaultResourceId = (Get-AzKeyVault -VaultName $kvName).ResourceId # Get the properties for your key vault $vault = Get-AzResource –ResourceId $vaultResourceId -ExpandProperties # Change the Tenant that your key vault resides in $vault.Properties.TenantId = (Get-AzContext).Tenant.TenantId # Access policies can be updated with real applications/users/rights so that it does not need to be done after this whole activity. # Here we are not setting any access policies. $vault.Properties.AccessPolicies = @() # Modifies the key vault\u0026#39;s properties. Set-AzResource -ResourceId $vaultResourceId -Properties $vault.Properties Failing to do this will result in you having no permissions whatsoever to access the Key Vault properties.\nAn additional thing to point out is that there is a hard restriction on moving any Key Vault resource configured for disk encryption for an Azure Virtual Machine (VM). In this particular scenario, the only recourse is to disable any encryption, delete the Key vault resource, move your VM to its new tenant/subscription and then proceed to recreate anything. This sequence will take considerably longer than running the above PowerShell script and would require thorough testing to achieve successfully.\nFor Main Course, how about a portion of Azure Data Factory Fiddling? Similar to Azure Key Vaults, Azure Data Factory can be configured to bind with any underlying AAD tenant closely it is associated with, via a managed identity. 
A key benefit of this is then allowing you to grant privileges for the data factory to access other resources on the tenant such as, for example, an Azure Key vault secret for a database connection string. As a managed identity is an object existing at the AAD tenant level, this is not factored in as part of any resource migration and is something that requires recreating post-migration. The quickest way of doing this is by running the following PowerShell script, which again uses the Az module:\n#Connect and login to Azure Connect-AzAccount #Update the below values for your own environment $rgName = \u0026#34;my-rg\u0026#34; #Name of the ADF\u0026#39;s resource group $dfName = \u0026#34;my-adf\u0026#34; #Name of the ADF resource $dfLocation = \u0026#34;UK South\u0026#34; #Location of the ADF resource $subscriptionID = \u0026#34;7779a36e-36bb-44d8-bfcc-24d54536999f\u0026#34; #GUID of the subscription where the Key Vault resource resides #Target the correct subscription Select-AzSubscription -SubscriptionId $subscriptionID #Recreate the managed identity via the Set-AzDataFactoryV2 cmdlet Set-AzDataFactoryV2 -ResourceGroupName $rgName -Name $dfName -Location $dfLocation Then, once the managed identity exists on the tenant, you can then proceed to re-assign any privileges that were also present in the old tenant.\nSQL Server AAD Admin Strangeness Another feature which seeks to blend a given Azure resource more closely with an AAD tenant is the ability to access your Azure SQL databases via an AAD login. Having this in place can help significantly in simplifying login procedures, enforcing security and reducing the risk of a potential data breach. A necessary step to setting this up is assigning a single user account in the tenant with global administrator privileges across the SQL Server. Once defined, you can then use this account to start setting up your accounts or, indeed, a single security group containing all of the accounts and shared privileges required.\nA strange issue occurs though when moving SQL Server resources that have been configured as described above. Following any migration, any defined Azure SQL administrator will remain configured at the instance level, instead of being removed entirely. While this has the benefit of ensuring that any existing accounts set up on the old tenant still work as expected, you will still need to replace this with a new AAD admin account in the new tenant. If you are also using Security groups to manage access, then an additional requirement will involve a) ensuring that this security group exists in the new tenant and b) dropping \u0026amp; creating the database account manually. The second step is a definite requirement to prevent any potential login issues, as I assume the binding between SQL Server account and AAD Security group is done based on the GUID of the object on the AAD tenant.\nStill not using Azure Templates? Why the hell not?!? Perhaps the greatest boon you can give yourself before starting any migration process is first to extract and define the Azure resources you intend to move, into an Azure Resource Manager Template. Indeed, in the case of recreating any Metric Alerts or in sorting out some of the fiddly problems mentioned with Key Vault/Data Factory, an Azure template deployment would solve all of this in a pinch. 
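If you have never generated a template before, the Az PowerShell module makes it easy to capture a starting point from an existing resource group. The following is a minimal sketch only - the resource group name is a made-up example, and a template exported this way will usually need some tidying up (removing hard-coded values, parameterising names and so on) before it is fit for repeated deployments:
#Connect and login to Azure
Connect-AzAccount
#Hypothetical resource group name - replace with your own
$rgName = "my-rg"
#Export the current state of the resource group into an ARM template file in the working directory,
#including the current values as parameter defaults
Export-AzResourceGroup -ResourceGroupName $rgName -Path ".\$rgName-template.json" -IncludeParameterDefaultValue
#Optionally, validate the exported template against the same resource group before relying on it
Test-AzResourceGroupDeployment -ResourceGroupName $rgName -TemplateFile ".\$rgName-template.json"
From there, the exported file can be checked into source control and refined over time.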
Also, you can begin to more easily leverage the benefits that solutions like Azure DevOps can offer your organisation, by allowing you to collaborate over template deployments, automate the validation \u0026amp; testing of any templates and ensure that these are pushed out continuously into your production environments.\nHopefully, some of these pointers will prove useful to those contemplating a similar type of migration. If anyone has any further tips or questions, feel free to leave a comment below!\n","date":"2019-11-03T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/lessons-learned-from-migrating-azure-csp-subscriptions-across-multiple-tenants/","title":"Lessons Learned from Migrating Azure CSP Subscriptions Across Multiple Tenants"},{"content":"In today\u0026rsquo;s busy and frantic world, organisations should welcome any solution that can reduce manual data entry and improve accuracy. These concerns are perhaps the most prudent areas that Artifical Intelligence (AI) solutions can address. If utilised correctly, they can take most of the work out of menial tasks, allowing individuals to instead focus their effort on evaluating the output from an AI process, fixing any errors where appropriate. Regardless of the type of AI solution implemented, a time saving will occur and, as we all know, time is money. 🙂 Another common challenge in any workplace today is in getting colleagues and partners to engage with a business system. This objective is especially true for salespeople, who are typically more guarded than others when using an application to store information about the types of deals or customers they work with daily. Whether this is because they are afraid of sharing their secrets or client base with others or through a lack of tailored training can be challenging to say. Still, regardless, it presents an uphill challenge for anyone wanting to implement a Customer Relationship Management (CRM) system.\nThere are lots of exciting things happening within the Dynamics 365 Sales space at the moment, and one feature that seeks to aid in the challenges outlined earlier is the capability to scan business cards and auto-populate records from any scanned image. And, surprisingly, it works really well. In today\u0026rsquo;s post, I wanted to dive a little deeper into this new feature, outlining the things you need to get started with it, some details regarding its setup and also a few thoughts and observations relating to my experience with it so far.\nPre-Requisites Getting started with the Business Card Scanner control is relatively easy, but there are a few things you need to have in place first:\nYour environment must be on the latest release - that is, the 2019 release wave 2, which landed earlier this month. The feature is only currently available for tenants in the North America or EMEA regions. You can determine this by reviewing the URL for your organisation - if it is in the format https://myorg.crm.dynamics.com or https://myorg.crm4.dynamics.com, then you are good to go! You must enable the AI Builder preview models feature on your tenant. You can locate this option by navigating into the Power Platform Admin center (preview), browsing to the Settings -\u0026gt; Features area and ensuring you tick the appropriate option: Finally, users who wish to take advantage of the feature must be assigned the Common Data Service User security role or a role with equivalent privileges. Hey\u0026hellip;where is my Business Card Scanner control?!? 
By default, the Business Card Scanner control appears on the Contact, and Lead Quick Create Forms, with no intervention required. An initial hurdle to overcome around this, specifically for situations when you are upgrading to the 2019 release wave 2, is that the control does not appear to load at all if you meet all of the pre-requisites:\nThis is because, by the default, the scanner sets itself to a non-visible state. System customisers will, therefore, need to go onto the appropriate forms within the classic interface and set it to visible for it to start rendering correctly:\nUnder the Hood Behind the scenes, there exist three components that power the scanner:\nThe solution uses two fields: Business Card (businesscard): By default, this field stores the scanned business card image, as a base64 string. BusinessCardAttributes (businesscardattributes): This field stores properties relating to the control. By default, it is left empty. Finally, the AI Builder Business Card control, a PowerApps Component Framework (PCF) control, is the bit that renders the scanner in the first place. It can be customised to map any of the following, detected field types to any possible field on the entity: Full Name First Name Last Name Job Title Mobile Phone Business Phone Fax Email Company Name Website Department Full Address Address City Address Postal Code Address Country You can also specify the field location of where to store the scanned image\u0026rsquo;s base64 string, the default image that is loaded and informational text that appears on the control for end-users. Something to flag here is the fact that the solution chooses to store image data by default. This circumstance may be something you review or, at the very least, have a process in place to manage in the longterm so that you can keep your online storage costs down. For example, you could have a Microsoft Flow that blanks out the image field 60 days after the record is created or, instead, saves the image out into a SharePoint or Azure Blob Storage location.\nTeething Problems A strange issue I came across when running the scanner the first time was when I received a No capacity available, please check with your administrator error. Fortunately, there is a way around this for now, which involves setting up a free trial to the premium AI Builder features for your tenant. The following post on the Dynamics Communities site outlines the issue and solution in detail. Fortunately, it looks like Microsoft are targeting a fix for this over this weekend (26th/28th October 2019), so hopefully, this will not remain a problem for much longer.\nClosing Thoughts It\u0026rsquo;s fair to say that the functionality on display here is impressive and, in most of the scenarios I have tested, the scanner can very ably get the job done. Errors that occur are rare and generally caused by situations where the business card layout is unusual. That\u0026rsquo;s not to say that are some drawbacks with all of this. The primary one for me, at the time of writing, is the region restrictions. Not being able to start using this impressive functionality with UK tenants is a crucial concern and one which, I hope, is addressed soon. I noticed as well, particularly on cellular connections, that it takes the system some time to process any business card it receives. This slowdown is understandable, given that it needs to both upload the photo in the first place and then send off the information to an external web service. 
My main worry with this, though, is that if it takes longer for you to achieve all of this compared with just entering the data manually, then it defeats the whole point of the solution in the first place. Finally, it would also be nice to see the underlying API exposed out for access, as part of the SDK, as I can imagine many situations where it will be necessary to scan many dozens or hundreds of cards at once. In closing, though, any solution that can encourage a salesperson to use the system and make their job easier should be openly embraced. Also, for me at least, this feature represents an excellent application of AI in addressing a business need and in helping to ensure consistency/accuracy when capturing relevant sales prospect data.\nWhat are your thoughts on the new Business Card Scanner feature? Any tips or tricks you feel like sharing? Feel free to leave a comment below if so!\n","date":"2019-10-27T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/getting-started-with-the-dynamics-365-sales-business-card-scanner/","title":"Getting Started with the Dynamics 365 Sales Business Card Scanner"},{"content":"After a long hiatus, I'm pleased to be back with a brand new tutorial video for Dynamics 365 Customer Engagement! This time, we'll take a closer look at how to set up a custom pricing solution within the application, using a C# plug-in. For organisations that are looking to tailor out-of-the-box pricing for Quotes, Opportunities, Orders and Invoices, a custom pricing plug-in provides a straightforward and powerful mechanism to achieve this requirement. See how easy it is to get started with them by watching the video below:\nThis is an accompanying blog post to the video, which includes a few relevant links and further reading topic areas that may be of interest.\nGitHub All code samples and the presentation slides from the video can be found on my GitHub page.\nUseful Links Visual Studio 2019 Community Edition Download\nSet up a free 30-day trial of Dynamics 365 Customer Engagement\nC# Guide (Microsoft Docs)\nSource Code Management Solutions\nAzure DevOps - Free for up to 5 users and my recommended choice when working with Dynamics 365 Customer Engagement BitBucket GitHub Microsoft Docs Tutorials/Articles\nUse custom pricing for products Sample: Calculate Price plug-in Power Platform Requests limits and allocations Blog Posts\nA Few Observations on Using Custom Pricing Plugins Alongside Project Service Automation Automatically Populate Extended Amount Field When Using Custom Pricing (Dynamics CRM/365 for Enterprise) Implementing Custom Calculations for Sales Entities (Dynamics CRM/Dynamics 365 for Enterprise) Mapping Product Attributes to Quote/Order/Invoice Line Items (Dynamics 365 Customer Engagement) Want to learn more about Dynamics 365 Customer Engagement? Check out my previous videos using the links below:\nDynamics 365 Customer Engagement Deep Dive: Creating a Basic Custom Workflow Assembly Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Plug-in Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Jscript Form Function Have a question or an issue when working through the code samples? Leave a comment below or contact me directly, and I will do my best to help.
Thanks for reading and watching!\n","date":"2019-10-20T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/implementing-a-custom-pricing-engine-within-dynamics-365-customer-engagement/","title":"Implementing a Custom Pricing Engine Within Dynamics 365 Customer Engagement"},{"content":"Azure Resource Manager templates have been a regularly discussed feature on the blog, perhaps reflecting the experiences and issues I have had working with them over the past year. For example, we\u0026rsquo;ve seen:\nHow to perform pre-deployment validation of your templates, to ensure they are valid and any issues are resolved as part of your continuous integration (CI) process. Some of the pitfalls involved when working with application settings and App Service resources, and why you need to factor in the former when authoring your template. The steps required to specify Streaming Units for a Stream Analytics Job resource. Despite hurdles along the way, I am a huge proponent of their usage, and this is a subject area I would strongly encourage anyone working with Azure to study carefully. Indeed, if you have a desire to standardise and automate your development cycles, then Azure Resource Templates will - arguably - be essential towards achieving this.\nAs another week began, I found myself having an incredible feeling of déjà vu, with another pesky error message to deal with on my desk. This time, the error was relating to an Azure Resource Manager template deployment and, specifically, the following error message, involving an Azure SQL Server instance configured with a failover group:\n{ \u0026#34;id\u0026#34;: \u0026#34;/subscriptions/9466fa60-58e2-4d62-bd5c-290b2707ca35/resourceGroups/my-rg/providers/Microsoft.Resources/deployments/azuredeploy/operations/ABB4A5948E69802B\u0026#34;, \u0026#34;operationId\u0026#34;: \u0026#34;ABB4A5948E69802B\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;provisioningOperation\u0026#34;: \u0026#34;Create\u0026#34;, \u0026#34;provisioningState\u0026#34;: \u0026#34;Failed\u0026#34;, \u0026#34;duration\u0026#34;: \u0026#34;PT0.1417258S\u0026#34;, \u0026#34;trackingId\u0026#34;: \u0026#34;31c4ff19-3064-42ca-bce5-e212459faa75\u0026#34;, \u0026#34;statusCode\u0026#34;: \u0026#34;BadRequest\u0026#34;, \u0026#34;statusMessage\u0026#34;: { \u0026#34;error\u0026#34;: { \u0026#34;code\u0026#34;: \u0026#34;FailoverGroupCreateOrUpdateRequestReadOnlyPropertyModified\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;The create or update failover group request body should not modify the read-only property \u0026#39;properties.partnerServers\u0026#39;.\u0026#34; } }, \u0026#34;targetResource\u0026#34;: { \u0026#34;id\u0026#34;: \u0026#34;/subscriptions/9466fa60-58e2-4d62-bd5c-290b2707ca35/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-sqlserv/failoverGroups/my-sqlservfg\u0026#34;, \u0026#34;resourceType\u0026#34;: \u0026#34;Microsoft.Sql/servers/failoverGroups\u0026#34;, \u0026#34;resourceName\u0026#34;: \u0026#34;my-sqlserv/my-sqlservfg\u0026#34; } } } A bit of background may be useful at this stage - I had just finished moving the failover group resource, and its associated components, from another subscription. As part of this, some resource types unsupported for resource move (such as Alerts) had to be deleted and recreated using an already authored Resource Manager template. 
This error, therefore, surfaced during this redeployment, as Azure plainly had some issues when it came to validating and updating the already existing failover group.\nI recalled having similar issues with failover groups earlier in the year, under similar circumstances. I also remembered that earlier resource moves across subscriptions sometimes could cause problems, particularly when it comes to underlying Resource IDs and the like. Although this wasn't the case upon inspection with the Azure Resource Explorer (a must-have tool for this purpose), I did observe that attempts to modify the failover group would also fail. For example, attempts to increase the grace period for data loss to two hours resulted in a similar error to the one above. The only logical conclusion was that something went wrong during the subscription migration. This state of affairs left only a single, nuclear option at my disposal - to delete and then re-create the failover group via a new template deployment. With fingers and toes crossed throughout, I was relieved to see that this worked as expected.\nNow, perhaps very evidently, taking such a drastic step will cause some element of disruption. While deleting a failover group does not pause any database replication settings, any applications interacting with the database via the failover group endpoint will be disrupted. The DNS de-registration on Microsoft's side may also take some time to complete, meaning that any redeployment may have to wait. In my case, I was able to redeploy within 5-10 minutes of removing the failover group, which was not too bad. There is also the risk that someone else may sneak in and grab the failover group DNS record within this short window. What I am getting at is this - think carefully about the impact that the deletion will have on your broader infrastructure and, ideally, schedule any removal during an approved maintenance window.\nAzure subscription migrations can be a bit of a hassle to complete, as this example clearly shows. This circumstance reinforces, in my view, the importance of having Azure Resource Manager templates in place for all aspects of your Azure estate. They allow you to quickly deploy your Azure environment out for any purpose, while also ensuring resource groups contain the expected components for your solution. They also cover scenarios like subscription migrations, where, regrettably, not all types of resources can be lifted and shifted to a new location. If you're still in two minds, get onto the Azure portal, export a template from each of your resource groups and experiment further from there. I am sure that, after a short period, you will begin discovering the multitude of benefits they can deliver.\n","date":"2019-10-13T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/dealing-with-failovergroupcreateorupdaterequestreadonlypropertymodified-errors-in-a-azure-resource-manager-template-deployments/","title":"Dealing with \"FailoverGroupCreateOrUpdateRequestReadOnlyPropertyModified\" Errors in a Azure Resource Manager Template Deployments"},{"content":"Another week, another error message to contend with! Typically, there is no way of predicting how long a full diagnosis and resolution of a troublesome fault will take. Often, if you don't follow the mantra of keeping things simple, you might end up spending many fruitless hours resolving an ultimately simple issue.
In other cases, you may need to rely on inferred knowledge from other areas to assist you on your way.\nThis last scenario was undoubtedly the case earlier this week when I was working with Azure Data Factory V2. I was attempting to generate a Resource Manager template for my completed data factory project. Once created, this can then be used as part of an Azure DevOps Pipeline to deploy the data factory out into different environments. The project in question was a series of data movement tasks between two Azure SQL Server databases. The process behind generating Resource Manager templates is usually pretty straightforward:\nFirst of all, make sure you have associated an Azure DevOps/GitHub repository to your data factory. After authoring your data factory and moving into your master branch, select the Publish All button. This action will simultaneously deploy out all of your data factory resources to the data factory itself and also generate the Azure deployment scripts into a separate branch, called adf_publish. As you can see, a pretty straightforward process, which I like! Unfortunately, the other day, this was not quite working as expected for me, and I was getting the following message instead:\nMessage: Unable to process template language expressions for resource \u0026lsquo;/subscriptions/229c164c-0f4e-4a2d-90bc-8264758a3e0a/resourceGroups/my-rg/providers/Microsoft.DataFactory/factories/my-adf/datasets/MyTable\u0026rsquo; at line \u0026lsquo;1\u0026rsquo; and column \u0026lsquo;37433\u0026rsquo;. \u0026lsquo;Unable to parse language expression \u0026lsquo;dbo].[MyTable\u0026rsquo;: expected token \u0026lsquo;LeftParenthesis\u0026rsquo; and actual \u0026lsquo;RightSquareBracket\u0026rsquo;.\u0026rsquo;\nRather annoying! Now, as mentioned earlier, sometimes you can guess a resolution to a problem by bringing knowledge in from other areas. In this case, because the error mentions square brackets, I knew that square brackets were reserved characters for arrays within JSON - the underlying language for an Azure Resource Manager template. This restriction, therefore, could cause problems if a square bracket is misused. Upon closer inspection of my factory\u0026rsquo;s data sources, I noticed that they contained square brackets for each object name, e.g. [dbo].[MyTable]. Now, when working with Transact-SQL, the use of square brackets is widespread and, in most cases, recommended; particularly if the name of your objects are using reserved keywords. A good example of this is the word ORDER, which may be useful in describing a table of customer orders — all fine, except that this is also a reserved keyword for sorting data. Therefore, the use of square brackets in this scenario is an absolute requirement to ensure your table is referenced correctly as part of any query.\nSo, bearing all of this in mind, the solution is relatively straightforward - remove the square brackets from each data source and, optionally, any reference to the underlying table schema. For this example, therefore, [dbo].[MyTable] would need to be changed to just MyTable. Upon fixing this, the Publish All button should start working as expected.\nWithout a shadow of a doubt, I was fortunate with this particular error. Without being familiar with the underlying mechanics of JSON, I imagine this could have descended into an agonising period of frustration and headbanging to resolve. Fortunately though, thanks to this and the rather obvious clue that the error message provides, the problem was relatively quick to resolve. 
This meant I could move on quickly to getting my stuff deployed out, instead of languishing in a repo and being of no use to anyone. 🙂\n","date":"2019-10-06T00:00:00Z","image":"/images/ADF-FI.png","permalink":"/resolving-unable-to-parse-language-expression-error-in-azure-data-factory-v2/","title":"Resolving 'Unable to parse language expression' Error in Azure Data Factory V2"},{"content":"Microsoft Azure has many different features that are continually changing and are almost impossible to memorise in detail. Particularly when it comes to management and administration of your Azure estate, Microsoft provides numerous, handy options that can often avoid the need to raise a support request. One of these options is the ability to straightforwardly migrate Azure Subscriptions to an entirely different Azure Active Directory (AAD) tenant. There are numerous scenarios where this may become necessary:\nAs a consequence of a merger/acquisition, you need to look at consolidating Azure subscriptions into a single AAD tenant. After developing a solution for a potential customer, you need to then distribute this to them, as is, so that they can inherit billing and hosting responsibilities. As part of rearchitecting your AAD estate, you wish to move specific resources into dedicated tenants for development, test and production environments. In this context, Microsoft class this as a \u0026ldquo;billing ownership transfer\u0026rdquo; and the steps involved, for Pay-As-You-Go (PAYG), Visual Studio, MPN and Enterprise Agreement subscriptions are discussed in length on the Microsoft Docs website. In most cases, you can migrate resources straightforwardly. But you must be aware that specific resource configurations for App Service, Virtual Machine and Virtual Network resources may be restricted. You can consult the migration checklist to find out what is and isn\u0026rsquo;t supported. For anything that isn\u0026rsquo;t supported, the only viable migration pathway is to recreate the resources from scratch.\nA question from my side - and one which I was grappling with recently - is how you go about this migration if you are working with Cloud Solutions Provider (CSP) subscriptions. For the uninitiated, CSP is a licensing model designed for Microsoft Partners wanting to resell Office 365 and Azure consumption to their end customers directly. As part of this, the partner takes ownership over billing and management responsibilities, instead of Microsoft directly. The key benefit of this model is in the cost efficiencies it can introduce for customers and the ability for partner\u0026rsquo;s to integrate licensing alongside any professional or managed services offering. Whereas in the past, Office 365 licenses were the only thing on offer as part of this, partners can now provision and bill customers for all of their Azure consumption as well - very helpful! Unfortunately, there does not appear to be a clear and easy way to lift and shift resources in these subscriptions, in the same manner, referenced earlier - not so lovely. ☹\nSo how can you migrate Azure CSP subscriptions to another Azure Active Directory tenant? After some grappling and discussions with Microsoft, the only way in which this appears to be possible by doing the following:\nWithin the tenant that contains your CSP subscription, provision a new PAYG or trial subscription. For the tenant that you are migrating all resources into, provision a new CSP subscription. 
For all resource groups within the CSP subscription, move all resources into the new PAYG/trial subscription. As part of this, review the migration checklist mentioned above and also consider validating the move operation first before kicking it off. Once all resources are in the new subscription, move the subscription into the new tenant by following the instructions in this article. Repeat the steps outlined in 3, but this time, to move the PAYG subscriptions resources into the new CSP subscription. After verifying the migration has completed successfully, cancel the PAYG/trial subscription and any CSP subscriptions on the old tenant. All these steps may take several hours to complete.\nThings to watch out for Similar to how the transfer types mentioned earlier operate, the great news about all of this is it will lead to no immediate downtime for your resources. For example, Virtual Machines will remain accessible, and as too will any websites deployed to an App Service. However, the migration will not be completely smooth, so be sure to be aware and keep an eye out for the following:\nYou\u0026rsquo;ll need to make sure you have global administrator privileges on both tenants to ensure the temporary, PAYG subscription transfers successfully. Certain types of resources, such as Azure Key Vault and those that rely on role-based access control (RBAC) or managed identities, will almost certainly break during any migration. Review this in advance of any movement and, ideally, perform a test migration to verify any post-migration steps that may be necessary. During the transfer, the old subscription owner is added into the new tenant as a guest user. This account may require removal after any migration has completed successfully. If using a PAYG subscription, be aware that there will be some Azure billing that will occur against the credit card associated with this subscription. While the amounts may be tiny for smaller environments, you should at least estimate the amount of time the migration will take compared with the hourly, total billing for these resources. Provisioning a free trial, if at all possible, will naturally avoid the need to do this. If you\u0026rsquo;re using Azure DevOps to manage your Azure deployments, you\u0026rsquo;ll need to reconfigure your builds and pipelines to reference the new subscription location. They often say where there is a will, there is a way. 🙂 As this example hopefully shows, moving Azure CSP subscriptions is a wholly achievable task, but one that you will need to spend some time on to follow through to fruition.\n","date":"2019-09-29T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/moving-microsoft-azure-cloud-solution-provider-csp-subscriptions-to-another-tenant/","title":"Moving Microsoft Azure Cloud Solution Provider (CSP) Subscriptions to Another Tenant"},{"content":"Project Service Automation (PSA), an add-on solution sitting on top of Dynamics 365 Customer Engagement, is a great solution. If properly configured, it can deliver a lot of value for organisations frequently delivering projects, with a desire to integrate their sales cycle alongside this more tightly. A key challenge I find, though, concerning its adoption, is that a lot of PSA\u0026rsquo;s available functionality is not understood or documented very well. Although there is an entire Microsoft Docs website for PSA, it can be challenging joining the dots together. 
Also, some of the referenced functionality will refer to earlier versions of the product and, subsequently, deprecated features. This state of affairs can make it challenging determining what the \u0026ldquo;correct\u0026rdquo; steps are in achieving a particular task or whether it is safe to use a documented feature at all! Nevertheless, with this barrier overcome, I can see the solution delivering numerous benefits if implemented and utilised correctly.\nAn example of an area where it can be challenging to figure out the correct way forward is when working with mark-ups and discount values. Most organisations, from a sales perspective, will typically operate these on a percentage basis, derived from a fixed cost value, which may be subject to fluctuation. Fortunately, PSA provides the ability for organisations to apply discount/mark-ups in several ways. Firstly, particular role categories can have this defined. Extending this further, you can also set the same properties for specific resources and entire organisational units as well. However, achieving all of this is easier said than done. Several steps are required to get this working as intended within the application, to the extent to which some backend customisation will also be required. This post will aim to provide instructions on how to achieve all of this, from start to finish.\nFirst things first, you will need to follow the instructions outlined in this article to create a separate solution, which will act as a container for the customisation steps that follow. Whether you do this using the \u0026ldquo;classic\u0026rdquo; interface or via the new PowerApps portal is entirely up to you. All that matters is that you add on the following component to the solution, from the Role Price Markup entity:\nOnce added, navigate to the form in question. You will need to look at adding one or more fields, depending on your particular needs:\nIf you would like to specify mark-ups/discounts for specific resources, add on the Resource (msdyn_bookableresource) field. If instead, you wish to apply this for a role, the Role (msdyn_resourcecategory) field will need adding on instead. Finally, to work with mark-ups/discounts tied to a specific organisational unit, then the Resourcing Unit (msdyn_organizationalunit) field will be the one you need to select. For this example, we will look at working with Roles only and so, therefore, apply the following form customisations:\nNow we can look at making some configuration changes to PSA\u0026rsquo;s project parameter record. This modification is a requirement to ensure any information captured within the Role Price Markup entity is factored adequately into any calculations. Navigate first to the Markup Based Pricing Dimensions tab on your Project Parameter record and create a new record within the sub-grid. Then, populate it with the following values:\nDimension Name: msdyn_bookableresource This will indicate to PSA the entity to apply the mark-up pricing to. Type: Markup-based Applicable to Cost: Set this to either Yes or No, based on your intended usage scenario. Applicable to Sales: Set this to either Yes or No, based on your intended usage scenario. Cost Priority \u0026amp; Sales Priority: These fields will only be enabled if you selected Yes to any of the previous two options. Set the priority based on your preferred preference in which the mark-up evaluates against others defined within the application. 
The record should resemble the below if configured correctly:\nAn additional step will be required to avoid some potentially nasty errors further down the line - specifically, the following one when working with the Project entity:\nBecause of the myriad of relationships within PSA, and the differently named lookup fields involved, the application can not always figure out which field name to use when \u0026ldquo;grabbing\u0026rdquo; any mark-up percentages. In this example, because the lookup field on the Project Team Member entity to the Resource entity has a logical name of msdyn_bookableresourceid, PSA needs informing of this fact to get everything working as expected. Fortunately, it is pretty straightforward to get this configured. Press the Related tab on your Pricing Dimension record, select the Pricing Dimension Field Name entity and define this new record with the following properties:\nWith all necessary configuration steps completed, all that\u0026rsquo;s left is to set up all Role Price Markup records on each price list. We can then test to confirm everything is calculating correctly. For example, let\u0026rsquo;s assume we have a sell price list that defines a price for a Dynamics 365 Functional Consultant at £85 per hour. We want to look at applying a mark-up of 50% to this as part of any calculations. We would, therefore, navigate to the Role price markups tab on our Price List and define a new Role Price Markup record, like so:\nNow, whenever a Line Detail record involving this role calculates (using the Import from Project Estimates button), the relevant Unit Price values have the appropriate mark-up specified:\nAn additional, bonus feature with this functionality is the ability to specify discounts as well via the same mechanism. All you would do is define a minus percentage instead for this to take effect. So, by altering our original Role Price Markup record like so:\nWe can observe the following, reverse behaviour on the same Quote Line Detail record from earlier, after refreshing this from our Project record:\nI don\u0026rsquo;t know about you, but I always like little easter eggs like this. 😂 And with that, we can see how it is possible to start working with mark-ups and discounts successfully within PSA!\nConclusions or Wot I Think This type of functionality can be useful for several different business scenarios, particularly when it comes to building out your sales price lists. Rather than performing manual price calculations, you can instead duplicate an existing cost price list and then define any appropriate mark-ups within there. As someone who is utterly hopeless at maths, I breathe a sigh of relief at that thought. 🙂 Other potential usage scenarios for this feature may include:\nBeing able to define different mark-up/discount percentages for resources spread across different organisational units. At Project Contract stage, you can look to apply additional mark-ups/discounts, based on any changing circumstances during the project, such as a promotional offer or an adjustment based on inflation. By tying mark-up percentages to a specific resource, you can ensure this always remains adjustable, as the resource becomes more experienced or capable. For example, it may be desirable to charge more for a software developer with decades of experience and certifications, as opposed to a developer who has just completed their apprenticeship. 
Mark-up percentages provide a potential route for factoring this in, while still ensuring that your base unit price for each resource remains the same. As this example potentially illustrates, the key benefits of using an \u0026ldquo;off the shelf\u0026rdquo; solution like PSA is that you can quickly utilise a lot of functionality that would take ages to recreate. However, it is essential to remember that there will still be some work required to get your system working for your needs. This feature, in particular, is one that requires some time to understand fully and to configure correctly, with definitely some trial and error involved. Hopefully, after reading this post, you will have everything you need to go away and start playing about with it yourself.\n","date":"2019-09-22T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/working-with-mark-ups-discounts-in-project-service-automation-v3-x/","title":"Working with Mark-ups/Discounts in Project Service Automation (V3.x)"},{"content":"Being able to track when data is added, modified or removed from a SQL Server database is desirable for several potential business scenarios. Where things start to get tricky is in identifying a solution that runs efficiently and also only takes effect against data that has actually changed. Whereas a decade ago, you would have to configure a complex solution involving tables, functions and other sorts of wizardry, these days we are truly blessed. This state of affairs is thanks, in no small part, to the capabilities on offer as part of SQL Server change tracking. There\u0026rsquo;s a good chance if you are a seasoned SQL Server professional that you are aware of this feature. But if you are coming into the world of SQL Server for the first time, then the change tracking feature may tick a few boxes for you. It provides an easy to configure and scalable solution to detect which tables and columns have changed over time, allowing you then to kick off any appropriate business logic. In this week\u0026rsquo;s post, I\u0026rsquo;m going to provide an overview of how to set up change tracking functionality within an Azure SQL Database. From there, we\u0026rsquo;ll take a look at how you can then leverage this as part of an example scenario.\nSetting up your Database Change Tracking must be enabled at the database level; it is not a server-level property that is cascaded down to all databases. As part of this, you will need to consider the following:\nDo you want SQL Server to manage the removal of change tracking history automatically? How long do you want any changes tracked within the database? The answers to these questions will be dictated largely by the total amount of data in your database. Having change tracking enabled to retain data indefinitely could have adverse impacts on your storage costs over time. In most cases, I would suggest that a retention period of between 7 and 14 days, working on the basis that you are querying for changes at least once per day. This length will provide you with an appropriate buffer, should any problems occur. The following script will enable Change Tracking for the CT-Test database, with a 7-day retention period:\n--Enable Change Tracking, with auto-cleanup and retention period of 7 days. 
ALTER DATABASE [CT-Test] SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON); GO You can then verify that this has taken effect successfully by running the following query and confirming you get a result back:\n--Verify that the database has been enabled for Change Tracking SELECT d.name, ctd.is_auto_cleanup_on, ctd.retention_period, ctd.retention_period_units_desc FROM sys.change_tracking_databases AS ctd INNER JOIN sys.databases AS d ON ctd.database_id = d.database_id; GO If you are working with a SQL Server database project, then you would instead add the following lines into your .sqlproj file, within the node:\n\u0026lt;IsChangeTrackingOn\u0026gt;True\u0026lt;/IsChangeTrackingOn\u0026gt; \u0026lt;ChangeTrackingRetentionPeriod\u0026gt;7\u0026lt;/ChangeTrackingRetentionPeriod\u0026gt; It\u0026rsquo;s also worth noting that you have several different frequency options when defining your retention periods, such as days (as indicated above), hours and minutes.\nConfiguring your Tables Enabling change tracking at a database level will not automatically apply this to each of your tables (thankfully!). You must instead enable these individually. For example, let\u0026rsquo;s assume we have the following simple table:\nCREATE TABLE [dbo].[MyTable] ( [ID] INT IDENTITY(1,1) PRIMARY KEY NOT NULL, [Name] NVARCHAR(50) NULL, [Birthday] DATETIME NULL, [FavouriteCake] NVARCHAR(50) ); A very concise ALTER TABLE statement will enable change tracking for this table:\nALTER TABLE [dbo].[MyTable] ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON); GO An important thing to point out here is that your table must have a Primary Key defined for it. If not, then you will likely get the following error when creating your table:\nAn additional option you have available here is to track which columns have changed as part of an UPDATE operation. While this may introduce additional storage/performance overhead into your solution, it can be useful for particular auditing scenarios.\nQuerying Changed Data The CHANGETABLE() table function is a central component when working with change tracking. Through this, you can determine what data changed at a specific version of its history. SQL Server provides two additional functions, to help you investigate this further:\nCHANGE_TRACKING_CURRENT_VERSION(): This will return the database current change tracking version. CHANGE_TRACKING_MIN_VALID_VERSION(): This will return the earliest possible change version that is queryable, for the supplied table object. Availability is dictated by the retention and AUTO_CLEANUP settings defined earlier. Therefore, you can use the following query to check for changes to the MyTable object, based on supplying the current change tracking version:\nDECLARE @changeTrackingMinimumVersion INT = CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID(\u0026#39;dbo.MyTable\u0026#39;)); SELECT * FROM CHANGETABLE(CHANGES [dbo].[MyTable], @changeTrackingMinimumVersion) AS CT; When using the CHANGETABLE() function, you must ensure that it is aliased properly, as indicated above. Otherwise, you may start getting errors similar to the below:\nWhile this is all well and good, there is no way to persist the current change tracking version for a particular table. This fact is important, as you may have to rely on this value as a reference point as part of a synchronisation task. What\u0026rsquo;s more, some additional work is required to facilitate this process from start to finish. 
A potential solution to get around this involves the following steps:\nSetting up a table that records the last Change Tracking version for the table, based on when data was last synchronised. Building a stored procedure that will update the change tracking table with the latest version. Defining a query that will grab the latest changes and move the data into a staging table (for example purposes, this will exist in the same database) Writing a MERGE statement that will combine the changes from the staging table into the new table and insert/update/delete records accordingly. With a bit of tinkering, we can achieve all of the above by executing the following SQL statements\n--First, we setup our change tracking logging table --The constraints are optional, but can help to enforce data integrity --We also initialise the table based on the current change tracking version in the DB CREATE TABLE [dbo].[ChangeTrackingVersion] ( [TableName] VARCHAR(255) NOT NULL, CONSTRAINT CHK_TableName CHECK ([TableName] IN (\u0026#39;MyTable\u0026#39;)), CONSTRAINT UC_TableName UNIQUE ([TableName]), [SYS_CHANGE_VERSION] BIGINT NOT NULL ); INSERT INTO [dbo].[ChangeTrackingVersion] VALUES (\u0026#39;MyTable\u0026#39;, CHANGE_TRACKING_CURRENT_VERSION()); GO --Next, we create our staging tables and table where data will be merged into --In this case, we want to aggregate cake preferences based on birthday and remove any personally identifiable information --These would typically sit in a seperate database and be moved across via SSIS or Azure Data Factory CREATE TABLE [dbo].[Staging_MyTable] ( [ID] INT NOT NULL, [Birthday] DATETIME NULL, [FavouriteCake] NVARCHAR(50), [SYS_CHANGE_OPERATION] NCHAR(1) NOT NULL, CONSTRAINT [CHK_MyTable] CHECK ([SYS_CHANGE_OPERATION] IN (\u0026#39;U\u0026#39;,\u0026#39;I\u0026#39;,\u0026#39;D\u0026#39;)) ); CREATE TABLE [dbo].[ReportingMyTable] ( [ID] INT NOT NULL, [Birthday] DATETIME NULL, [FavouriteCake] NVARCHAR(50) ); GO --Finally, we create the stored procedure that will be called as part of the query/merge operation CREATE PROCEDURE [dbo].[uspUpdateChangeTrackingVersion] @CurrentTrackingVersion BIGINT, @TableName varchar(50) AS BEGIN UPDATE [dbo].[ChangeTrackingVersion] SET [SYS_CHANGE_VERSION] = @CurrentTrackingVersion WHERE [TableName] = @TableName END; GO --With all objects created, we can now run the following query to start synchronising data. --First, run a one-off query to get all current records moved across. INSERT INTO dbo.ReportingMyTable SELECT ID, Birthday, FavouriteCake FROM dbo.MyTable; --Verify table results SELECT * FROM dbo.ReportingMyTable; --Then, make some additional data changes UPDATE dbo.MyTable SET [Birthday] = \u0026#39;1989-10-1\u0026#39; WHERE ID = 2; GO INSERT INTO dbo.MyTable VALUES\t(\u0026#39;Mary\u0026#39;, \u0026#39;1991-10-11\u0026#39;, \u0026#39;Banana\u0026#39;), (\u0026#39;Jude\u0026#39;, \u0026#39;1978-09-25\u0026#39;, \u0026#39;Pannacotta\u0026#39;); GO DELETE FROM MyTable WHERE ID = 3; GO --Now, we can run the synchronisation scripts. 
--First, import all data into the staging table DECLARE @lastChangeTrackingVersion BIGINT = (SELECT TOP 1 SYS_CHANGE_VERSION FROM [dbo].[ChangeTrackingVersion]), @currentChangeTrackingVersion BIGINT = (SELECT CHANGE_TRACKING_CURRENT_VERSION()); INSERT INTO [dbo].[Staging_MyTable] SELECT CT.ID, ISNULL(MT.Birthday, \u0026#39;\u0026#39;) AS Birthday, ISNULL(MT.FavouriteCake, \u0026#39;\u0026#39;) AS FavouriteCake, CT.SYS_CHANGE_OPERATION FROM [dbo].[MyTable] AS MT RIGHT JOIN CHANGETABLE(CHANGES [dbo].[MyTable], @lastChangeTrackingVersion) AS CT ON MT.ID = CT.ID WHERE CT.SYS_CHANGE_VERSION \u0026lt;= @currentChangeTrackingVersion; --Then, run a merge script, with logic in place to handle each potential record operation MERGE [dbo].[ReportingMyTable] AS target USING [dbo].[Staging_MyTable] AS source ON target.[ID] = source.[ID] --If change was an INSERT, add it to the database. WHEN NOT MATCHED BY TARGET AND source.[SYS_CHANGE_OPERATION] = \u0026#39;I\u0026#39; THEN INSERT ([ID], [Birthday], [FavouriteCake]) VALUES (source.[ID], source.[Birthday], source.[FavouriteCake]) --If change was an UPDATE, update existing record. WHEN MATCHED AND source.[SYS_CHANGE_OPERATION] = \u0026#39;U\u0026#39; THEN UPDATE SET target.[Birthday] = source.[Birthday], target.[FavouriteCake] = source.[FavouriteCake] --If change was a DELETE, then delete the record in target WHEN MATCHED AND source.[SYS_CHANGE_OPERATION] = \u0026#39;D\u0026#39; THEN DELETE; GO --Finally, we update the change tracking table to record the fact that we have grabbed the latest changes DECLARE @currentChangeTrackingVersion BIGINT = CHANGE_TRACKING_CURRENT_VERSION(); EXEC [dbo].[uspUpdateChangeTrackingVersion] @currentChangeTrackingVersion, \u0026#39;MyTable\u0026#39; --Verify the results now - should be 3 results and ID 2 should have a birthday of \u0026#39;1989-10-1\u0026#39; SELECT * FROM ReportingMyTable With this, you can start to use change tracking to manage your end-to-end synchronisation processes between multiple databases.\nWrapping Up Change tracking is an incredibly useful tool to have within your arsenal, particularly in the context of working with SQL Server Integration Services (SSIS) or Azure Data Factory integration tasks. When working with enormous datasets, you can build in logic to only interact with the data you need, reducing the amount of time it takes to complete these tasks. Also, if you are also running these workloads in the cloud, there is a vast cost-reduction potential as well. And, as I hope, this post demonstrated just how easy it is to get started with using change tracking. You can download the entire, end-to-end SQL script file for everything covered in this post from my GitHub page. Let me know if you have any queries in the comments below! 🙂\n","date":"2019-09-15T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/getting-started-with-sql-server-change-tracking/","title":"Getting Started with SQL Server Change Tracking"},{"content":"It\u0026rsquo;s a busy time for licensing changes in the Dynamics 365 Customer Engagement (D365CE) space presently, as my post last week on the new API limits touches upon. 
This week, I heard that effective from Tuesday 1st October, the following D365CE related plans will be no longer available for purchase via the Cloud Solutions Provider (CSP) scheme:\nDynamics 365 Enterprise Edition Customer Engagement Plan - Tier 1 (99 Users) Dynamics 365 Enterprise Edition Customer Engagement Plan for CRMOL Professional (Qualified Offer) Dynamics 365 for Sales, Enterprise Edition for CRMOL Professional (Qualified Offer) CSP, for those unaware, is when you transact all of your licensing, support and billing through a Microsoft Partner, as opposed to dealing with Microsoft directly. Adopting this licensing model can typically introduce several benefits, including more cost-efficient licensing and the ability for partner organisations to offer more services to customers. If you are still buying your licenses directly from Microsoft, then I would urge you to make the jump across to CSP at the earliest opportunity.\nGoing back to the cancellations now. Considering that CRM Online has been dead, as a marketing construct at least, for three years now, it makes sense that these last two plans are retired. I highly doubt there is still an organisation with the old-style CRM Online licenses via CSP on their Office 365 tenant, thereby making qualification for these impossible to achieve. Where things get more interesting is when we consider the 99 users plan\u0026hellip;\nCould this be a means of (temporarily) avoiding increased licensing costs and API limits? Currently, the Customer Engagement plan comes in at around £80-£90 per user per month. For this, you get quite a lot:\nAccess to all base applications (Sales, Service etc.) Microsoft Flow PowerApps Forms Pro SharePoint Online (Plan 2) Microsoft Project (both online and the desktop app) This smorgasbord may be overkill for specific organisations and does represent a pretty high cost of ownership, particularly if you only require access to the Sales app. In which case, you would typically purchase this at a much lower price, usually around the £70 mark. Now, with licensing shifting to a base + attach model (e.g. purchase Sales only upfront, and include Service later on for a reduced \u0026ldquo;add-on\u0026rdquo; cost), costs here could soar. Based on my calculations, you could be looking at paying around £130+ per user to access all base applications. These figures derive from the following post from QBS, which also provides an excellent summary of the upcoming changes as part of the October release. Therefore, could it be cheaper at this juncture to go for the above Customer Engagement plan as a new or existing customer?\nRelated to this is the whole furore over the API limits. Some interesting discussion and clarification points have arisen during this week on this very subject. In particular, there have been some informative posts from MVP\u0026rsquo;s Scott Durow and Gustaf Westerlund, providing some additional answers. The following points raised by this are of interest in the context of these CSP licensing changes:\nNo new licensing information has been announced and is available as part of CSP price lists at the time of writing this post. It\u0026rsquo;s, therefore, difficult to compare apples with apples and see what the new state of play will be from October 1st, pricing wise. The Dynamics 365 Enterprise plan (i.e. the Customer Engagement Plan) has an API limit of 20,000 requests per month. The \u0026ldquo;add-on\u0026rdquo; capacity for API requests comes in at around $50 for 10,000 requests every 24 hours. 
Therefore, \u0026ldquo;locking in\u0026rdquo; the Customer Engagement Plan offer before the 1st October could lead you to have more bang for your buck when it comes to features and API limits. Ultimately, while these benefits do sound amazing, keep in mind that you will be unable to use this license beyond any renewal date for the plan. Typically, this will be one year for CSP plans, making all of these benefits null and void beyond October 2020. However, it does at least present an opportunity for smaller organisations to save money and get more API requests, before having to have any problematic conversations in 2020 about price increases. For organisations who work on year-to-year budgets for licensing, this may be a concern that is not necessary to consider in-depth at this stage.\nNow, ultimately, I would caveat this entire post by saying I am in no way a licensing expert. Thus, I would very much urge you to forward any queries to your internal/external specialist in this area. And you need to be under the threshold of 99 users within your organisation to ensure you meet the terms of this offer. Also, I would question how you could \u0026ldquo;sell\u0026rdquo; this as a solution in good faith, particularly when it could lead to higher costs in the long run. All you are doing is buying time, to either save money or forestall any refactoring of a solution consuming too many API requests. What I will say is that, if you\u0026rsquo;re currently using these licenses, start clarifying their renewal date(s) and consider your future options. Just don\u0026rsquo;t be surprised if your CSP provider can not give you a straight answer on the second point for now. I am unable to see any cast-iron information on my side (as an indirect CSP provider by day) to illustrate potential costings and migration pathways. So be sure to cut your CSP partner some slack if they don\u0026rsquo;t come back to you straight away 🙂\n","date":"2019-09-08T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/september-2019-dynamics-365-customer-engagement-csp-licensing-changes/","title":"September 2019 Dynamics 365 Customer Engagement CSP Licensing Changes"},{"content":"As we began to settle into 2019, we finally saw a cohesive and logical strategy emerge concerning Microsoft Dynamics 365 Customer Engagement (formerly known as Dynamics CRM). The announcement of the Microsoft Power Platform helped in setting a positive direction of travel for the various so-called \u0026ldquo;business application\u0026rdquo; products - namely, the Common Data Service, PowerApps and Microsoft Flow. Customer Engagement still plays a central role as part of all this, as the in-effect underlying database of the entire platform. As part of this necessary re-alignment, we have seen several important and potentially disruptive announcements come out:\nPreviously, customers would pay based on the total database storage consumed. This has now been segregated out into pricing for database, file and log capacity, which are metered separately. Licensing has seen further refinements, with new license types to address individual usage cases and a new, metered licensing for Portals, now rebranded as PowerApps Portals. The overall customisation experience for Customer Engagement is slowly but surely moving across into PowerApps. If you are not already using this new portal to manage your Customer Engagement deployments, then you need to start thinking about moving across today. Potentially a lot to take in! 
This is why keeping abreast of what is happening is so important, either through following the latest community blogs or by attending some of the great events happening out there today. By doing this, you can see and hear about what is happening, often from Microsoft themselves.\nNow, as August comes to a close, we see another important announcement relating to API request limits come out of the woodworks. If this is not correctly read and understood by ISV\u0026rsquo;s and developers, then I can imagine a whole heap of problems occurring. Let\u0026rsquo;s digest the salient points from the announcement to see just how big of an impact this will be:\nThe devil is in the detail As is fairly typical of announcements like this, more questions are raised than answered. A couple from my side include:\nWhat is the pricing for the \u0026ldquo;add-on capacity\u0026rdquo; component mentioned? At the moment, this does not appear to be an available SKU on Office 365. Is there a set date on which the limit will be enforced? What tools (if any) do Microsoft recommend/advise organisations to use to monitor the number of API requests across different user accounts? No doubt answers will be forthcoming, but the relatively short window relating to these changes could be tricky and cause organisations some problems.\nPlug-in Operations are included I\u0026rsquo;ve been having a lot of conversations regarding plug-ins recently and how, potentially, a complete re-think may be required in how they are architectured and deployed. This announcement would appear to throw this into tighter focus, as Microsoft are classing any plug-in message operation as an API call. This could, potentially, be the element that sucks up the most requests, particularly for any ISV solution or a highly customised Customer Engagement deployment. I\u0026rsquo;d also question just how this may work in practice as well, given that PSA, a Microsoft solution, utilises plug-ins across almost all aspects of the application. I would assume these to be subject to the same API request limits. In essence, plug-in developers will need to start analysing their code and introduce any efficiencies, where required. Alternatively, hiving all of this off to Azure via the Azure Service Bus could be the way forward, potentially.\nNon-Licensed Users Longstanding CRM developers will no doubt be familiar with the process of setting up a non-interactive user and the process this entails. These user types do not require a license, thereby allowing you to create dedicated service accounts to perform any programmatic operation straightforwardly. Microsoft limits you to up to seven of these account types, thus preventing any abuse. But other than that, there were no significant limitations with their usage. This is no longer the case as part of this announcement. Now, you will need to ensure these user types are assigned a relevant capacity add-on, that then dictates the number of requests they can process. Failing to do this will result in them receiving a grand total of zero available requests and, no-doubt, extensive errors in your particular application if utilised.\nIn principle, I am supportive of these changes, difficult though they may be to stomach\u0026hellip; Seriously, no joke here. Let\u0026rsquo;s consider an example, that bears in mind that everyone with a Customer Engagement instance is, in effect, sharing the same \u0026ldquo;space\u0026rdquo;. 
Why is it fair that an instance with zero API calls has to pay the same as one making thousands per hour and, consequently, causes performance issues for others? Also, is there a reason why this instance is making so many API calls in the first place? I think a clear subtext from this announcement is that a lot of people have, quite frankly, been taking the piss for far too long. Whether this be through poorly developed applications that do not adhere to best practice approaches, there are evidently a lot of API calls occurring against the backend platform. Many of these, I would warrant, are either excessive or completely unnecessary. While it is a shame that things have come to this juncture, I certainly see the logic in going down this approach. Sitting from Microsoft\u0026rsquo;s perspective, I, as a software provider, want to ensure that all of our customers are not impacted adversely by the actions of one organisation/individual. We should keep in mind also that API limits are already in place for Customer Engagement - previously set at 4000 requests, per organization instance, during a 5-minute sliding window. Ultimately, I think the writing has been on the wall for a while, meaning any anger should not necessarily derive from its surprise.\n\u0026hellip;but this is a hell of a short window for everyone to get their houses into order. As much as I may agree with the direction of travel, that\u0026rsquo;s not to say I approve of how fast we are going. 🙂 Expecting ISV\u0026rsquo;s and end-users to get everything in place to ensure no disruption within 2 months is very optimistic. The amount of potential evaluation, refactoring, testing and pleading customer phone calls that may result from this is more akin to a six to twelve-month project. And this, of course, assumes ISV\u0026rsquo;s or developers are being proactive in communicating with customers or their businesses. I can fully anticipate a scenario of panicked frenzy in early October, as business systems start erroring and there is no-one immediately available to remedy the issue. Ultimately, a lot of this will come down to whether the API limits will be strictly enforced from October 1st onwards, or whether a \u0026ldquo;bedding in\u0026rdquo; period will exist. I sincerely hope Microsoft go for the latter.\nNeed some help in understanding how these changes impact you? Attempting to digest and evaluate how these changes may directly impact your organisation could be tricky. If you need any help with this, feel free to reach out to me directly, and I\u0026rsquo;d be happy to discuss them further with you. You can also leave a comment below - I\u0026rsquo;d be interested in hearing others thoughts in regards to these changes.\n","date":"2019-09-01T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/summarising-the-new-dynamics-365-customer-engagement-api-request-limits-allocations/","title":"Summarising the New Dynamics 365 Customer Engagement API Request Limits \u0026 Allocations"},{"content":"Error messages can come in all shapes and sizes. And, as some of my more recent blog posts have hinted towards, often the best way of overcoming these is to read any presented error message correctly. 
Attempting to conjure up some grandiose and far-reaching conclusion can often lead to fruitless hours down the rabbit hole, when ultimately the tried and tested IT resolution methods will, more often than not, cause all of your problems to disappear.\nBearing all of this in mind, I was recently presented with the following error message during a data-tier application package (DACPAC) deployment to an Azure SQL database, as part of an Azure DevOps release pipeline:\nDevOps automation will almost always take the headache out of managing frequent deployments. Notwithstanding this simple fact, it still does not mean the end of your problems, as the appearance of this error message very clearly demonstrates. To compound things further, as we have no detailed error information to work with, merely reading the precise text over and over again would not lead to a speedy resolution.\nIn an attempt to resolve the error, I booted up Visual Studio and tried deploying out the DACPAC manually, using the same service account defined as part of the Azure DevOps pipeline. What, at the time, felt like a vain attempt at resolution quickly bore fruit. It turned out that the service account, an Azure Active Directory (AAD) Administrator account on the Azure SQL database, had an expired password. This circumstance was, therefore, causing any login attempt to fail and, consequently, the error displayed above. Resetting the password, updating the pipeline connection credentials and rerunning the deployment, naturally, caused everything to start working again successfully.\nI\u0026rsquo;m unsure whether this generic error message appears for other issues that could potentially occur as part of a deployment. But, if you stumble upon this message yourself, hopefully, this post will provide a steer towards at least one thing that you can double-check before any hair pulling occurs! 🙂\n","date":"2019-08-25T00:00:00Z","image":"/images/Azure-Pipelines-e1557238792964.png","permalink":"/dealing-with-an-unexpected-failure-occurred-error-message-in-azure-devops-dacpac-deployments/","title":"Dealing with \"An unexpected failure occurred\" Error Message in Azure DevOps DACPAC Deployments"},{"content":"The number of business scenarios I see these days involving Dynamics 365 Customer Engagement (D365CE) and Power BI is snowballing. It is pretty easy to understand why. Power BI is an incredibly engaging and intuitive reporting tool that, in some ways, surpasses the options available to us natively within D365CE. However, getting started can be easier said than done, thanks in part to the multiple possible avenues for retrieving data from the application. At the time of writing this post, these primarily include four options:\nThe Common Data Service Connector The Dynamics 365 (online) Connector The Web Connector, i.e., directly querying the application\u0026rsquo;s Web API endpoint Using the Power Query (M) Builder community tool Each option has its own distinct set of advantages and disadvantages. In this week\u0026rsquo;s blog post, I wanted to evaluate each one in a little more detail, to hopefully assist you in determining the most suitable one for your particular situation.\nCommon Data Service Connector The Common Data Service Connector is the newest option available to us, having only just recently come out of public preview. Insofar as your potential reporting requirements are relatively simple, this connector is most certainly the \u0026ldquo;beginner\u0026rsquo;s choice\u0026rdquo; for bringing data in from D365CE.
This is because it supports the latest version of the application\u0026rsquo;s Web API and has several useful options to help with modelling any returned data. The connector can be accessed via the Get Data option on Power BI Desktop, as illustrated below:\nOnce selected, and as indicated below, you must then specify the following options:\nServer URL: The full server URL for your Dynamics 365 Customer Engagement instance. For tenants located within the United Kingdom, this will typically be in the format https://.crm11.dynamics.com. Reorder columns: If set to true, the connector will return all entity data columns in alphabetical order. Add display column: If set to true, Power BI will include an additional column for specific data types to assist with readability. For example, Option Set fields will return the Option Set Display Label. I recommend that the Reorder columns and Add display column options are always set to true. After pressing OK and logging in with an Organizational Account that has sufficient privileges to access the instance, you will see a list of data (entities) that can be selected:\nThe Entities folder will display a list of all distinct entities from Dynamics 365 Customer Engagement, formatted using the Entity logical name. Any selected entity data will then load into the Power Query model after pressing the Load button. You can also click the Edit button to automatically load the Power Query Editor, thereby allowing you to carry out additional transformations to your data.\nUse the Common Data Service Connector when: You have been asked to set up a new report using Power BI. No previous work has been carried out in building out D365CE system views, reports, etc. that contain snapshots of the data that you need. You have minimal experience using the Power Query Editor to shape data. Dynamics 365 (online) Connector Before the previous connector was generally released, the Dynamics 365 (online) Connector was the only available D365CE connector for Power BI Desktop. It is accessed in much the same way, from within the Online Sources tab on the Get Data dialog window:\nOnce selected, the options available for selection are spread across the Basic and Advanced radio buttons:\nWithin the Basic tab, you must specify a single option – the full Web API URL from the application. This value can be obtained by navigating to the Developer Resources area within Dynamics 365 Customer Engagement and locating the Instance Web API URL. For instances located in the United Kingdom, this will typically be in the format of https://.crm11.dynamics.com/api/data/v9.1/. You can also specify different versions of the API to use, choosing from the following options: v8.0 v8.1 v8.2 v9.0 The Advanced tab allows you to specify additional URL query parts to append to the given Web API URL. For example, the screenshot below shows how to use the URL parts to return data from the Accounts entity only: If the Basic options are utilised, then data is returned in the same manner as the Common Data Service Connector. This then allows you to select and preview data from multiple entities before loading it into your model.\nUse the Dynamics 365 (online) Connector when: Actually, unless you are already using this connector as part of an existing report, I really would not recommend you use this at all. Because the connector excludes options to sort columns by alphabetic order or to return option set labels, working with any returned data becomes that much more difficult.
What\u0026rsquo;s more, now that the Common Data Service Connector has been released, I suspect that the Dynamics 365 (online) Connector will eventually go the way of the Dodo.\nWeb The Web connector utilises the Web API OData Feed, similar to the Dynamics 365 (online) Connector, but with scope to fully leverage custom Web API OData queries. For example, let\u0026rsquo;s assume you have the following Web API URL to return filtered Contact entity data:\nhttps://.crm11.dynamics.com/api/data/v9.1/contacts?$select=emailaddress1,fullname\u0026amp;$filter=contains(firstname, \u0026lsquo;Joe\u0026rsquo;)\u0026amp;$orderby=fullname asc\nThis URL can then be entered into the From Web connector dialog box to return the results of that specific query:\nData will then load into the Power Query Editor as a JSON object. You must then click the List hyperlink to expand out and return a list of all records within a tabular format:\nTo understand the types of things that you can do with the Web API, I recommend that you take a look through the Microsoft Docs article that is dedicated to this subject.\nUse the Web Connector when: You have previously authored OData or FetchXML queries and wish to re-use them within Power BI Desktop. You are comfortable working with web API\u0026rsquo;s and the Power Query Editor. Similar to the Dynamics 365 (online) Connector, you should anticipate some work to ensure that, for example, option set values are displayed correctly. You require granular control over the various D365CE Web API options. Power Query (M) Builder The Power Query (M) Builder application is a community plugin, available as part of the freely distributed XrmToolBox. The XrmToolBox brings together several useful tools for Dynamics 365 Customer Engagement administrators, developers, and customisers. Once you have downloaded the XrmToolBox, the Power Query (M) Builder application can be installed by navigating to the Plugins Store within the application:\nOnce installed, you can then use the tool to:\nSelect the entity data that you wish to query from Power BI. This is achieved by defining the fields that you want to return, based on an existing Entity view, or by specifying the list of fields to return from directly within the tool. Bring in any existing FetchXML queries and convert them into Power Query M code. Generate M queries for returning entity data and any related Option Set information. The plugin has an intuitive interface that allows you to build your queries by selecting the entity/fields you would like to work with. In the example below, the Accounts I Follow view has been chosen to generate an M query code snippet:\nCode generated within the tool via the GenerateOData and GenerateOptionSets buttons can then be straightforwardly copied across into Power BI by using the Blank Query data source and using the Advanced Editor option to paste in any relevant code:\nUse the Power Query (M) Builder when: You are already using the XrmToolBox daily and are familiar with its core components. You have existing custom views or FetchXML queries that you want to re-use within Power BI Desktop, and you are looking for an easy way to migrate these across. You have a basic awareness of using the Power Query Editor and are looking for a solution that involves you writing a minimal amount of code. I hope this post has been useful in demonstrating the options available to work with your D365CE data from within Power BI Desktop. 
Please leave a comment below if you have any questions.\n","date":"2019-08-18T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/evaluating-power-bi-desktop-connector-options-for-dynamics-365-customer-engagement/","title":"Evaluating Power BI Desktop Connector Options for Dynamics 365 Customer Engagement"},{"content":"The problem sometimes, when developing a bespoke solution, is that it is impossible to anticipate how it may evolve. Whether it becomes dependent on changed functionality or on new systems being introduced into the equation, this eventuality is not necessarily down to a scoping or requirements gathering error. Organisations can, and will, drive off into all sorts of unexpected directions over time and, like it or lump it, IT systems are generally the last to follow behind any change. In this week\u0026rsquo;s post, I want to focus on a recent example I was involved in related to this theme, involving Dynamics 365 Customer Engagement (D365CE) and the Project Service Automation (PSA) add-on.\nSetting the Scene: What is a Custom Pricing Plug-in? Those familiar with D365CE will no doubt be able to explain how the application automatically totals up the various product line items for core, sales-related entities - Opportunity, Quote etc. Typically, this behaviour will be suitable for most scenarios, but there may be situations where fine-tuning is required to factor in additional calculations or perform a bespoke integration. For example, there may be a requirement to query an external ERP system to ensure that the Cost Price value for a line item record is correct. And, while it is undoubtedly possible to customise the application around how the default pricing calculations work, solutions of this nature can typically become unwieldy to maintain over time. Besides, you always remain at the mercy of how the application decides to perform sales entity calculations, which may be subject to change between major version releases of D365CE.\nFortunately, one of (I believe) the best-kept secrets concerning this functionality is that a) it can be disabled entirely and b) replaced with new pricing logic of your choosing. The CalculatePrice message is the C#/VB.NET developer\u0026rsquo;s gateway towards achieving this and, by developing a bespoke plug-in, it is possible to tailor all sales calculations completely. While implementing a solution of this nature does, invariably, introduce a degree of complexity into your D365CE deployment, it does have several benefits:\nAvoids a messy/\u0026ldquo;hacky\u0026rdquo; solution being implemented using Business Rules, Flows or any other customisation feature within the application. Provides the capability to apply the same calculations to all sales entities quickly or implement bespoke logic to calculate Quotes differently from Opportunity records, for example. Allows you to achieve more complex integration requirements, such as those involving separate application systems. I am a major proponent of implementing custom pricing solutions and have discussed how to get started with custom pricing plug-ins previously on the blog. However, I do always heavily caution their use; they should be a \u0026ldquo;last resort\u0026rdquo; if you cannot work with D365CE\u0026rsquo;s default pricing functionality. The old maxim stands true: just because you CAN do something, doesn\u0026rsquo;t necessarily mean you should 🙂\nPSA + Custom Pricing Plugin = ?
Going back now to the example I alluded to at the start of this post\u0026hellip;I was contacted by a customer who was having issues with a custom pricing solution I had implemented for them a while back. Since then, the organisation had implemented PSA and had started to get very nasty error messages appearing when working with Project-based Line records, as indicated below:\nAs the error message demonstrates, the culprit was the custom pricing plug-in, which was preventing the creation of any associated Line Detail record. Evaluating the tracing logs for this plug-in showed that everything worked fine and dandy, up until the point when any actual calculation values needed applying to the Quote Line record. This step would fail with the following error message:\nCalculatePrice: System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault]: The total line amount and estimated tax of all Sales Document Line Details must equal the line amount and estimated tax of the Sales Document Line. (Fault Detail is equal to Exception details:\nFurther investigation of the error logs pointed towards the PSA managed solution plug-in Microsoft.Dynamics.ProjectService.Plugins.PreValidatequotedetailUpdate being the culprit for the raising of the OrganisationServiceFault error in the first place.\nUnderstanding How PSA Works At this stage, I had to get to grips with how this nice-looking PSA application worked under the hood, having previously only a surface-level awareness of its behaviour. PSA is designed to work very closely alongside the existing Sales functionality within D365CE. This approach provides not only the ability to manage/plan projects at a resource-level but ensures the appropriate factoring of any delivery costs into generated sales records. For \u0026ldquo;time \u0026amp; material\u0026rdquo; projects, this is crucial, as providing sufficient accounting for a resources time and financial return is imperative. For the most part, PSA relies on the tried and tested Sales module functionality to deliver most of these requirements, but also provides the following, additional Line Detail entities, linked to the corresponding sales/product line entity:\nOpportunity Line Detail Quote Line Detail Sales Contract (i.e. Order) Line Detail Invoice Line Detail The purpose of these entities (except for the Opportunity Line Detail, which records a \u0026ldquo;finger in the air\u0026rdquo; budget value instead) is to allow multiple resources to be associated with a Quote Line record. For example, let\u0026rsquo;s assume you are charging a customer for Project Scoping. As part of this, the following resources need to be assigned:\n4 hours for a Solution Architect 6 hours for a Pre-Sales Consultant 6 hours for a Project Manager In this scenario, all three of these roles would be created as Line Detail records against a Project Scoping Line record. Although this process does involve a few additional steps compared to the default Sales module, this functionality is also useful as it allows you to:\nFactor in whether a resource is chargeable as part of an activity or not. Allows you to categorise the type of cost that the Line Detail is. Indicate the anticipated length of time that the resource will require involvement as part of an engagement. So with a combination of understanding how the application works and a thorough diagnosis of the error message in question, the answer to the problem seems relatively apparent. 
Any PSA custom pricing plug-in must factor the total, chargeable value of all related Line Detail\u0026rsquo;s, ensuring the Amount and Estimated Tax fields update accordingly.\nThe Techie Bit: Evaluating a Suggested Approach Now that we know the problem, we can start to think about a potential resolution. The C# class below shows a potential approach for calculating PSA Product Line details records. Pass any Line record through, and the pesky error message shown above should no longer appear, and all expected calculation will be applied:\nprivate static void CalculatePSAProduct(Entity e1, IOrganizationService service, ITracingService tracingService) { //Determine which Line Detail records to retrieve from - throw an error if an unexpected record has been passed through. //Also need to determine the lookup field name as part of this operation - either msdyn_quotelineid or msdyn_salescontractlineid string detailName; string detailRelatedEntityName; switch (e1.LogicalName) { case \u0026#34;quotedetail\u0026#34;: detailName = \u0026#34;msdyn_quotelinetransaction\u0026#34;; detailRelatedEntityName = \u0026#34;msdyn_quotelineid\u0026#34;; tracingService.Trace(\u0026#34;Record being calculated = Quote Line Detail\u0026#34;); break; case \u0026#34;salesorderdetail\u0026#34;: detailName = \u0026#34;msdyn_orderlinetransaction\u0026#34;; detailRelatedEntityName = \u0026#34;msdyn_salescontractlineid\u0026#34;; tracingService.Trace(\u0026#34;Record being calculated = Order Line Detail\u0026#34;); break; default: throw new InvalidPluginExecutionException(\u0026#34;An error occurred when applying custom pricing. An incorrect entity name was passed to the CalculatePSAProduct method\u0026#34;); } //Retrieve all related, chargeable Line Detail records that have a Transaction Type value of \u0026#34;Project Contract\u0026#34; with the Amount, Tax and amount_after_tax field values tracingService.Trace(\u0026#34;Retrieving all related, chargeable \u0026#34; + detailName + \u0026#34; records...\u0026#34;); QueryByAttribute qba = new QueryByAttribute(detailName); qba.ColumnSet = new ColumnSet(\u0026#34;msdyn_amount\u0026#34;, \u0026#34;msdyn_tax\u0026#34;, \u0026#34;msdyn_amount_after_tax\u0026#34;); qba.Attributes.AddRange(\u0026#34;msdyn_billingtype\u0026#34;); qba.Values.AddRange(192350001); qba.Attributes.AddRange(\u0026#34;msdyn_transactiontypecode\u0026#34;); qba.Values.AddRange(192350004); qba.Attributes.AddRange(detailRelatedEntityName); qba.Values.AddRange(e1.Id); EntityCollection lt = service.RetrieveMultiple(qba); tracingService.Trace(\u0026#34;Got \u0026#34; + lt.Entities.Count.ToString() + \u0026#34; \u0026#34; + detailName + \u0026#34; records!\u0026#34;); //Total up each individual Line Detail record to determine Amount, Estimated Tax and Amount After Tax decimal ppu = 0; decimal t = 0; decimal ea = 0; for (int i = 0; i \u0026lt; lt.Entities.Count; i++) { //If no value in the returned fields, default to 0 ppu += ((Money)lt.Entities[i][\u0026#34;msdyn_amount\u0026#34;])?.Value ?? 0; t += ((Money)lt.Entities[i][\u0026#34;msdyn_tax\u0026#34;])?.Value ?? 0; ea += ((Money)lt.Entities[i][\u0026#34;msdyn_amount_after_tax\u0026#34;])?.Value ?? 0; } //From here, you can begin to apply any custom calculations needed to satisfy your individual requirements. //In this example, we simply pass the totals from above onto the relevant Line fields and then update the Line record. 
e1[\u0026#34;priceperunit\u0026#34;] = new Money(ppu); tracingService.Trace(\u0026#34;Quoted/Contracted Amount = \u0026#34; + ppu.ToString()); e1[\u0026#34;tax\u0026#34;] = new Money(t); tracingService.Trace(\u0026#34;Estimated Tax = \u0026#34; + t.ToString()); e1[\u0026#34;extendedamount\u0026#34;] = new Money(ea); tracingService.Trace(\u0026#34;Quoted/Contracted Amount After Tax = \u0026#34; + ea.ToString()); service.Update(e1); tracingService.Trace(e1.LogicalName + \u0026#34; updated successfully!\u0026#34;); } Some further explanation on this class may be required, given some additional quirks that PSA throws into the mix:\nThe code is designed to be Line Detail record agnostic, which is why a switch statement is needed to determine certain field values used within the QueryByAttribute query. The Opportunity Line and Invoice Detail entities do not behave the same way as the Quote \u0026amp; Order Line entities, so these are specifically excluded. It would appear that the PSA application creates several hidden Line Detail records in the application for every Line Detail record that a user creates. These blank records do not contain any of the expected pricing values. Therefore, to avoid potential issues, the QueryByAttribute filters specifically ensure these are not returned and, also, that only records marked as chargeable are brought back. Beyond that, most of the code should be self-explanatory for those familiar with working in C# - however, let me know in the comments below if you have any queries.\nAll Change Ahead At the time of writing this post, we have just recently seen a major upgrade of PSA take place, from version 2 to version 3. This particular release saw a lot of change within PSA and, indeed, users of version 2 of the application need to specifically request Microsoft to allow them to upgrade to the newer version. This requirement is likely due to the risk of stuff breaking between versions. Now, I am hearing on the grapevine that another significant upgrade will be coming as part of the fall 2019 release. A reading of the release notes would seem to suggest this to be a major overhaul, likely involving the entire removal of the current embedded Microsoft Project functionality. We will no doubt find out more about this soon, but I would caveat this entire post by saying its outlined fix could become redundant within months. All we can do for now is keep a keen eye on the release notes and see how things ultimately pan out.\nThe Community is Awesome In times of crisis (or, rather, new and exciting learning opportunities), you can always rest assured that the fantastic D365CE community will be on hand to assist. In this case, I must give thanks to the PSA wizard Antti Pajunen for being a reliable and authoritative steer on PSA and in helping to answer my questions relating to it. If you are working with PSA and are not already following his blog, then get over there today - you won\u0026rsquo;t be disappointed!\nI Could Talk All Day About This Stuff\u0026hellip; No, seriously, I could.
Feel free to leave a comment below or contact me if you would like to find out how to implement a custom pricing solution within your D365CE system.\n","date":"2019-08-11T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/a-few-observations-on-using-custom-pricing-plugins-alongside-project-service-automation/","title":"A Few Observations on Using Custom Pricing Plugins Alongside Project Service Automation"},{"content":"Some of the new data import options that PowerApps provides us are nice. Perhaps the most significant area of innovation concerns the functionality available within Data integration projects. Full Power Query capability is now available to us when importing data, giving us a range of features that, previously, we could only have dreamed of having. Now, we can do the following with only a few button presses:\nDynamically add on new columns, based on conditional logic. Convert data to a range of different types, such as numeric, boolean and date/time. Remove problematic rows of data or fix them by replacing them with custom values. Merge two separate data sources into a single, unified dataset. All told, this functionality provides a smooth runway for those who have worked previously with Power BI and are looking to turbo-charge their PowerApps journey. To build out your first Data integration project, navigate to the PowerApps portal and select the Data -\u0026gt; Data Integration option. From here, you can then click the New data integration project button to start bringing your data into the application:\nData imported in this fashion can then be straightforwardly loaded into a Common Data Service (CDS) entity of your choosing - either an existing or a brand new one. For those interested in finding out more about the capabilities included within Power Query, my recent series covering Microsoft Exam 70-778 delves into this very subject area on several occasions.\nRecently, I was experimenting with importing some large - and very untidy - datasets into CDS, using the method outlined above. Little or no work had been carried out on the data itself before loading it up into the PowerApps portal. Thankfully, though, a lot of the tools available within the online Power Query editor were able to get the data into a more reasonable shape. All was going well until I began the process of importing the data into CDS. Rather than creating the entity/fields manually, I was feeling very lazy and instead chose the Load to new entity option. This action forces the application to go off and create the entity and my list of fields, inferring the required information based on my Power Query editor output. Everything seemed to be progressing nicely until I hit the following error message during the actual import stage:\nThe full error message is as follows:\nThere was an error while creating this entity. Details: Sql error: A validation error occurred. A string value provided is too long. CRM ErrorCode: -2147012607 Sql ErrorCode: -2146232060 Sql Number: 8152.\nUnfortunately, this was not an occasion where the error message indicated a clear way forward for a fix. I initially suspected that the issue was down to individual rows being too long for inserting into the CDS database, but quickly discounted this after evaluating the data in question. In the end, the issue turned out to be staring me right in the face and required placing my Dynamics CRM/Dynamics 365 Customer Engagement hat firmly on.
Those well versed with creating custom fields within this application will recall that there is a fixed limit on the length of a columns Display Name and (Logical) Name values - 50 and 41 respectively. When comparing this to the dataset I was working with, I observed something similar to what can be seen in the below example:\nAs this example demonstrates, we have an excessively long column name that breaches the enforced limit for custom fields. So, at this stage, our solution should be reasonably obvious - fix the offending column name!\nThe above example file will then get imported successfully, creating the entity and each of the fields successfully:\nPower Apps and the Common Data Service present new and exciting opportunities for individuals looking to build practical business applications. Fortunately, given that version 2 of the CDS is utilising the Dynamics CRM/Dynamics 365 Customer Engagement data, the knowledge and investments made into this product over many years are instantly transferrable across. As such, addressing errors such as the one described in this post become a lot easier, as all of this accumulated knowledge is not necessarily consignable to the scrapheap. Also, we can now leverage some VERY cool Power Query capability to ensure the smooth completion of any data integration or migration exercise.\n","date":"2019-08-04T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/a-validation-error-occured-data-integration-project-error-powerapps/","title":"\"A validation error occured\" Data Integration Project Error (PowerApps)"},{"content":"I\u0026rsquo;m not sure if anyone else has this problem, but I always have a bias towards working against what an IT system error message is telling me. Perhaps this is down to the numerous occasions where I\u0026rsquo;ve been presented with erroneous error messages, resulting in a solution that you could never have reasonably anticipated from the outset. As such, you can sometimes completely overlook when an error message is telling you clearly what the issue and solution is. Often, evaluating the problem carefully or taking a break from investigating it leads to faster resolutions and the preservation of your patience/sanity.\nI recently played victim to this very problem when attempting to deploy out an Azure Template project from Visual Studio 2019 to a resource group on Microsoft Azure. I kept getting the following message just as the script was starting its deployment:\nNew-AzureRmResourceGroup : Your Azure credentials have not been set up or have expired, please run Connect-AzureRmAccount to set up your Azure credentials.\nThis error confused me initially as I had, only moments ago, successfully connected up to Azure and created a Resource Group via the Visual Studio 2019 interface. In the end, after a bit of head-scratching and a brief spell down the rabbit hole, the solution turned out to be ridiculously straightforward:\nOpen a new PowerShell session. Run the Connect-AzureRmAccount cmdlet and, when prompted, login in using your Microsoft Azure credentials. 
Verify that the details of your subscription are correct after the cmdlet completes successfully, as indicated below: Now, when attempting to redeploy your Template project from Visual Studio, it should connect and deploy out successfully; assuming no further issues with the underlying template.\nI already alluded to this earlier, but it\u0026rsquo;s worth reiterating once again the importance sometimes of taking a step back and not over-complicating a potential IT issue out of the gate. As this example so excellently illustrates, adopting an objective standpoint and reading the error message can sometimes tell you everything you need to get a problem resolved promptly. Save yourself some effort early on by always reverting to Occam\u0026rsquo;s razor - namely, that the simplest solution is more than likely the correct one 🙂\n","date":"2019-07-28T00:00:00Z","image":"/images/VisualStudio-FI.jpg","permalink":"/dealing-with-your-azure-credentials-have-not-been-set-up-or-have-expired-error-message-in-a-azure-template-visual-studio-project/","title":"Dealing with \"Your Azure credentials have not been set up or have expired\" Error Message in a Azure Template Visual Studio Project"},{"content":"An area I\u0026rsquo;ve been doing a lot of work with so far this year is Azure Resource Manager templates. The primary driver behind this is to ensure our team has fully automated our development/deployment processes within Azure DevOps. The list of recent posts on this blog certainly reflects this fact, as we\u0026rsquo;ve seen:\nHow to configure streaming units when working with Stream Analytic jobs, Perform automated validation of Azure templates before deploying them out Reviewed some of the things to bear in mind when working with Azure Web App Application Settings alongside templates. As part of this, we\u0026rsquo;ve also been migrating across several Microsoft Flows to Logic Apps. This migration will ensure we can manage things centrally within Azure and also allow for these solutions to scale more effectively. Thankfully, this migration has been pretty straightforward, due in no small part to the excellent export capabilities that Microsoft Flow natively provides.\nHowever, there have been a few bumps in the road as we build these into an Azure Template. In particular, one Flow/Logic Apps caused us several initial difficulties. It contained a trigger that would fire whenever a new record gets INSERTed to an Azure SQL database. To prevent a strange issue where the trigger would fire for every record created, we implemented a Start time condition, as illustrated below:\nAs we incorporated the above Logic Apps within an Azure Template, we, therefore, had to account for this behaviour. Our proposed solution was to ensure that our Azure Template included a value that would always reflect the current deployment time. Initially, we attempted to use the utcNow Azure Template function to meet this requirement, as illustrated in the snippet below:\n\u0026#34;triggers\u0026#34;: { \u0026#34;When_an_item_is_created\u0026#34;: { \u0026#34;recurrence\u0026#34;: { \u0026#34;frequency\u0026#34;: \u0026#34;Minute\u0026#34;, \u0026#34;interval\u0026#34;: 5, \u0026#34;startTime\u0026#34;: \u0026#34;[utcNow()]\u0026#34; }, ... } } This approach didn\u0026rsquo;t work, as we would get the following error message:\nThe template function \u0026lsquo;utcNow\u0026rsquo; is not expected at this location. 
The function can only be used with parameter default value expressions.\nFortunately, as we discovered, the Logic Apps Workflow Definition Language also has a large array of different functions available, including one that mirrors the above precisely. We, therefore, modified our Logic Apps template definition as follows:\n\u0026#34;triggers\u0026#34;: { \u0026#34;When_an_item_is_created\u0026#34;: { \u0026#34;recurrence\u0026#34;: { \u0026#34;frequency\u0026#34;: \u0026#34;Minute\u0026#34;, \u0026#34;interval\u0026#34;: 5, \u0026#34;startTime\u0026#34;: \u0026#34;utcNow()\u0026#34; }, ... } } However, it didn\u0026rsquo;t like this either, as we got a different error message when deploying this out:\nThe template validation failed: \u0026lsquo;The template trigger \u0026lsquo;When_an_item_is_created\u0026rsquo; at line \u0026lsquo;1\u0026rsquo; and column \u0026lsquo;1485\u0026rsquo; is not valid: \\\\\\\u0026ldquo;The string was not recognized as a valid DateTime. There is an unknown word starting at index 0.\\\\\\\nWell, at least with a different error message, it was a good sign that we were making some progress! What baffled us at this particular juncture was that the default return format of the utcNow function appeared to match exactly what Logic Apps is expecting. The only notable difference was that it included additional millisecond values - to ensure you have maximum precision, I guess. 😂 We did attempt to override the default formatting on several occasions but, each time, we kept getting the same error message above, which was frustrating. In the end, we had to resort to getting the current date/time as part of a parameter value at the top of our Azure template:\n\u0026#34;dateTimeNow\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;defaultValue\u0026#34;: \u0026#34;[utcNow(\u0026#39;yyyy-MM-ddTHH:mm:ssZ\u0026#39;)]\u0026#34; } (Note that Visual Studio may flag this up as an Unrecognised function name error; you can safely disregard this)\nThen, it is possible to reference this parameter value directly from within the Logic Apps definition, like so:\n\u0026#34;triggers\u0026#34;: { \u0026#34;When_an_item_is_created\u0026#34;: { \u0026#34;recurrence\u0026#34;: { \u0026#34;frequency\u0026#34;: \u0026#34;Minute\u0026#34;, \u0026#34;interval\u0026#34;: 5, \u0026#34;startTime\u0026#34;: \u0026#34;[parameters(\u0026#39;dateTimeNow\u0026#39;)]\u0026#34; }, ... } } This template will then deploy out successfully, grabbing the current date/time value and ensuring that it is populated correctly within the Logic Apps resource.\nOnce again, we must thank my colleague Andrew Bennett for coming up with this particular solution, which I thought would be useful to share as part of this week\u0026rsquo;s post. Hopefully, it may help someone else who has been banging their head against a wall with this particular problem 🙂\n","date":"2019-07-21T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/working-with-utcnow-within-an-azure-logic-app-resource-template/","title":"Working with utcNow within an Azure Logic App Resource Template"},{"content":"As you can probably tell from my recent post history (and from the talks I have had the pleasure of delivering so far this year), I am really into Azure Data Factory (ADF) in a massive way at the moment. It really can provide a lot of benefits for organisations who have a particular focus on DevOps and cost-optimisation and, what\u0026rsquo;s more, the product is fully compatible with Dynamics 365 Customer Engagement.
Just be aware that you might need to devote some towards defining your entity mapping schemas and in putting in place creative solutions to handle some field mapping limitations. And, as a cloud solution that is continually evolving by Microsoft, it is not unreasonable to expect the occasional bug, gremlin or bizarre feature limitation, that you could not have anticipated would be present. However, I can say with some degree of confidence that the product is in a stable state for serious consideration as part of your next data integration project.\nA good example of the types of roadblocks alluded to in the previous paragraph can be found in a recent issue myself, and a colleague was having with a particular Copy Activity within ADF that I had authored. Upon execution, it would always throw the following error message whenever it was called from within our pipeline:\n{ \u0026#34;errorCode\u0026#34;: \u0026#34;2200\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;ErrorCode=UserErrorSqlBulkCopyInvalidColumnLength,\u0026#39;Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=SQL Bulk Copy failed due to received an invalid column length from the bcp client.,Source=Microsoft.DataTransfer.ClientLibrary,\u0026#39;\u0026#39;Type=System.Data.SqlClient.SqlException,Message=The service has encountered an error processing your request. Please try again. Error code 4815.\\r\\nA severe error occurred on the current command. The results\u0026amp;#44; if any\u0026amp;#44; should be discarded.,Source=.Net SqlClient Data Provider,SqlErrorNumber=40197,Class=20,ErrorCode=-2146232060,State=1,Errors=[{Class=20,Number=40197,State=1,Message=The service has encountered an error processing your request. Please try again. Error code 4815.,},{Class=20,Number=0,State=0,Message=A severe error occurred on the current command. The results\u0026amp;#44; if any\u0026amp;#44; should be discarded.,},],\u0026#39;\u0026#34;, \u0026#34;failureType\u0026#34;: \u0026#34;UserError\u0026#34;, \u0026#34;target\u0026#34;: \u0026#34;IncrementalCopyActivity\u0026#34; } The activity in question was reasonably straightforward and derived from Microsoft\u0026rsquo;s example on how to work with change tracking data within SQL, and we were using the following query to get our data from our linked SQL database:\nSELECT MyTableUID, MyField FROM [dbo].[MyTable] AS MT INNER JOIN CHANGETABLE(CHANGES [dbo].[MyTable], @{activity(\u0026#39;LookupLastChangeTrackingVersionActivity\u0026#39;).output.firstRow.SYS_CHANGE_VERSION}) AS CT ON MT.MyTableUID = CT.MyTableUID WHERE CT.SYS_CHANGE_VERSION \u0026lt;= @{activity(\u0026#39;LookupCurrentChangeTrackingVersionActivity\u0026#39;).output.firstRow.CurrentChangeTrackingVersion} These field values were then mapped across to another SQL database, with only a partial selection of fields from the MyTable table occurring:\nWe checked the most obvious things first before almost giving up in despair:\nRegenerated the mappings from scratch, in case the definitions were out of date. Double checked the underlying field types and verified they were of the same type/length. Recreated the data source and linked service from scratch - again, no dice. At this juncture, I ended up throwing in the towel and logging a support ticket with Microsoft, assuming it was a problem with ADF. But my very knowledgeable colleague Shuky Lee had other ideas. 
After doing some additional tinkering, she was able to get the pipeline working successfully by modifying the SQL query as follows:\nSELECT MT.MyTableUID, MT.MyField FROM [dbo].[MyTable] AS MT INNER JOIN CHANGETABLE(CHANGES [dbo].[MyTable], @{activity(\u0026#39;LookupLastChangeTrackingVersionActivity\u0026#39;).output.firstRow.SYS_CHANGE_VERSION}) AS CT ON MT.MyTableUID = CT.MyTableUID WHERE CT.SYS_CHANGE_VERSION \u0026lt;= @{activity(\u0026#39;LookupCurrentChangeTrackingVersionActivity\u0026#39;).output.firstRow.CurrentChangeTrackingVersion} Eagle-eyed SQL developers may have already spotted the subtle change, but to explain, it appears as if the query was unable to successfully determine whether to select the columns from MyTable (aliased as MT) or the Change Tracking version of the same table (aliased as CT). By fixing this error and ensuring that the MyTableUID and MyField are prefixed with the MT aliased value, the query can now correctly determine which table to \u0026ldquo;grab\u0026rdquo; the data from, and the pipeline completes successfully. I think me and Shuky were relieved that we were able to resolve this issue and get this piece of functionality working as intended. And, in a way, I was glad that it was an issue with my query as opposed to ADF itself, as that goes some way towards confirming my earlier comments about how stable and mature the product is 🙂\n","date":"2019-07-14T00:00:00Z","image":"/images/ADF-FI.png","permalink":"/dealing-with-sql-bulk-copy-failed-due-to-received-an-invalid-column-length-from-the-bcp-client-errors-in-azure-data-factory/","title":"Dealing with \"SQL Bulk Copy failed due to received an invalid column length from the bcp client\" Errors in Azure Data Factory"},{"content":"For those of you out there who are fortunate enough to have a personal lab environment for tinkering about with all things Microsoft, there is a wealth of learning opportunities that this can afford. Typically, this will help to reinforce your knowledge within areas relevant to your current job role, but I often find myself learning about all sorts of weird and wonderful things that I would never have the courage to interact with during a normal working week, such as:\nActive Directory Domain management. Server upgrades (for example, I recently went through an upgrade from Windows Server 2015 to 2019 on one of my VM\u0026rsquo;s and, in the process, learned about how you need to run the adprep command when upgrading domain controllers) DHCP Hyper-V Managing SQL Server and on-premise Dynamics CRM/365 Customer Engagement instances Routing and Remote Access configuration Invariably, there will be a high degree of trial and error involved here, with the added benefit of having nobody shouting at you if you make a goof that could have catastrophic effects within a live production environment. As a trade-off, you may find yourself hitting a brick wall with frequent issues, with no effective recourse (except the most obvious one) to resolve your problems.\nAn excellent example of an issue like this can be found in some long-standing problems I was having with a Windows Routing and Remote Access VPN configuration. 
It would intermittently have problems connecting up successfully, leading to timed-out connections or error messages like the one displayed below within Windows 10:\nInspecting the Event Viewer logs with the VPN type option set to Automatically or Point to Point Tunnelling Protocol (PPTP) would produce more informative error messages, as indicated below:\nError Message: RoutingDomainID- {: No IP address is available to hand out to the dial-in client.\nError Message: RoutingDomainID- {00000000-0000-0000-0000-000000000000}: CoId={4B23BE14-CCDF-4D5A-A44D-EAE70475D1A0}: The user DOMAIN\\User connected to port VPN4-223 has been disconnected because no network protocols were successfully negotiated.\nIn this particular setup, the machine configured with the Routing and Remote Access server had two network adapters - one being used by a Hyper-V adapter \u0026amp; for DHCP and a separate one, with an assigned IP from a different routers DHCP server. Based on this fact and the error messages above, I can only assume the server was having difficulty obtaining an IP address for any new VPN clients from the DHCP server on the same machine. Fortunately, there is a way in which we can tell the Routing and Remote Access service which IP address(es) to assign to new VPN connections. And, thankfully, the steps involved are straightforward:\nOpen the Routing and Remote Access Microsoft Management Console (MMC) snap-in. Right click on the server name and select Properties. Navigate to the IPv4 tab. Under the IPv4 address assignment error, ensure that the Static address pool option is selected and then press the Add\u0026hellip; button. Based on your DHCP configuration, enter a range of IP addresses that will be assigned to clients connecting to the machine. In the example screenshot below, we have allocated six addresses from the range 192.168.89.150 through to 192.168.89.155. Adjust the range to suit your particular needs. Make sure you select a range that isn\u0026rsquo;t already dished out or could be consumed by other clients: Press OK and then Apply to confirm the assignment. Now, when you attempt to reconnect to the VPN, it should work as intended:\nWhether the suggested solution in this article is suitable for a production environment, I cannot advise further on, but hopefully, the steps here might help somebody else who has come across the same issue when tinkering around within a lab/test environment 🙂\n","date":"2019-07-07T00:00:00Z","image":"/images/WindowsServer-FI.png","permalink":"/resolving-no-ip-address-is-available-to-hand-out-to-the-dial-in-client-vpn-errors-windows-server-routing-and-remote-access/","title":"Resolving \"No IP address is available to hand out to the dial-in client.\" VPN Errors (Windows Server Routing and Remote Access)"},{"content":"I\u0026rsquo;ve extolled the virtues of Application Insights previously on the blog, as I believe it is a nice solution that can provide valuable intelligence concerning your web applications. Whether you are looking to extend the solution to capture additional properties relating to your users, leverage the in-built availability testing features to receive alerts whenever your application is down or incorporate it as part of your existing Azure deployment templates, I am confident that Application Insights can fit multiple business needs. Indeed, it can offer comparable, if not superior, functionality when compared with tools such as Google Analytics. 
A large part of this comes down to the flexible data extraction options within the tool as standard, which allows you to export data for consumption as part of a Stream Analytics Job or to quickly generate M query code snippets for use within Power BI Desktop, to allow you to build out an engaging reporting solution. The first of these options is particularly appealing and a route I have been exploring with a keen interest in the past few weeks. Via the handy Export options available within the Application Insights Log Analytics window, it is possible to get this code easily after you have defined your query:\nThe example code that is generated can be found below:\n/* The exported Power Query Formula Language (M Language ) can be used with Power Query in Excel and Power BI Desktop. For Power BI Desktop follow the instructions below: 1) Download Power BI Desktop from https://powerbi.microsoft.com/desktop/ 2) In Power BI Desktop select: \u0026#39;Get Data\u0026#39; -\u0026gt; \u0026#39;Blank Query\u0026#39;-\u0026gt;\u0026#39;Advanced Query Editor\u0026#39; 3) Paste the M Language script into the Advanced Query Editor and select \u0026#39;Done\u0026#39; */ let AnalyticsQuery = let Source = Json.Document(Web.Contents(\u0026#34;https://api.applicationinsights.io/v1/apps/f07da591-19b0-4b0d-821a-908f0cc3d5ab/query\u0026#34;, [Query=[#\u0026#34;query\u0026#34;=\u0026#34;union pageViews,customEvents | where timestamp between(datetime(\u0026#34;\u0026#34;2019-06-29T09:00:00.000Z\u0026#34;\u0026#34;)..datetime(\u0026#34;\u0026#34;2019-06-30T09:00:00.000Z\u0026#34;\u0026#34;)) | summarize Users=dcount(user_Id) by bin(timestamp, 1h) | order by timestamp asc | render barchart \u0026#34;,#\u0026#34;x-ms-app\u0026#34;=\u0026#34;AAPBI\u0026#34;,#\u0026#34;prefer\u0026#34;=\u0026#34;ai.response-thinning=true\u0026#34;],Timeout=#duration(0,0,4,0)])), TypeMap = #table( { \u0026#34;AnalyticsTypes\u0026#34;, \u0026#34;Type\u0026#34; }, { { \u0026#34;string\u0026#34;, Text.Type }, { \u0026#34;int\u0026#34;, Int32.Type }, { \u0026#34;long\u0026#34;, Int64.Type }, { \u0026#34;real\u0026#34;, Double.Type }, { \u0026#34;timespan\u0026#34;, Duration.Type }, { \u0026#34;datetime\u0026#34;, DateTimeZone.Type }, { \u0026#34;bool\u0026#34;, Logical.Type }, { \u0026#34;guid\u0026#34;, Text.Type }, { \u0026#34;dynamic\u0026#34;, Text.Type } }), DataTable = Source[tables]{0}, Columns = Table.FromRecords(DataTable[columns]), ColumnsWithType = Table.Join(Columns, {\u0026#34;type\u0026#34;}, TypeMap , {\u0026#34;AnalyticsTypes\u0026#34;}), Rows = Table.FromRows(DataTable[rows], Columns[name]), Table = Table.TransformColumnTypes(Rows, Table.ToList(ColumnsWithType, (c) =\u0026gt; { c{0}, c{3}})) in Table in AnalyticsQuery With clear instructions provided, this can be quickly added into Power BI Desktop\u0026hellip;but you do have to make an important choice concerning your chosen authentication method to access Application Insights. For development/testing purposes, it is possible to use an Organizational account for authentication, provided that the Azure Active Directory user in question has been granted the relevant permissions at the subscription/resource level to interact with Application Insights. However, this is not recommended as a solution beyond these realms, for the following reasons:\nIf a user accounts password ever needs to change, due to enforced password policy, then you will need to update any credentials within Power BI Online manually. 
For multiple reports, this could take a considerable amount of time to do each month. If the user account is ever disabled or their permissions revoked, then likewise, you would need to go through and update every affected report. The user account in question will likely have access to other services or privileged access levels; therefore, using this does not conform to the \u0026ldquo;least privileged\u0026rdquo; account security principle. With this in mind, I would advise that you take advantage of the API access capabilities within the tool and generate an API key for the service; then, you can modify the authentication method accordingly to use this API key. There is a detailed blog post from Phidiax that goes through each of the steps involved to get this working successfully within Power BI Desktop using Anonymous authentication, however, for some strange reason, this causes the following error when your report is deployed onto Power BI Online:\nPerhaps Microsoft has changed something in the backend since 2017, which is why this no longer works. The way to get around this issue is twofold:\nFirst, leave your extracted M code query from Application Insights unaltered. For authentication methods, ensure that Basic is selected and populate the User Name field with the API Key value generated from Application Insights, as indicated in the screenshot below. As part of this, you should also ensure that the Privacy Settings for your data source are set appropriately.\nIt\u0026rsquo;s nice that a solution is available, to allow us to work with API Keys when connecting to Application Insights from Power BI. Unfortunately, I cannot take full credit for it, and must instead give all thanks to my esteemed developer colleague Andrew Bennett for figuring out this particular issue. Thanks, Andrew! 🙂\n","date":"2019-06-30T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/utilising-the-application-insights-api-key-with-power-bi/","title":"Utilising the Application Insights API Key with Power BI"},{"content":"A few weeks ago on the blog, we saw how it was possible to configure Azure Active Directory (AAD) Security Groups as an access mechanism for your Azure SQL Databases. Depending on how this feature is utilised, it has enormous potential to reduce the amount of administrative effort involved when managing access to your databases at scale. Also, the solution can even help to increase security, by ensuring your database users are authenticating with their AAD accounts instead of separate/shared accounts to carry out their daily tasks.\nA natural consequence of introducing this functionality is that you may need to modify how you deploy your database changes if you are using source control or a Visual Studio database project as a central repository for your entire database schema. Typically, you would use the SQL Server built-in admin account or a similarly privileged account to create all Data Manipulation Language (DML), Data Definition Language (DDL) etc. components during a data-tier application package (DACPAC) deployment. However, in this case, any attempt to create an EXTERNAL PROVIDER user account will likely lead to the following error occurring:\nError SQL72014: .Net SqlClient Data Provider: Msg 33159, Level 16, State 1, Line 1 Principal \u0026lsquo;My AAD Security Group\u0026rsquo; could not be created. 
Only connections established with Active Directory accounts can create other Active Directory users.\nFortunately, in this case, the error message is pretty helpful and, if deploying your database changes out via an Azure DevOps Pipeline, we can modify our build pipeline to accommodate this change, using the Active Directory - Password Authentication Type option:\nSo far, so good - however, when we attempt to run this deployment task through, we get the following error message\nAppearances can be deceiving on this one - the error would seem to suggest that the incorrect username or password values have been supplied, but even after triple-checking these values and even hardcoding them in, the same error still occurs. Only after some research online and a dig through the following Visual Studio Developer Community post do we discover that the current Azure SQL Database deployment task is not actually supported for use alongside AAD credentials. This is somewhat confusing, given that it is there as an available option currently. It looks as if this is something that Microsoft is actively looking into but, in the meantime, courtesy of a Stack Overflow answer from Murray Foxcroft, we can instead use a connection string value on the Azure SQL Database deployment task. The steps involved here are pretty straightforward:\nFirst, build out your connection string, using the example below as your template; replace any values surrounded by chevrons (\u0026lt;\u0026gt;) with the correct ones for your database/server: Server=.database.windows.net;Initial Catalog=;Persist Security Info=False;User ID=;Password=;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Authentication=Active Directory Password Next, on the Variables tab of your pipeline, create a new secret variable called sql-connstring and populate it with the connection string value from the previous step, as indicated below: On the Tasks tab, navigate to your Azure SQL Database deployment task, and change the Authentication Type value to Connection String. Then, enter $(sql-connstring) into the Connection String field. It should resemble the screenshot below if done correctly: From there, you should be good to go - connections to the database should authenticate successfully, and any EXTERNAL PROVIDER objects will get created successfully 🙂\nIt\u0026rsquo;s bizarre that the current Active Directory - Password option does not work at all, particularly given that its equivalent option for database deployments within Visual Studio works without issue. From the sounds of it, the issue is being actively looked at by Microsoft and we should (hopefully!) have an updated version of the Azure SQL Database deployment task available soon that works correctly. 
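As an aside, if you would like to sanity-check that the AAD account and connection string authenticate correctly before wiring them into the pipeline as a secret variable, a short C# console sketch along the following lines can help. This is purely illustrative - it assumes the Microsoft.Data.SqlClient NuGet package (which understands the Active Directory Password authentication keyword) and uses placeholder server, database and credential values:

using System;
using Microsoft.Data.SqlClient;

class ConnectionStringCheck
{
    static void Main()
    {
        // Placeholder values - substitute your own server, database and AAD credentials
        var connectionString =
            "Server=myserver.database.windows.net;Initial Catalog=MyDatabase;" +
            "Persist Security Info=False;MultipleActiveResultSets=False;" +
            "Encrypt=True;TrustServerCertificate=False;" +
            "Authentication=Active Directory Password;" +
            "User ID=jane.doe@contoso.com;Password=MyPassword";

        using (var connection = new SqlConnection(connectionString))
        {
            // Open() will throw an AADSTS error if the credentials or tenant configuration are wrong
            connection.Open();
            Console.WriteLine("Authenticated successfully against " + connection.Database);
        }
    }
}

If the same credentials fail here with an AADSTS error, the problem will lie with the account or tenant configuration rather than with the pipeline itself.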
In the meantime, it\u0026rsquo;s good to know that there is an easily implementable workaround available, that can be secured appropriately via the Azure DevOps Pipelines interface.\n","date":"2019-06-23T00:00:00Z","image":"/images/Azure-Pipelines-e1557238792964.png","permalink":"/resolving-aadsts50126-invalid-username-or-password-errors-during-azure-sql-database-deployment-task-azure-devops-pipelines/","title":"Resolving AADSTS50126: Invalid username or password Errors During Azure SQL Database Deployment Task (Azure DevOps Pipelines)"},{"content":"Typically, when working with a database/application to programmatically create new records, it is desirable for us to return some information relating to any newly created records; this would typically take the form of a Globally Unique Identifier (GUID), primary key value or something else that uniquely identifies the record in question. Doing this may be useful for several reasons:\nBy returning this type of data, we can implicitly confirm that the record has been created successfully, as such a value would not exist otherwise. There may be some additional operations that need to be carried out against the new record, post-creation. By returning its unique identifier, we can carry out such activities without any further impediment and do not need to rely on a separate retrieval operation to obtain this property. The unique identifier may be required elsewhere within our application. For example, we may want to store the unique identifier within a separate system, to satisfy any current/future data integration requirements. Dynamics 365 Customer Engagement (D365CE) is no different in this regard, and you traditionally may have had to devote some effort to accommodate this scenario. Fortunately, if you are working with tools such as Jason Lattimer\u0026rsquo;s CRM REST Builder, a lot of the hassle involved here becomes virtually non-existent, as you can very quickly generate code snippets that not only perform your entire create record operation but also return the records GUID after the transaction has completed successfully. An example of the type of code that the tool can generate is seen below:\nvar entity = {}; entity.name = \u0026#34;CRM Chap\u0026#34;; var req = new XMLHttpRequest(); req.open(\u0026#34;POST\u0026#34;, Xrm.Page.context.getClientUrl() + \u0026#34;/api/data/v9.1/accounts\u0026#34;, true); req.setRequestHeader(\u0026#34;OData-MaxVersion\u0026#34;, \u0026#34;4.0\u0026#34;); req.setRequestHeader(\u0026#34;OData-Version\u0026#34;, \u0026#34;4.0\u0026#34;); req.setRequestHeader(\u0026#34;Accept\u0026#34;, \u0026#34;application/json\u0026#34;); req.setRequestHeader(\u0026#34;Content-Type\u0026#34;, \u0026#34;application/json; charset=utf-8\u0026#34;); req.onreadystatechange = function() { if (this.readyState === 4) { req.onreadystatechange = null; if (this.status === 204) { var uri = this.getResponseHeader(\u0026#34;OData-EntityId\u0026#34;); var regExp = /\\(([^)]+)\\)/; var matches = regExp.exec(uri); var newEntityId = matches[1]; } else { Xrm.Utility.alertDialog(this.statusText); } } }; req.send(JSON.stringify(entity)); In this example, lines 14-17 of the snippet handle the process of returning the newly created records GUID, with some specific steps required to ensure this renders nicely. This is because D365CE returns the newly created record as part of a complete URL by default (e.g. 
the above example, unmodified, would return something like https://mycrminstance.crm11.dynamics.com/api/data/v9.1/accounts(711cd182-0f90-e911-a97b-002248014773)). Therefore, a Regular Expression (RegEx) is used to strip out all of the URL components and, instead, supply us with the record GUID as part of the newEntityId variable. VERY handy and another reason why we should all be thanking Jason for building such an amazing tool.\nWhile this is all well and good if you are working with the Web API using JScript, there may be other programming languages that you wish to utilise alongside D365CE\u0026rsquo;s Web API. A natural example of this is C# and, fortunately, there are also code samples freely available to help simplify development. However, these samples do not implement the same kind of transformation that Jason\u0026rsquo;s tool very kindly does by default. So how can we replicate this functionality using C#? Thanks mostly to the excellent Regular Expression engine built directly into .NET, the task of porting across similar functionality to C# is greatly simplified. By adding a using System.Text.RegularExpressions; statement to the top of our C# class file, we can then use the following three lines of code to extract out the CRM record GUID as a string value (where entityUri is the same URL example shown above):\nRegex rx = new Regex(@\u0026#34;\\(([^)]+)\\)\u0026#34;); Match match = rx.Match(entityUri); string entityID = match.Groups[1].Value; (Using match.Groups[1].Value, rather than match.Value, ensures that only the GUID itself is returned, without the surrounding parentheses.) From there, we could then look to cast the entityID string into a Guid data type to, for example, associate the record to another within the application, update our external application system, print it out to a log file/console window\u0026hellip;the possibilities are endless! What matters most is that we can get something unique that identifies our newly created D365CE record and all it took was a few lines of code in C# - which is always nice. 🙂\n","date":"2019-06-16T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/extract-new-record-guid-from-dynamics-365-customer-engagement-create-web-api-request-c/","title":"Extract New Record GUID from Dynamics 365 Customer Engagement Create Web API Request (C#)"},{"content":"Balancing the needs for security and convenience for user accounts within businesses/organisations can be a real challenge. Typically, it is preferred that individuals have separate accounts, each with their own unique and complex passwords, for the various services that they need to access as part of their daily work. In reality, such an approach will almost always lead to poor password security, increasing the risk that an attacker can penetrate an environment using a single compromised password. In this case, looking at a solution that supports Single Sign-On (SSO) capability, such as Azure Active Directory (AAD), can help not just minimise the proliferation of multiple, weak login identities, but also allow you to enforce consistent policies for password complexity and expiry, and introduce additional security steps via Multi-Factor Authentication (MFA). And, what\u0026rsquo;s even better, these solutions can increase convenience for end users. For me, that is undoubtedly the best kind of solution. 🙂\nMicrosoft, as you may expect, embed SSO capability within a vast majority of their products, and Azure SQL Server/Database is no exception to this rule. With minimal configuration, you can add AAD users onto an Azure SQL database and manage them as if they were a standard SQL Server user account. 
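For an individual account, for example, this is as simple as creating a contained database user from the external provider and granting it whatever roles it needs - a minimal sketch, using a hypothetical jane.doe@contoso.com user and mirroring the style of script we will see later in this post:

-- Create a contained database user for a single AAD account (hypothetical example)
CREATE USER [jane.doe@contoso.com] FROM EXTERNAL PROVIDER WITH DEFAULT_SCHEMA = dbo
GO
-- Grant the minimum access the account requires
GRANT CONNECT TO [jane.doe@contoso.com]
GO
EXEC sp_addrolemember 'db_datareader', 'jane.doe@contoso.com'
GO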
Traditionally, this would be done by defining each user account that needs access at the database level - not so bad if you are dealing with a handful of users, but what if you need to grant hundreds of users access to a single Azure SQL database? Also, if you are working across multiple AAD tenants (to, for example, manage changes across your various environments), the solution starts to fall on its face straight away. This is because you must specify the full, unique user name of each AAD account that you want to add to the database - which, as we have seen previously on the blog, will always be different, even if you are using guest accounts across multiple tenants. Fortunately, there is a way to get around this, using what looks like an undocumented feature. AAD Security Groups can be added as login principals to an Azure SQL database in the same manner as AAD user accounts and - most critically - granted the required permissions for your database. For managing database access at scale, it really is a worthwhile feature to have at our disposal and, in addition, can be extremely helpful if you are deploying your database onto separate AAD tenants, as all you need to do is simply ensure a security group with the same name exists within each tenant with its required members.\nTo get started using this feature, you need first to ensure that you have the following setup within your Azure environment:\nRather obviously, you need an Azure SQL Server resource with an associated database. In addition, you need to make sure that you have enabled an Active Directory admin at the server level - this will typically be a two-part step, where you first define an appropriate AAD account for this and then associate it with your server instance. An appropriate security group object must exist already within the AAD tenant. 
The example screenshot below illustrates the settings that are needed for this, although you can (optionally) add in any members once the security group has been created: When all of the above is sorted, login to your SQL Server instance using the Global AAD Administrator account and run the following script against your target database - be sure to change the user account name so that it matches exactly against the security group that exists within your AAD tenant:\nCREATE USER [AAD SSO Test] FROM EXTERNAL PROVIDER WITH DEFAULT_SCHEMA = dbo GO GRANT CONNECT TO [AAD SSO Test] GO EXEC sp_addrolemember \u0026#39;db_datawriter\u0026#39;, \u0026#39;AAD SSO Test\u0026#39;; GO EXEC sp_addrolemember \u0026#39;db_datareader\u0026#39;, \u0026#39;AAD SSO Test\u0026#39;; GO If you get an error message similar to the one below, then you may need to check that the user name matches exactly against what is within the AAD tenant:\nWe can verify that the account has been created successfully by running the following query, which also confirms that the account type is that of EXTERNAL_GROUP as opposed to EXTERNAL_USER, for users created from a single AAD account:\nSELECT name, type_desc, authentication_type_desc FROM sys.database_principals WHERE type_desc = \u0026#39;EXTERNAL_GROUP\u0026#39; [snippet id=\u0026ldquo;778\u0026rdquo;]\nAll that\u0026rsquo;s left is to perform a test login using SSMS; in this case, we use a secondary janedoe account that has been added to the Security Group on Azure:\nTo login successfully, we need to make sure a couple of options are configured within SSMS:\nNaturally, we need to ensure that the correct authentication method is selected, which is dictated by the overall configuration of your AAD tenant: If you are using Active Directory Federation Services (ADFS), then select the Active Directory - Integrated option. The user will not be prompted to enter any credentials and should instead pick this up from the local domain account. If the account in question is enabled for MFA, then select Active Directory - Universal with MFA Support and enter the email address of the account in question. This will launch a separate pop-up window, with the appropriate MFA challenge (where applicable) after logging in. If neither of the above applies or if in doubt, then select the Active Directory - Password option, entering your login details in the appropriate fields. Since the account will be created as a contained database user, we need to select the Connection Properties tab and ensure the name of the database we are connecting to is entered in the Connect to database field - so, in the example below, the database name used is JGTest: All being well, our janedoe account can log in without issue and, per the earlier script, should have full read/write access to the entire dbo schema in the database:\nConclusions or Wot I Think Managing SSO at scale can prove to be a real challenge, so having features like the one described in this post can be a real boon in neatly managing this complexity. As mentioned previously, it doesn\u0026rsquo;t appear that the feature is well-documented either, which is why I thought I would share it out to increase awareness of it. 
With minimal setup involved and - most crucially - without the need to perform frequent updates to your database whenever an individual leaves an organisation, you can implement a role-based security model for your Azure SQL databases that can be freely modified within the Azure portal at will, with zero disruption to your database users or application.\n","date":"2019-06-09T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/managing-azure-sql-sso-with-azure-active-directory-security-groups/","title":"Managing Azure SQL SSO with Azure Active Directory Security Groups"},{"content":"Regular readers of the blog may have noticed that the past couple of posts have been very Azure Data Factory V2 (ADF) focused, particularly in the context of Dynamics 365 Customer Engagement (D365CE) and the Common Data Service (CDS). I\u0026rsquo;ve provided an overview of the different connectors available today for both of these applications and also discussed some of the hurdles you may find when it comes to mapping data across to your desired fields within D365CE/CDS. Today\u0026rsquo;s post naturally follows on from these subjects, as we aim to take a look at some of the mapping limitations currently in place when importing data using the D365CE/CDS connectors.\nAs highlighted on Microsoft\u0026rsquo;s official article on the subject, the vast majority of field types available within D365CE/CDS can be mapped to a destination (or \u0026ldquo;sink\u0026rdquo; location). These, for the most part, involve the standard data types that .NET developers will be familiar with and if, for example, you want to map data from a SQL Server data source into D365CE/CDS, you should have little difficulty implementing these mappings. However, two important attribute types within D365CE/CDS are not supported for mapping, under any circumstance. And, although they may only cause you difficulties in the most complex integration scenarios, if you are working with out of the box system entities using pre-built relationships or are wanting to perform record assignment changes within your ADF pipelines, you may find yourself hitting a brick wall. The next two sections look at these attribute types in greater detail, as well as offering a suggested route to work with them alongside ADF.\nAttributeType.Customer This attribute type is more commonly known as the Customer lookup field, a special kind of relationship that allows a user to associate either a Contact or an Account record with another Entity record. This data type has traditionally been used extensively by various system entities within the application and, more recently, has been exposed out for system customizers to create as well. Further details about this special data type can be found on the Microsoft Docs website. Given its somewhat strange behaviour, from an entity-relationship standpoint at least, we can guess as to why this data type is not supported within ADF; perhaps because we have no way of telling CRM, as part of a copy activity, whether a supplied entity name or GUID belongs to a specific entity type. 
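To illustrate the problem, consider how the same association has to be expressed when working with the SDK directly - a minimal C# sketch, assuming an authenticated IOrganizationService instance and using the standard customerid field on the Case (incident) entity:

using System;
using Microsoft.Xrm.Sdk;

public static class CustomerLookupExample
{
    // Assumes an authenticated IOrganizationService instance is passed in
    public static Guid CreateCaseForCustomer(IOrganizationService service, Guid accountId)
    {
        var incident = new Entity("incident");
        incident["title"] = "Customer lookup example";

        // A Customer attribute needs the target entity's logical name as well as the GUID;
        // the very same field would also accept new EntityReference("contact", contactId).
        incident["customerid"] = new EntityReference("account", accountId);

        return service.Create(incident);
    }
}

An ADF copy activity has no equivalent way of passing that logical name alongside the GUID, which is ultimately what the workaround described below has to compensate for.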
However, this limitation may cause some frustration, particularly if you are performing a migration from an on-premise Dynamics CRM instance to online, and you wish to keep in place any existing relationships from your source system.\nIf you are completely set on using this field as intended and to import data into it (which makes sense, given that it is relied upon for many out of the box components within D365CE), then a possible workaround can be implemented with a mixture of D365CE/CDS entity customisations and a Microsoft Flow to perform most of the heavy lifting:\nFirst, create two temporary lookup fields, to the Account and Contact entities respectively, on the entity that you wish to import Customer lookup data into: Within your ADF pipeline flow, you will then need to map the GUID values for your Account and Contact fields to the respective lookup fields created above. The simplest way of doing this is to have two separate columns within your source dataset - one containing Account GUIDs to map and the other, Contact.\nThen, finally, you can put together a Microsoft Flow that performs the appropriate mapping from the temporary fields to the Customer lookup field. First, define the trigger point for when your affected Entity record is created (in this case, Contact) and add on some parallel branches to check for values in either of these two temporary lookup fields: Then, if either of these conditions is hit, set up an Update record task to perform a single field update, as indicated below if the ADF Account Lookup field has data within it (swap the value of the Company Name Type to contacts for the second branch): You can download a copy of the above flow using this link, thereby allowing you to import and modify it within your environment quickly.\nAttributeType.Owner This is one limitation that, I think, has the potential to cause the most issues and, similar to the Customer field type, can be explained by the twin nature of this field type. The Owner field fulfils a self-evident purpose within the application - namely, to tell us which particular User or Team within the application owns a record. It, therefore, behaves similarly to the already discussed Customer field and, likewise, there is no clear way to tell whether a supplied GUID or Name value is for one entity or the other. And, as you may expect, the workaround solution is largely similar:\nCreate two lookup fields on your target entity to both the User and Team entity, respectively: Then, on the ADF side, do the same as the Customer field type and segregate out your User/Team record GUIDs into separate fields. Finally, we can redevelop the previous Flow to trigger using these new fields instead. A copy of this flow can be downloaded here; it is a carbon copy of the example shown earlier, with some minor alterations. Conclusions or Wot I Think The limitations currently in place are entirely understandable, once you become familiar with the unique behaviour of these field types when compared to others within D365CE/CDS. But they are not necessarily insurmountable. As the CDS connector for Flow has demonstrated, by supplying an additional, valuable piece of information - the specific name of the entity that you wish to map to - you can provide the application with all of the information it needs to correctly associate the records together during an update or import operation. 
Having this capability replicated within an ADF copy data activity would, at a minimum, introduce a very desirable piece of functionality. Fortunately, I\u0026rsquo;m not the only one to think so, as there is a User Voice request out currently requesting this specific feature. I would encourage you to upvote this if you would also like to see this feature added in future. Notwithstanding these two specific limitations, I do still believe ADF is worth consideration when attempting to solve more sophisticated data integration requirements, whether they involve D365CE/CDS or not. The solution provides a rich array of cloud-hosted, consumption-modelled capability to allow you to quickly and cheaply implement Extract, Transform \u0026amp; Load (ETL) processes between a wide variety of different locations.\n","date":"2019-06-02T00:00:00Z","image":"/images/ADF-FI.png","permalink":"/sink-limitations-with-the-dynamics-365-customer-engagement-common-data-service-connector-for-azure-data-factory/","title":"Sink Limitations with the Dynamics 365 Customer Engagement/Common Data Service Connector for Azure Data Factory"},{"content":"When contemplating migrating data into Dynamics 365 Customer Engagement (D365CE), a necessary task will involve determining the appropriate data field mapping that needs to occur from any existing system you are working with. Whether this means an on-premise version of the application (or its earlier iteration, such as Dynamics CRM), a competitor product or even a SQL Server database instance, this exercise is essential for several reasons, as it will:\nAct as a proxy Fit/Gap exercise for any potential work required within the application to add missing fields for data capture. Enable you to begin the necessary data template preparations, based on the fields needed from your source system and D365CE. Allow organisations to determine whether specific database/application fields are still required at all and, if not, simplify things by removing them entirely. As part of this work, it is also likely that you will determine which tool you would like to use to manage the whole data import exercise. For simple scenarios, which require minimal data transformation activities, the out of the box data import facility will more than likely suit your requirements. Alternatively, if you are feeling somewhat confident with your Power Query capabilities, you could even take the Common Data Service import tool for a whirl (which I anticipate will eventually replace the out of the box experience). For more complex Extract, Transform and Load (ETL) requirements, though, it is likely that you will need to turn to more sophisticated tools available from vendors such as Scribe Online or Kingswaysoft. Another viable option for consideration is Azure Data Factory V2 (ADF) which, as we have seen previously on the blog, has a fully supported connector available and ready to use as an import destination or data source. Although there is likely some development time that needs to be invested in building a solution using this product, it is a far cheaper alternative to the vendor solutions already discussed and, for development teams actively embracing DevOps, provides a far more integrated experience.\nWhen first getting familiar with the D365CE connector within the service, a necessary step will involve importing the data schema - basically, the list of fields and their appropriate data types - from the application. 
This process is straightforward, thanks to the Import schema option available to us when configuring our dataset, which produces the following schema when connecting to the Account entity within the application:\nAnd, as with everything within ADF, our schema definition is also defined and viewable as a JSON object behind the scenes, as indicated in the (adapted) example below:\n{ \u0026#34;name\u0026#34;: \u0026#34;CDS_Account\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;linkedServiceName\u0026#34;: { \u0026#34;referenceName\u0026#34;: \u0026#34;CDSLinkedService\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;LinkedServiceReference\u0026#34; }, \u0026#34;folder\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;CDS\u0026#34; }, \u0026#34;type\u0026#34;: \u0026#34;DynamicsEntity\u0026#34;, \u0026#34;structure\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;address2_addresstypecode\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;merged\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Boolean\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;statecode\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;emailaddress1\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;openrevenue_state\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;name\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;opendeals\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;address1_postalcode\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, ... ], \u0026#34;typeProperties\u0026#34;: { \u0026#34;entityName\u0026#34;: \u0026#34;account\u0026#34; } }, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.DataFactory/factories/datasets\u0026#34; } This circumstance is all well and good if we intend to import data into all of these fields within our pipeline. However, if we only attempt a partial field mapping and miss out any fields not defined within the data source schema, your pipeline will generate a \u0026ldquo;Column defined in the sink schema is not mapped in the mapping\u0026rdquo; validation error and it will not save/run successfully:\nIn addition to this, as you may have noticed already, the connector has a very specific list of fields that it returns and, more crucially, not all of them - only 62 out of the potential 153 fields on the Account entity by default - and no custom fields which may have been created within the instance (even those marked as Business Required). So does this mean that ADF is limited to only importing data into the 62 fields provided by default using the Import schema option? Thankfully, the answer to both of these is a resounding \u0026ldquo;No!\u0026rdquo; and we can customise the schema of our D365CE connector to our heart\u0026rsquo;s content, thereby allowing us:\nDelete any field not required as part of our mapping by selecting the appropriate fields and then pressing the Delete button. For example, the screenshot shows how it is possible to remove the merged and address2_addresstypecode fields from within the ADF interface: Alternatively, you can remove the appropriate mappings from within the underlying JSON definition. 
Therefore, an updated version of the above example would, therefore, resemble the following after modification: { \u0026#34;name\u0026#34;: \u0026#34;CDS_Account\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;linkedServiceName\u0026#34;: { \u0026#34;referenceName\u0026#34;: \u0026#34;CDSLinkedService\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;LinkedServiceReference\u0026#34; }, \u0026#34;folder\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;CDS\u0026#34; }, \u0026#34;type\u0026#34;: \u0026#34;DynamicsEntity\u0026#34;, \u0026#34;structure\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;statecode\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;emailaddress1\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;openrevenue_state\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;name\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;opendeals\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;address1_postalcode\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, ... ], \u0026#34;typeProperties\u0026#34;: { \u0026#34;entityName\u0026#34;: \u0026#34;account\u0026#34; } }, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.DataFactory/factories/datasets\u0026#34; } Add on any missing system or custom fields that are required as part of our import process using the New column button. When defining each field, it is important that the Name value is used for each field and that the correct data type is selected. The example screenshot below provides an example of all the possible field types that can be added on in this manner: And, for those who prefer working with code, the JSON definition for a schema containing only these fields would look like this: { \u0026#34;name\u0026#34;: \u0026#34;CDS_Account\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;linkedServiceName\u0026#34;: { \u0026#34;referenceName\u0026#34;: \u0026#34;CDSLinkedService\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;LinkedServiceReference\u0026#34; }, \u0026#34;folder\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;CDS\u0026#34; }, \u0026#34;type\u0026#34;: \u0026#34;DynamicsEntity\u0026#34;, \u0026#34;structure\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;aging90\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Decimal\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Currency\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;overriddencreatedon\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;DateTime\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Date and Time\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;exchangerate\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Decimal\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Decimal Number\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;address1_longitude\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Double\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Floating Point Number\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;entityimage\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Default Image. 
String must be Base64\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;originatingleadid\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Guid\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Lookup\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;description\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Multiple Lines of Text\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;customertypecode\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Option Set\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;accountid\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Guid\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Primary Key\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;telephone1\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Single Line of Text\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;statecode\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Status\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;statuscode\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Status Reason\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;versionnumber\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int64\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Time Stamp\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;donotemail\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Boolean\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Two Options\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;stageid\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Guid\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Unique Identifier\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;opendeals\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;D365CE Data Type = Whole Number\u0026#34; } ], \u0026#34;typeProperties\u0026#34;: { \u0026#34;entityName\u0026#34;: \u0026#34;account\u0026#34; } }, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.DataFactory/factories/datasets\u0026#34; } Once customised in this manner, the pesky validation error message shown earlier will vanish entirely and you open up a whole range of additional functionality, such as being able to:\nMap to any custom field defined within your D365CE entities. Define the Globally Unique Identifier (GUID) of the record imported into the application to, for example, match the same GUID value it has within your source dataset. Utilise the overridencreatedon field to set a custom value for the Created On field to match its original value within your source dataset. This can be particularly useful as part of a migration from an on-premise Dynamics CRM deployment, and you are reliant on this field for reporting or for any custom business logic within the application. 
With a bit of tinkering, ADF once again, I feel, proves its worth and is a tool I would recommend you take a look at for any data integration requirements involving D365CE\n","date":"2019-05-26T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/working-with-custom-dynamics-365-customer-engagement-dataset-schemas-in-azure-data-factory-v2/","title":"Working with Custom Dynamics 365 Customer Engagement Dataset Schemas in Azure Data Factory V2"},{"content":"I must admit, first of all, that I missed the announcement around this at the time. However, after doing some playing around with the new Solution Checker feature within the Common Data Service, after a very informative User Group presentation from MVP extraordinaire Andrew Bibby, I was very quickly brought up to speed and thought this whole topic would be a great subject for this week\u0026rsquo;s blog post.\nInvolving custom code, either via a C# plug-in or a JScript form function, can throw open a whole smorgasbord of additional functionality to meet your business requirements within Dynamics 365 Customer Engagement. Although I am a huge proponent of extending the application in this manner, developers should always fully evaluate the range of out of the box features available within the application before they start resorting to code. Business Rules are a good example of a feature that may get overlooked in this regard and, with the range of options they offer, are much easier to maintain/support when compared with a custom JScript function. Custom code can also become a nightmare as part of any upgrade testing between major versions, meaning that you have to dedicate additional time for testing to ensure that nothing breaks as a consequence. Chiefly, this is because Microsoft will, from time to time, remove API features entirely or replace them with \u0026ldquo;better\u0026rdquo; routes to achieve the same end. Microsoft is typically very good at signposting when a feature is going onto the scrapheap, and you will almost always be given the period between at least two major versions of an application release to address any issues. It\u0026rsquo;s therefore crucial for developers to keep themselves abreast of any relevant release notes and take proactive steps to fix their code whenever a particular method, endpoint etc. is deprecated; and, given that I missed this one entirely, I include myself in this category as well 🙂\nAs part of version 9 of Dynamics 365 Customer Engagement, the primary method for accessing various properties via JScript on a form within the application - Xrm.Page - is now deprecated. Developers must now instead access all properties relating to the page by passing the executionContext object to their functions. Then, with some additional re-tinkering around, the entire range of form properties we are used to working with are fully exposed. I can imagine the Xrm.Page object is used across a whole range of different deployments today and, therefore, has the potential to cause some future anguish if not properly addressed. But how exactly do you go about addressing this and getting your code up to spec? 
Let\u0026rsquo;s assume we have the following JScript function for the Contact entity that was developed a few years ago using the Xrm.Page object:\nfunction changeAddressLabels() { Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); } The function is straightforward, and long-standing readers of the blog may remember this bit of code, which modifies the display name of the composite control Address 1/Address 2 fields on the Contact to become more regionally relevant for UK users of the application. When we look at the configuration of the function in more detail within an existing Dynamics 365 Customer Engagement deployment, we observe that a) it accepts and receives no parameter values and b) is configured on the Main form for the Entity as an OnLoad function, with the following properties:\nTo get things updated to use the new executionContext object, we must make the following changes:\nPass the executionObject as a parameter for the changeAddressLabels function. Get the formContext from the executionObject parameter and store this within a separate variable. Replace all references to the Xrm.Page object with the newly declared formContext variable. Because the script will error if the fields are not located on the form, we need to also add in some additional logic that will only attempt to rename each field if it exists on the form. Modify the above event handler on the Contact Main form to tick the Pass execution context as first parameter checkbox. 
Our updated code should, therefore, resemble the following:\nfunction changeAddressLabels(executionContext) { //Get formContext var formContext = executionContext.getFormContext(); //Check to see if the control is on the form and, if so, rename it accordingly. if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line1\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line2\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line3\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_city\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_stateorprovince\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_postalcode\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); if (formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_country\u0026#34;)) formContext.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line1\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line2\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line3\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_city\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_stateorprovince\u0026#34;))\tformContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); if (formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_postalcode\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); if 
(formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_country\u0026#34;)) formContext.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); } And, likewise, our above Handler Properties window should look like this:\nSo, as we can see, the amount of effort involved is not too great and, even if you have to carry out these changes to multiple functions, I\u0026rsquo;m fairly confident that find and replace can be innovatively utilised to help you along. You can find out more about this impending change, along with some useful code examples, by reading the dedicated Microsoft Docs page on this very subject.\n","date":"2019-05-19T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/xrm-page-object-deprecation-check-your-dynamics-365-customer-engagement-jscript/","title":"Xrm.Page Object Deprecation: Check your Dynamics 365 Customer Engagement JScript"},{"content":"Version 2 of Azure Data Factory is the product that keeps getting better and better. With a whole range of features available currently which, arguably, places the product at a comparable feature parity to SQL Server Integration Services (SSIS), it is worth a look when you have a demanding data integration requirement. And, as we have seen on the blog previously, it is remarkably easy to incorporate any Data Factory development work as part of your existing DevOps processes. If you\u0026rsquo;re looking for an easy to set up, inexpensive means of fulfilling an Extract, Transform and Load (ETL) type scenario, I think you\u0026rsquo;d be hard pressed to find a similar service available today to meet your needs. Additionally, for those who are still wedded to any existing SSIS workloads, Data Factory has full support for hosting and running .dtsx packages within the cloud, with minimal alteration required.\nSomewhat surprisingly, amongst the list of over 80 + Linked Service connectors available for Data Factory, there are three specifically available for Dynamics 365 Customer Engagement (AKA Dynamics CRM):\nDynamics 365 Dynamics CRM Common Data Service for Apps When reviewing the entire list of connectors from the Microsoft Docs website and attempting to navigate to the relevant page for the above Linked Services, we are redirected to the same page for each connector. This fact would seem to indicate that there is no practical difference between each one, a suspicion that can be confirmed when reviewing the configurable properties for the Dynamics CRM and Dynamics 365 connectors within Data Factory:\nRather bizarrely as well, the Common Data Service for Apps connector has the same Deployment Type option, allowing you to indicate an on-premise environment. Perhaps I missed the memo on CDS V2 being released for on-premise too 🙂\nSo, at this stage, it does beg the question - does it matter which connector you use?\nTypically, I would advise choosing the connector that suits your particular environment, which would, therefore, dictate the following:\nIf you are running an on-premise version of Dynamics CRM 2016 or earlier, then go for the Dynamics CRM. If you are running an 8.2 or 9 version of Dynamics 365, either on-premise or online, then the Dynamics 365 connector is your best bet. 
For online-only instances running the latest version (9.1 at the time of writing), then Common Data Service for Apps makes the most logical sense as, unless you are already aware, this is basically what the underlying CRM database is now moving forward. However, in this particular scenario, I\u0026rsquo;m not too sure if it matters at all. Here\u0026rsquo;s why: the next stage, once your appropriate Linked Service is ready, is to define the corresponding dataset that you wish to access. For databases, this will typically be the underlying table/view that you wish to expose, and the Dynamics/Common Data Service for Apps connectors are no different in this regard. So, for example, to access the Account entity, you would need to create a corresponding dataset for this. For all three cases, this will load a dataset with the referenceName value of Dynamics365LinkedService, as indicated in the below screenshot and (shortened) code example:\n{ \u0026#34;name\u0026#34;: \u0026#34;D365_Account\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;linkedServiceName\u0026#34;: { \u0026#34;referenceName\u0026#34;: \u0026#34;Dynamics365LinkedService\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;LinkedServiceReference\u0026#34; }, \u0026#34;folder\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;D365\u0026#34; }, \u0026#34;type\u0026#34;: \u0026#34;DynamicsEntity\u0026#34;, \u0026#34;structure\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;address2_addresstypecode\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;merged\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Boolean\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;statecode\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Int32\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;emailaddress1\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; } ... ], \u0026#34;typeProperties\u0026#34;: { \u0026#34;entityName\u0026#34;: \u0026#34;account\u0026#34; } }, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.DataFactory/factories/datasets\u0026#34; } When creating our Linked Service using any of the connectors listed above, our initial suspicions from the Microsoft Docs are confirmed - they are listed with a type value of Dynamics:\n{ \u0026#34;name\u0026#34;: \u0026#34;LinkedService\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.DataFactory/factories/linkedservices\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;Dynamics\u0026#34;, \u0026#34;typeProperties\u0026#34;: { \u0026#34;deploymentType\u0026#34;: \u0026#34;Online\u0026#34;, \u0026#34;serviceUri\u0026#34;: \u0026#34;https://mycrminstance.crm11.dynamics.com\u0026#34;, \u0026#34;authenticationType\u0026#34;: \u0026#34;Office365\u0026#34;, \u0026#34;username\u0026#34;: \u0026#34;jsmith@domain.com\u0026#34;, \u0026#34;encryptedCredential\u0026#34;: \u0026#34;mynotactuallyencryptedcredential\u0026#34; } } } So the answer is pretty obvious at this stage:\nWhere does this leave us? Given the fact that it is the same connector under the hood and, due to the whole Common Data Service for Apps element, I would argue it makes sense to stick to using this connector moving forward, regardless of what version of Dynamics 365/CRM you are using. Although I haven\u0026rsquo;t been in a position to test this (answers on a postcard if you have!), I would assume that this very same connector will work just fine and dandy for on-premise, internet-facing deployments of Dynamics CRM, regardless of version. 
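As a quick aside, if you prefer to manage your Data Factory artefacts outside of the portal, a linked service definition like the one above can be pushed out via the Az.DataFactory PowerShell module. The below is a minimal sketch only - the resource group, factory name and file path are placeholder values, and the JSON file is assumed to contain a definition with the same Dynamics type shown earlier:
# Minimal sketch: resource group, factory and file names are placeholders.
# The JSON definition file is assumed to contain a linked service with a
# "type" of "Dynamics", matching the example shown above.
Set-AzDataFactoryV2LinkedService -ResourceGroupName 'my-adf-rg' `
    -DataFactoryName 'my-data-factory' `
    -Name 'Dynamics365LinkedService' `
    -DefinitionFile '.\Dynamics365LinkedService.json'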
I would also predict that the other listed connectors for Dynamics CRM and Dynamics 365 will be removed eventually; if not for simplicity purposes, then certainly due to any end-of-lifecycle support for these applications.\nIn a way, any initial confusion that these multiple connectors present when first being evaluated does ultimately detract away from an important point - that Azure Data Factory can be straightforwardly implemented alongside Dynamics 365/CRM with minimum effort involved. This capability opens up a whole host of different use cases, enabling CRM developers to embrace a fully cloud-hosted data integration tool that can meet the requirements other products on the market today, such as Scribe Online and Kingswaysoft, have traditionally been relied upon to fulfil.\n","date":"2019-05-12T00:00:00Z","image":"/images/ADF-FI.png","permalink":"/dynamics-365-customer-engagement-connector-confusion-with-azure-data-factory/","title":"Dynamics 365 Customer Engagement Connector Confusion with Azure Data Factory"},{"content":"A key objective when it comes to implementing a Continuous Integration (CI) strategy is to ensure that there is no general degradation in the quality of code during a development cycle and in providing the earliest possible indication that a change in code has \u0026ldquo;broken the build\u0026rdquo;. Also, a lesser concern may be towards identifying the action or individual that has caused such a degradation. When described in this fashion, it may sound like some dispassionate witch-hunt exercise, but this couldn\u0026rsquo;t be further from the truth. As we are all ultimately human, we are bound to make mistakes at some point, and it is far better these are discovered during a development cycle as opposed to late on a Friday evening, shortly after a failed production system update 🙂\nFor larger teams, attempting to keep on top of all this can be a challenge, which is why Azure Pipeline automated builds within Azure DevOps is a natural solution to turn towards if you wish to manage some of these pressures. You can use the application to create a build definition for your projects within minutes and then define a variety of conditions that will invoke them - on a daily/hourly schedule, every time a change is made within a branch or even based on manual triggers. Automatic links to relevant work items in Azure Boards can also be created every time a build is triggered and, if a build fails, you can then automatically create a bug and assign it to the individual who requested it for further investigation. In short, you have a powerful arsenal of features at your disposal, all of which are available for free if you have fewer than six developers within your team.\nWhile a build pipeline that has a single task to build a Visual Studio solution file will, in most cases, find obvious code errors (e.g. a missing semi-colon), it is not of particular use in identifying obscure code errors. A good example of this is within an Azure resource template project, which has minimal capability from within Visual Studio or the mentioned task to identify template issues, even ones involving basic syntax errors. Fortunately, within Visual Studio, we have the opportunity to use the Validate option to contact Azure directly and check that our template doesn\u0026rsquo;t contain anything which could lead to a deployment failure:\nWe can see how effective this is by attempting to deploy a Microsoft.Web/serverfarm (i.e. 
an App Hosting plan) resource with an unsupported and very obvious key/value pair supplied which, despite building correctly within Visual Studio, causes an error on deployment:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;Parameters\u0026#34;: { }, \u0026#34;variables\u0026#34;: { }, \u0026#34;resources\u0026#34;: [ { \u0026#34;apiVersion\u0026#34;: \u0026#34;2015-08-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyWebAppHostingPlan\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/serverfarms\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[resourceGroup().location]\u0026#34;, \u0026#34;tags\u0026#34;: { \u0026#34;displayName\u0026#34;: \u0026#34;HostingPlan\u0026#34; }, \u0026#34;sku\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;F1\u0026#34;, \u0026#34;capacity\u0026#34;: 1, \u0026#34;thiswillerror\u0026#34;: \u0026#34;No really, it will.\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;MyWebAppHostingPlan\u0026#34; } } ], \u0026#34;outputs\u0026#34;: { } } Having this functionality available within Visual Studio is all well and good for individual developers, but if you have multiple individuals working on the same template, it becomes increasingly frustrating to carry this test out manually and frequently after any updated code has been checked in. Azure DevOps can come to the rescue here, because, as well as being able to automate your Azure template builds, you can also develop a pipeline that performs the same Validate action with your Azure template. We can even go a step further and deploy out Azure templates to a test environment as a precondition before the build succeeds. The steps involved are pretty straightforward:\nNavigate to your Azure DevOps project and select Pipelines -\u0026gt; Builds. This will open a list containing all build pipelines defined within your project. Select the + New button and then the New build pipeline option. We won\u0026rsquo;t be using YAML for this example, so select the Use the classic editor option at the bottom of the list: On the next screen, select the code repository where your Azure template project resides and then press Continue. On the Select a template screen, select the Empty Job button at the top of the list. You are then greeted with an empty Pipeline view, with the Tasks tab visible. Under Agent job 1, press the + icon and include the Azure Resource Group Deployment task for the pipeline by pressing the Add button: Populate the properties of the Task as follows: Display Name: Validate Azure Template Azure subscription: Select a valid Azure subscription which has a service connection with Azure DevOps. This may need configuring if not already in place. Action: Select the Create or update resource group option Resource group: The name of a valid Resource Group in Azure. This should be the same Resource Group where you intend to carry out your test deployment to. Template: Select the .json file containing your template. You can use the ellipses button next to the field name to browse through your linked repository to find this. Template parameters: Populate this with the repository file path of your parameter file, if being used. Deployment mode: Select the Validate option. 
Add on an additional Azure Resource Group Deployment task, configured with the same properties as the previous task, with the exception being the Deployment mode setting, which should be set to Incremental or Complete (be VERY careful with this second option, as it will delete any resource not existing in your template during the deployment if used). The name can also, optionally, be changed to something more descriptive. Add on a new Visual Studio Build task after the previously configured task. The only property that requires changing here is the Solution file path, which should be updated to the solution file containing your Azure template project. It\u0026rsquo;s also recommended, but not necessary, to rename the task accordingly. Finally, add on a Publish Build Artifact task. No further modifications are required for this task. Your build pipeline, if built out correctly, should resemble the below:\nWe now have a build pipeline in place that runs through the following steps, in order:\nValidate the supplied Azure template file, in the same manner to what we saw earlier within Visual Studio. This step will throw up obvious syntax issues, missing property values etc. Once validated, perform a test deployment of the resources to the Resource Group of your choosing. This is required as deployments can typically fail, due to misconfigured dependencies (e.g. only try and deploy a database to a server after the SQL server has deployed successfully, not before) or other issues relating to resource providers or your subscription. This step is also useful in being able to benchmark the expected deployment time for your template. With our template confirmed as valid and deploying out successfully, build our Azure template solution file so that we can then\u0026hellip; \u0026hellip;generate the appropriate build artifact, which can then be utilised by a release pipeline. The next step from here would be to navigate to the Triggers and Options tab, where you can configure the conditions under which the build will trigger and also other options relating to your new build pipeline.\nValidating your Azure templates before attempting any deployment is, in my view, a necessary step and one which may end up becoming a difficult task if carried out manually. With the options available to us within Azure DevOps, it\u0026rsquo;s easier than ever before to automate this process and ensure that you can scale your CI processes to suit teams of any size. If you require any assistance relating to this post or with Azure DevOps more generally, please feel free to leave a comment below or contact me directly.\n","date":"2019-05-05T00:00:00Z","image":"/images/Azure-Pipelines-e1557238792964.png","permalink":"/validating-azure-resource-templates-within-an-azure-build-pipelines/","title":"Validating Azure Resource Templates within an Azure Build Pipeline"},{"content":"You can have some\u0026hellip;interesting times when experimenting with Azure Templates. Aimed primarily at those managing complex Azure estates or organisations which have a desire to incorporate their Azure development cycles as part of their DevOps processes, they are no doubt a powerful feature for developers to leverage. Even with my very recent and limited exposure to them, I am already at a place where I can confidently list the range of benefits they can deliver. 
The only thing I would caveat concerning their usage is that it is imperative that you put aside time to fully understand the precise behaviour that any templates you draft have when targeting your resource groups. This step is primarily to ensure that you do not accidentally remove configuration properties on your resource, overlook an important property/configuration or, ultimately, end up with a large, unexpected credit card charge 🙂\nA good example of why I would recommend this careful approach so much can be found when working with Stream Analytics Jobs within Azure, a service which is great for processing high-volume data streams from a variety of sources, such as Application Insights Continuous Export. If you include one of these jobs in an Azure template without explicitly setting any query, Azure will automatically provision your resource with 3 Streaming Units, as opposed to 1. We can observe this behaviour when attempting to deploy the following template to the platform:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;Parameters\u0026#34;: {}, \u0026#34;variables\u0026#34;: {}, \u0026#34;resources\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.StreamAnalytics/streamingjobs\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-03-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyStreamAnalyticsJob\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;sku\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Standard\u0026#34; } } } ] } To give you full control over the exact number of streaming units deployed alongside the resource, we need to modify the template to include the Query needed when processing input data, as the properties exposed here allow you to define this exact value.
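If you want to reproduce this behaviour yourself, the template above can be validated and pushed out from PowerShell using the Az module - a minimal sketch, assuming you have already signed in via Connect-AzAccount and that the resource group and file names are swapped for your own:
# Illustrative only: resource group and file names are placeholders.
$rg = 'my-stream-analytics-rg'

# Check the template for any obvious schema or property errors first.
Test-AzResourceGroupDeployment -ResourceGroupName $rg -TemplateFile '.\streamanalyticsjob.json'

# Deploy it; with no transformation defined, the job appears in the portal
# provisioned with 3 Streaming Units rather than 1.
New-AzResourceGroupDeployment -ResourceGroupName $rg -TemplateFile '.\streamanalyticsjob.json' -Mode Incremental
With that behaviour confirmed, we can move on to the fix.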
Therefore, the above template required modifying as follows:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;Parameters\u0026#34;: {}, \u0026#34;variables\u0026#34;: {}, \u0026#34;resources\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.StreamAnalytics/streamingjobs\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-03-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;MyStreamAnalyticsJob\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;uksouth\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;sku\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Standard\u0026#34; }, \u0026#34;transformation\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;MyQuery\u0026#34;, \u0026#34;properties\u0026#34;: { \u0026#34;streamingUnits\u0026#34;: 1, \u0026#34;query\u0026#34;: \u0026#34;SELECT * INTO [YourOutputAlias] FROM [YourInputAlias]\u0026#34; } } } } ] } So, if you are looking to automate and manage deployments relating to your Stream Analytics resources, it is recommended that all aspects of your jobs are defined out within your template, specifically:\nInputs Query Outputs This recommendation becomes mandatory if you are carrying out frequent template updates to your resource group, as performing a redeployment to an existing resource using the first template illustrated in this post will lead to loss of data; the query defined on the resource will be deleted, and the number of streaming units will be bumped back up to 3. It\u0026rsquo;s therefore vital, both from billing and also from a resource integrity perspective, that any query development is regularly fed back into your template and factored in as part of any updates. This additional step is a hell of a lot easier to do than you may think, as the portal supports the ability to generate resource templates at the drop of a hat and then moved across quickly to your template file/code repository.\nAs highlighted previously, Azure Templates are a great feature to get familiar with if you find yourself being weighed down by a particularly cumbersome Azure estate, but they do have their quirks and behaviours that can take some time to understand fully. Thankfully, with proper testing, it is relatively easy to identify and put right issues similar to the one described in this post.\n","date":"2019-04-28T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/how-to-define-streaming-units-for-microsoft-streamanalytics-streamingjobs-resource/","title":"How to Define Streaming Units for Microsoft.StreamAnalytics/streamingjobs Resource"},{"content":"In most scenarios, a software release will involve several steps that require consistent completion each time you are pushing out an update to your application. That is why tools such as Azure DevOps can have a significant benefit for organisations if implemented correctly, as they can not only give you the confidence to release updates more frequently but also massively reduce the risk of a failed software deployment due to human error. An adequately defined release pipeline within Azure DevOps allows you to set up each of the required steps as part of any application deployment, all of which are executed based on a specified order. 
With a vast array of different tasks available out of the box, ranging from PowerShell script tasks through to Amazon Web Services deployments, developers can have the confidence that an Azure DevOps release pipeline can fit in with whatever workloads are involved.\nDespite the endless, monotonous repetition associated with software deployments, there may be occasions where you want to go a little bit freestyle and modify how tasks execute, based on dynamic values supplied at the time of release. Fortunately, there is a way you can do this within Azure DevOps, via the use of release variables and custom task conditions. The range of additional functionality this opens up is vast and, in today\u0026rsquo;s post, we\u0026rsquo;ll see how it is possible to get started using them with minimal effort.\nSetting up Release Variables A release variable is defined from the Variables tab when creating a new release pipeline:\nFrom here, you then specify the following settings:\nName: Ideally a descriptive name for the variable. Value: The default value for the variable. This property can be overridden at release time, as we\u0026rsquo;ll see shortly. Padlock: This tells Azure DevOps whether the Value provided is hidden from view once defined. This setting can be particularly useful if, for example, you are setting connection string or password values at release level. Scope: This defines which part of your release pipeline the variable is accessible from - either from any stage (Release) or within a single one only (Stage 1, for example). Settable at release time: Again, fairly self-explanatory 🙂 Lets you specify whether the release creator can override the variable value when creating a new release. Passing Release Variable Values to a Task Once defined, a variable is accessible within any scoped task within your pipeline. For example, if you set a variable called MySQLDBPassword, you can access its value by using the following syntax:\n$(MySQLDBPassword)\nSo, to pass this to an Azure SQL Database Deployment task as the database login password, we would provide the following value in this field:\nAlternatively, we can write out the value of the variable into a PowerShell task (a rough sketch of what such a task might contain appears a little further down):\nWhen this task is then executed during a release, we observe the following behaviour in its logs:\nAs a consequence, variable values can be passed to almost any task property. Also, for sensitive values, they represent the most prudent route to go down to ensure that passwords and connection strings are handled securely.\nDefining Release Variables On A New Release We saw earlier the specific setting Settable at release time for release variables. If enabled, when creating a new release, you will be allowed to override its original, supplied value. So, by modifying the MySQLDBPassword variable from earlier to enable this property, we now get the following options exposed as part of creating a new release:\nAs observed above, the default value for this variable - p@ssw0rd123 - is automatically pulled through, and can be reviewed or even submitted without any further changes. For secret variables, the behaviour is slightly different, as expected, although we can still override the value if we wanted to; the only thing is that you won\u0026rsquo;t be able to see what you type, similar to a password field:\nImplementing Conditional Logic Using Release Variables As alluded to earlier, there may be occasions where you want certain tasks to be carried out or even skipped entirely, based on what value a variable holds.
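Before we get to a worked example, here is the rough, illustrative sketch of the inline script that the PowerShell task mentioned earlier might contain - the variable names are simply the ones used in this post, and none of it is prescriptive:
# $(MySQLDBPassword) is the release variable defined earlier; Azure DevOps
# substitutes the macro into the inline script before it runs, and secret
# values are masked as *** within the release logs.
$password = "$(MySQLDBPassword)"
Write-Host "Retrieved the database password from the release variable."

# Variables can also be created or updated at runtime for later tasks to
# consume, using the logging command syntax.
Write-Host "##vso[task.setvariable variable=BackupCompleted]true"
The logging command on the final line is particularly handy when a value computed in one task needs to be consumed by a later one. With that noted, let's get back to conditional task execution.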
A good recent example that I was involved in illustrates where this may become desirable. We had several release pipelines that implemented a backup of an Azure SQL database before any updates were applied. This extra step was primarily to ensure that a pre-release version of the database was available in case a rollback was required. In situations where a re-release needs to be triggered, due to the pipeline failing because of a misconfiguration or other issue not related to the release artifacts themselves, having to go through the process of backing up the database again appeared to be unnecessary and a waste of time. We, therefore, set up a variable on the pipeline, configured as follows:\nThen, on the PowerShell task that performed the database backup, we navigate to the Control Options tab within the task and, firstly, select the Custom conditions option on the Run this task dropdown:\nSelecting this option makes a Custom condition field appear directly underneath, which allows you to input a wide array of different conditional logic using an easy-to-understand expression language. Through this, we can straightforwardly define a function that executes the task if the value of the BackupProdDB? variable equals true:\neq(variables[\u0026lsquo;BackupProdDB?\u0026rsquo;], true)\nNow, when overriding this parameter value at release stage to false, the task is skipped entirely:\nIf a Microsoft hosted VS2017 Agent is used for the task, some additional detail is made available by hovering over the little i icon next to the task name. This tooltip will chiefly indicate how the expression evaluated itself during runtime; quite useful if you are debugging:\nThe example shown here only scratches the surface of what the expression language can ultimately facilitate. To take things further, you could look at implementing conditional logic that:\nPerforms comparisons against numbers. Works with array values. Carries out joins or concatenations on multiple variable values. As this post has attempted to demonstrate, release variables open up an additional layer of functionality that can take your release pipelines to the next level. They allow those triggering releases to modify the steps involved dynamically, or even to skip them entirely, without requiring any pipeline alterations to take place. Hopefully, today\u0026rsquo;s post has given you a flavour of how to get started using them. Let me know in the comments below if you identify a use case for them yourself or if you have been able to come up with any ingenious expressions 🙂\n","date":"2019-04-21T00:00:00Z","image":"/images/Azure-Pipelines-e1557238792964.png","permalink":"/working-with-variables-in-an-azure-devops-release-pipeline/","title":"Working with Variables in an Azure DevOps Release Pipeline"},{"content":"When you\u0026rsquo;re first starting with Microsoft Azure for straightforward projects or proof of concept designs, the Portal is your go-to destination for reviewing, managing, updating and creating resources. Even for larger scale deployments, you will typically find yourself in there most of the time; what may have changed, in this scenario, is the mechanism through which you deploy new resources or any changes to existing ones. Developers have a range of options at their disposal to programmatically carry out these types of activities:\nVia PowerShell, through the Az or the older AzureRM module. Through CLI, utilising syntax that traditional Linux developers would be more familiar with.
By using predefined templates, built out using JSON, that can then be deployed using either the Portal, PowerShell or CLI. The last of these options can be of most benefit if you are already using Git source control for your projects (via GitHub, Azure DevOps, BitBucket etc.) and you have a desire to implement Continuous Integration (CI) and automated release management for your Azure templates. Both of these options enable you to validate your templates before deploying, to ensure no obvious errors occur, and to reduce the risk of human error as part of frequent software deployments. Making a move from managing your Azure resources from within the portal to Azure Templates is relatively straightforward, thanks to the options available to us to export our existing resources as templates. Notwithstanding this fact, there will still be times where you find yourself hitting a few stumbling blocks as you begin to fully understand the behaviour when deploying resources out in this manner.\nAn example better illustrates this issue. Let\u0026rsquo;s assume we have deployed out an App Service resource manually via the Azure Portal and, over time, we have assigned it the following Application settings:\nWe now decide it would be desirable to move towards utilising Azure Resource Manager templates and, as such, define the following JSON template for this resource:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;Parameters\u0026#34;: { \u0026#34;name\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;hostingPlanName\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;location\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;sku\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;serverFarmResourceGroup\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;subscriptionId\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; } }, \u0026#34;resources\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/sites\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-03-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;name\u0026#39;)]\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[parameters(\u0026#39;location\u0026#39;)]\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Web/serverfarms\u0026#39;, parameters(\u0026#39;hostingPlanName\u0026#39;))]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;name\u0026#39;)]\u0026#34;, \u0026#34;siteConfig\u0026#34;: { \u0026#34;appSettings\u0026#34;: [] }, \u0026#34;serverFarmId\u0026#34;: \u0026#34;[concat(\u0026#39;/subscriptions/\u0026#39;, parameters(\u0026#39;subscriptionId\u0026#39;),\u0026#39;/resourcegroups/\u0026#39;, parameters(\u0026#39;serverFarmResourceGroup\u0026#39;), \u0026#39;/providers/Microsoft.Web/serverfarms/\u0026#39;, parameters(\u0026#39;hostingPlanName\u0026#39;))]\u0026#34; } }, { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/serverfarms\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-09-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;hostingPlanName\u0026#39;)]\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[parameters(\u0026#39;location\u0026#39;)]\u0026#34;, 
\u0026#34;sku\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;sku\u0026#39;)]\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;hostingPlanName\u0026#39;)]\u0026#34;, \u0026#34;numberOfWorkers\u0026#34;: \u0026#34;1\u0026#34; } } ] } And, within Azure DevOps, we have the following Release Pipeline task created:\nNote in particular the selection of the Incremental option, recommended if you want to ensure that your deployment does not accidentally delete any resources not defined in the template.\nAfter using the template below as part of a release and upon navigating back to our Application settings for the App Service, we notice that all of them have vanished completely:\nWith the Incremental option specified above, you would be forgiven for thinking that the template deployment is \u0026ldquo;broken\u0026rdquo;, as it would appear to have done the complete opposite of what the setting implies it will do. The fault here lies with the JSON template itself, which has not been updated to include all of the Application settings needed for the App Service. During the deployment step, Azure will compare your template against any existing App Service resource and, if the Application settings are not there, they are permanently deleted. We can observe this behaviour in practice by adding our Application settings back on manually and by only specifying a single setting on our Microsoft.Web/sites resource within our JSON template:\n{ \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/sites\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-03-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;name\u0026#39;)]\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[parameters(\u0026#39;location\u0026#39;)]\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Web/serverfarms\u0026#39;, parameters(\u0026#39;hostingPlanName\u0026#39;))]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;name\u0026#39;)]\u0026#34;, \u0026#34;siteConfig\u0026#34;: { \u0026#34;appSettings\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;MyAppSetting2\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;value456\u0026#34; } ] }, \u0026#34;serverFarmId\u0026#34;: \u0026#34;[concat(\u0026#39;/subscriptions/\u0026#39;, parameters(\u0026#39;subscriptionId\u0026#39;),\u0026#39;/resourcegroups/\u0026#39;, parameters(\u0026#39;serverFarmResourceGroup\u0026#39;), \u0026#39;/providers/Microsoft.Web/serverfarms/\u0026#39;, parameters(\u0026#39;hostingPlanName\u0026#39;))]\u0026#34; } } Post-deployment, we can observe the following on the App Service:\nSo the answer, at this point, is pretty clear; update the entire JSON template to include all required Application Settings as part of your App Service:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#\u0026#34;, \u0026#34;contentVersion\u0026#34;: \u0026#34;1.0.0.0\u0026#34;, \u0026#34;Parameters\u0026#34;: { \u0026#34;name\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;hostingPlanName\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;location\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;sku\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;serverFarmResourceGroup\u0026#34;: { \u0026#34;type\u0026#34;: 
\u0026#34;String\u0026#34; }, \u0026#34;subscriptionId\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34; } }, \u0026#34;resources\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/sites\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-03-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;name\u0026#39;)]\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[parameters(\u0026#39;location\u0026#39;)]\u0026#34;, \u0026#34;dependsOn\u0026#34;: [ \u0026#34;[resourceId(\u0026#39;Microsoft.Web/serverfarms\u0026#39;, parameters(\u0026#39;hostingPlanName\u0026#39;))]\u0026#34; ], \u0026#34;properties\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;name\u0026#39;)]\u0026#34;, \u0026#34;siteConfig\u0026#34;: { \u0026#34;appSettings\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;MyAppSetting1\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;value123\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;MyAppSetting2\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;value456\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;MyAppSetting3\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;value789\u0026#34; } ] }, \u0026#34;serverFarmId\u0026#34;: \u0026#34;[concat(\u0026#39;/subscriptions/\u0026#39;, parameters(\u0026#39;subscriptionId\u0026#39;),\u0026#39;/resourcegroups/\u0026#39;, parameters(\u0026#39;serverFarmResourceGroup\u0026#39;), \u0026#39;/providers/Microsoft.Web/serverfarms/\u0026#39;, parameters(\u0026#39;hostingPlanName\u0026#39;))]\u0026#34; } }, { \u0026#34;type\u0026#34;: \u0026#34;Microsoft.Web/serverfarms\u0026#34;, \u0026#34;apiVersion\u0026#34;: \u0026#34;2016-09-01\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;hostingPlanName\u0026#39;)]\u0026#34;, \u0026#34;location\u0026#34;: \u0026#34;[parameters(\u0026#39;location\u0026#39;)]\u0026#34;, \u0026#34;sku\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;sku\u0026#39;)]\u0026#34; }, \u0026#34;properties\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;[parameters(\u0026#39;hostingPlanName\u0026#39;)]\u0026#34;, \u0026#34;numberOfWorkers\u0026#34;: \u0026#34;1\u0026#34; } } ] } As alluded to already, the Incremental deployment option can be a little bit misleading in this context, as you may naturally assume that Azure would take no action to remove anything if this option is specified. The example outlined in this post clearly illustrates that this does not apply to resource properties, which can be subject to unintended alterations if your Azure Template does not specify them explicitly. Take care when migrating across to Azure RM templates to ensure that every single resource setting that you need is copied through and, also, carry out test deployments into dedicated testing/UAT environments to verify the exact behaviour that your templates have on your existing Azure resources.\n","date":"2019-04-14T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/app-service-application-settings-and-azure-resource-template-deployments/","title":"App Service Application Settings and Azure Resource Template Deployments"},{"content":"I\u0026rsquo;ve been using Azure DevOps (previously known as Visual Studio Team Services/Team Foundation Server) for a long time now and have blogged about it pretty frequently to date:\nI\u0026rsquo;ve shown how you can use the product to backup your Azure SQL databases during a deployment. 
We\u0026rsquo;ve seen how to get around fiddly issues with PowerShell script tasks during a Build or Release pipeline. When used alongside Azure Data Factory, it makes for an effective bedfellow and allows you to, for example, manage your releases entirely through Azure DevOps. And, finally, I\u0026rsquo;ve shown how to deal with permissions issues when working with both the online/on-premise version of the application. So, in case it\u0026rsquo;s not pretty clear already, I\u0026rsquo;ve got a lot of love and affection for the product. More recently, I\u0026rsquo;ve just finished some work migrating across numerous, existing Projects to a clean tenant and, as part of this, formalising how we record our Work Items within the application. The primary reasons behind this are so we are both a) tracking everything across the various projects our team is involved in and b) can start keeping the whole damn thing up to date as efficiently as possible 🙂 . A challenge to be sure, but one that I think will drive benefits in the long term as we adjust to working in a more genuinely Agile fashion.\nTo enable you to categorise your work items across multiple projects more straightforwardly, you can take advantage of the Areas feature within the application. Typically, you would also put in place the various iterations for your sprints alongside this, but both features can be operated in isolation if required. As part of first setting up your project, you would preferably define all of these from the start before populating out your Backlog. Life is not always ideal in this respect, or it could be that having got familiar with the application, you want to go the next step to report more effectively across your various work items. In this case, you may have a few issues getting things displaying how you want them to be when using the Boards feature.\nFor example, let\u0026rsquo;s assume we have the following Areas defined within our Azure DevOps project:\nWith a couple of different work items scattered across these various areas:\nAll looking good so far. But when we navigate to the Team Board for the project, we see that it is empty:\nLikewise, the default teams backlog is also mysteriously lacking in content:\nAt this point, you may be asking yourself - where the hell are my User Stories? They are still there and fully accessible within the application when running a query, but not in the place where (perhaps) you would like for yourself and members of your team to access them. It turns out that, in addition to defining your Areas within the settings of this project, there is one more setting that needs toggling as well within the Teams configuration sub-area. To do this:\nNavigate to the Project settings area and click on the Team configuration option underneath Boards Select the Areas tab and a screen similar to the one below should appear: Select the ellipses icon over the default Area and select the Include sub areas option highlighted below, pressing OK on the Are you sure you want to include sub-areas dialog box: Once confirmed, we can then navigate back to our Board and Backlog and see that our Work Items are now displaying correctly:\nMuch better!\nThe great benefit of cloud solutions, like Azure DevOps, is how quickly and easily you can get fully functioning business systems up and running, with an almost limitless amount of options at your disposal. In this scenario, you can very much feel like a child in a sweet shop, as you run around and try out the various features at your disposal. 
The solution described in this post is perhaps one area where you can steam ahead over-excitedly, but not fully appreciate the impact that implementing Areas may have for other users within your Azure DevOps project. Fortunately, the solution is relatively easy to resolve and, as a result, you can use Areas in tandem with your existing team Boards and Backlog with very little work or upheaval involved.\n","date":"2019-04-07T00:00:00Z","image":"/images/AzureDevOps-FI-e1557239961185.jpg","permalink":"/team-board-backlog-not-showing-sub-area-work-items-azure-devops/","title":"Team Board/Backlog Not Showing Sub-Area Work Items (Azure DevOps)"},{"content":"Functional consultants or administrators who have been using Dynamics CRM / Dynamics 365 Customer Engagement (D365CE) for any considerable length of time will likely have built up a solid collection of FetchXML queries, that are usable for a variety of different scenarios. Such privileged individuals are in the fortunate position of being able to leverage them in the following ways:\nWhen building out Reports using SQL Server Data Tools and the Dynamics 365 Reporting Authoring Extensions. Within tools like the XrmToolBox, when testing or running any example queries. As the underlying queries for any bespoke views created using the SDK. In other words, you have a range of useful queries that can potentially meet any needs within the application from a reporting standpoint. This is all well and good if you find yourself working solely within CRM/D365CE all the time, but when you start to bring in separate tools, such as Power BI, there can be some difficulty in migrating these across straightforwardly. Typically, you may find yourself staring down the barrel of a complicated and costly redevelopment exercise, where you have to invest a lot of time within Power Query to replicate your existing FetchXML queries as efficiently as possible; this puts potentially a lot of hard work and investment made into FetchXML query development down the drain almost immediately.\nFortunately, there is a way in which we can leverage our FetchXML queries using Power BI. I did a post on this very subject a few years ago, where I talked through an example from start to finish. The main limitations with this were, however, 1) the inability to return more than 5000 records at a time, given that paging was not correctly incorporated and 2) the fact that you had to manually define code for every query that you wished to utilise, which would take a lot of time to do and increase the risk of human error occurring.\nAs usual in these situations, the wonderful CRM/D365CE community has delivered a solution to address the first issue raised above. The Power Query (M) Builder tool is a handy plugin within the XrmToolBox that allows you to generate M query code snippets that you can use within Power BI Desktop. Most importantly, the tool incorporates a solution from Keith Mescha and the former Sonoma Partners Power BI Accelerator to get around the paging issue and allow you to return unlimited data from the application. 
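If you want to see what is happening under the bonnet before diving into the M code later in this post, the same Web API call can be exercised from PowerShell. The below is a rough sketch only - the instance URL is a placeholder, and it assumes you have already obtained a valid OAuth access token for the application:
# Placeholders: swap in your own instance URL and access token.
$token = '<your OAuth access token>'
$url = 'https://myinstance.crm11.dynamics.com/api/data/v9.1/incidents'
$fetchXml = '<fetch distinct="true" page="1"><entity name="incident"><attribute name="title" /></entity></fetch>'

$response = Invoke-RestMethod -Uri ($url + '?fetchXml=' + [uri]::EscapeDataString($fetchXml)) -Headers @{
    Authorization = "Bearer $token"
    # Request the paging cookie annotation, which indicates whether a further
    # page of results needs to be retrieved.
    Prefer        = 'odata.include-annotations=Microsoft.Dynamics.CRM.fetchxmlpagingcookie'
}

$response.value.Count
$response.'@Microsoft.Dynamics.CRM.fetchxmlpagingcookie'
The M function later in this post wraps this same pattern up, with the paging handled recursively.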
You can find out more about the tool by checking out Ulrik \u0026ldquo;The CRM Chart Guy\u0026rdquo; Carlsson\u0026rsquo;s blog post dedicated to this very subject.\nThe tool is undoubtedly great, but if you have numerous FetchXML queries in a raw format that you wish to process within Power BI, it could take you some time to get these moved across into Power BI - particularly given that the tool does not currently support the ability to \u0026ldquo;bring your own\u0026rdquo; FetchXML queries. By using the example code provided by the tool, and carrying out some further work to address the second concern, it is possible to use the following M query function that will allow you to compartmentalise all of the above functionality in an easy to call Power Query function. Simply open a new blank query within Power Query and copy \u0026amp; paste the below into the window:\n/* Generate FetchXML Query Results M Function Required Parameters: crmURL = The URL of your CRM/D365CE instance e.g. https://mycrm.crm11.dynamics.com entityName = The OData entity name that you are querying. query = The FetchXML query to execute. This should NOT include the top level \u0026lt;fetch\u0026gt; node, but only all subsequent nodes with double quotes escaped e.g. \u0026lt;entity name=\u0026#34;\u0026#34;incident\u0026#34;\u0026#34;\u0026gt;\u0026lt;all-attributes /\u0026gt;\u0026lt;/entity\u0026gt; Credits: Big thanks to the Power Query Builder tool (https://crmchartguy.com/power-query-builder/) and Keith Mescha/Sonoma Partners Power BI Accelerator for figuring out the paging issue. Portions of the auto-generated code from the above tool is utilised within this function. */ let Func = (crmURL as text,entityName as text,query as text) =\u0026gt; let FullURL = Text.Combine({crmURL, \u0026#34;/api/data/v9.1/\u0026#34;, entityName},\u0026#34;\u0026#34;), QueryAll = (z as text, x as number) =\u0026gt; let Source = Json.Document(Web.Contents(FullURL, [ Headers= [ #\u0026#34;Prefer\u0026#34;=\u0026#34;odata.include-annotations=Microsoft.Dynamics.CRM.fetchxmlpagingcookie\u0026#34; ], Query= [ fetchXml=\u0026#34;\u0026lt;fetch distinct=\u0026#34;\u0026#34;True\u0026#34;\u0026#34; page=\u0026#34;\u0026#34;\u0026#34; \u0026amp; Text.From(x) \u0026amp; \u0026#34;\u0026#34;\u0026#34; paging-cookie=\u0026#34;\u0026#34;\u0026#34; \u0026amp; z \u0026amp; \u0026#34;\u0026#34;\u0026#34;\u0026gt;\u0026#34; \u0026amp; query \u0026amp; \u0026#34;\u0026lt;/fetch\u0026gt;\u0026#34; ] ] ) ), Paging = try Xml.Document(Source[#\u0026#34;@Microsoft.Dynamics.CRM.fetchxmlpagingcookie\u0026#34;]) otherwise null, Retrieve = if Paging \u0026lt;\u0026gt; null then List.Combine({Source[value],@QueryAll(Text.Replace(Text.Replace(Text.Replace(Uri.Parts(\u0026#34;http://a.b?d=\u0026#34; \u0026amp; Uri.Parts(\u0026#34;http://a.b?d=\u0026#34; \u0026amp; Paging{0}[Attributes]{1}[Value])[Query][d])[Query][d], \u0026#34;\u0026gt;\u0026#34;, \u0026#34;\u0026amp;gt;\u0026#34;), \u0026#34;\u0026lt;\u0026#34;, \u0026#34;\u0026amp;lt;\u0026#34;), \u0026#34;\u0026#34;\u0026#34;\u0026#34;, \u0026#34;\u0026amp;quot;\u0026#34;), x + 1)}) else Source[value] in Retrieve, GenerateEmptyTable = (query as text) =\u0026gt; let XML = Xml.Document(query), #\u0026#34;Expanded Value\u0026#34; = Table.ExpandTableColumn(XML, \u0026#34;Value\u0026#34;, {\u0026#34;Name\u0026#34;, \u0026#34;Namespace\u0026#34;, \u0026#34;Value\u0026#34;, \u0026#34;Attributes\u0026#34;}, {\u0026#34;Value.Name\u0026#34;, \u0026#34;Value.Namespace\u0026#34;, \u0026#34;Value.Value\u0026#34;, 
\u0026#34;Value.Attributes\u0026#34;}), #\u0026#34;Expanded Value.Value\u0026#34; = Table.ExpandTableColumn(#\u0026#34;Expanded Value\u0026#34;, \u0026#34;Value.Value\u0026#34;, {\u0026#34;Name\u0026#34;, \u0026#34;Namespace\u0026#34;, \u0026#34;Value\u0026#34;, \u0026#34;Attributes\u0026#34;}, {\u0026#34;Value.Value.Name\u0026#34;, \u0026#34;Value.Value.Namespace\u0026#34;, \u0026#34;Value.Value.Value\u0026#34;, \u0026#34;Value.Value.Attributes\u0026#34;}), #\u0026#34;Expanded Value.Attributes\u0026#34; = Table.ExpandTableColumn(#\u0026#34;Expanded Value.Value\u0026#34;, \u0026#34;Value.Attributes\u0026#34;, {\u0026#34;Name\u0026#34;, \u0026#34;Namespace\u0026#34;, \u0026#34;Value\u0026#34;}, {\u0026#34;Value.Attributes.Name\u0026#34;, \u0026#34;Value.Attributes.Namespace\u0026#34;, \u0026#34;Value.Attributes.Value\u0026#34;}), #\u0026#34;Filtered Rows\u0026#34; = Table.SelectRows(#\u0026#34;Expanded Value.Attributes\u0026#34;, each ([Value.Attributes.Name] = \u0026#34;name\u0026#34;)), #\u0026#34;Removed Columns\u0026#34; = Table.RemoveColumns(#\u0026#34;Filtered Rows\u0026#34;,{\u0026#34;Name\u0026#34;, \u0026#34;Namespace\u0026#34;, \u0026#34;Value.Name\u0026#34;, \u0026#34;Value.Namespace\u0026#34;, \u0026#34;Value.Value.Name\u0026#34;, \u0026#34;Value.Value.Namespace\u0026#34;, \u0026#34;Value.Value.Value\u0026#34;, \u0026#34;Value.Value.Attributes\u0026#34;, \u0026#34;Value.Attributes.Name\u0026#34;, \u0026#34;Value.Attributes.Namespace\u0026#34;, \u0026#34;Attributes\u0026#34;}), #\u0026#34;Transposed Table\u0026#34; = Table.Transpose(#\u0026#34;Removed Columns\u0026#34;), #\u0026#34;Promote Headers\u0026#34; = Table.PromoteHeaders(#\u0026#34;Transposed Table\u0026#34;, [PromoteAllScalars=true]), #\u0026#34;Added Custom\u0026#34; = Table.AddColumn(#\u0026#34;Promote Headers\u0026#34;, \u0026#34;@odata.etag\u0026#34;, each \u0026#34;\u0026#34;), #\u0026#34;Reordered Columns\u0026#34; = Table.ReorderColumns(#\u0026#34;Added Custom\u0026#34;, List.Sort(Table.ColumnNames(#\u0026#34;Added Custom\u0026#34;), Order.Ascending)) in #\u0026#34;Reordered Columns\u0026#34;, List = QueryAll(\u0026#34;\u0026#34;,1), Table = if List.IsEmpty(List) then GenerateEmptyTable(query) else #\u0026#34;D365CEData\u0026#34;, #\u0026#34;D365CEData\u0026#34; = Table.FromList(List, Splitter.SplitByNothing(), null, null, ExtraValues.Error), Expand = Table.ExpandRecordColumn( #\u0026#34;D365CEData\u0026#34;, \u0026#34;Column1\u0026#34;, Record.FieldNames(Table.Column(#\u0026#34;D365CEData\u0026#34;, \u0026#34;Column1\u0026#34;){0})), D365CE = Table.ReorderColumns(Expand, List.Sort(Table.ColumnNames(Expand), Order.Ascending)), Results = if List.IsEmpty(List) then Table else D365CE in Results in Func When saved, Power BI will then generate a function that should resemble the below screenshot:\nFrom here, you can then populate each of the required parameters as follows:\ncrmURL: This should be the full URL of your CRM/D365CE instance, e.g. https://myinstance.crm11.dynamics.com entityName: Here the OData compatible entity name that you want to query should be specified. Note that this is not the same as the Entity logical name and, in most cases, the OData entity names will be plural values. For example, the Case entity (with logical name incident) becomes incidents when querying via the Web API. Guido Preite has done a great article that discusses this issue in greater detail and also how this may impact you when querying any custom entity data. 
query: Within this field, you enter a portion of the FetchXML that you wish to use, specifically, the node and all subsequent nodes underneath this. And then you are good to go! As an example, the following FetchXML query:\n\u0026lt;entity name=\u0026#34;incident\u0026#34;\u0026gt; \u0026lt;attribute name=\u0026#34;title\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;ticketnumber\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;createdon\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;incidentid\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;caseorigincode\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;casetypecode\u0026#34; /\u0026gt; \u0026lt;order descending=\u0026#34;false\u0026#34; attribute=\u0026#34;title\u0026#34; /\u0026gt; \u0026lt;filter type=\u0026#34;and\u0026#34;\u0026gt; \u0026lt;condition attribute=\u0026#34;createdon\u0026#34; operator=\u0026#34;this-year\u0026#34; /\u0026gt; \u0026lt;condition attribute=\u0026#34;casetypecode\u0026#34; operator=\u0026#34;in\u0026#34;\u0026gt; \u0026lt;value\u0026gt;2\u0026lt;/value\u0026gt; \u0026lt;value\u0026gt;1\u0026lt;/value\u0026gt; \u0026lt;/condition\u0026gt; \u0026lt;/filter\u0026gt; \u0026lt;/entity\u0026gt; Would return results similar to the below via the above function:\nOne limitation with this function, at present, is that I haven\u0026rsquo;t yet found a way to ensure formatted values return correctly, even when there are no results available. I\u0026rsquo;ll report back if I figure out a way to do this 🙂 A huge thanks to Keith, Ulrik and Sonoma Partners for kindly supplying the paging code snippet into the community and in helping me to build out the above function.\n","date":"2019-03-31T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dynamics-365-customer-engagement-web-api-power-bi-fetchxml-revisited/","title":"Dynamics 365 Customer Engagement Web API, Power BI \u0026 FetchXML Revisited"},{"content":"The Audio Conferencing add-on for Skype for Business / Microsoft Teams can oft be a requirement if you find yourself needing to schedule frequent online meetings that involve external participants. While most tech-savvy organisations these days will be fully equipped for taking calls via their laptops or other devices, a lot of businesses still do rely on traditional telephone headsets as their primary mechanism for making and receiving calls. And while most companies in the UK should already have made the jump across to use SIP / VoIP solutions, due to the impending curtain call on ISDN lines, it is unlikely they will be using a solution akin to Skype for Business / Microsoft Teams. It, therefore, becomes necessary to have a traditional audio conferencing solution in place, that allows attendees to dial in using a fixed telephone number. Microsoft\u0026rsquo;s Audio Conferencing solution meets these needs surprisingly well and comes equipped with a whole host of options that can be customised. For example, we\u0026rsquo;ve seen previously on the blog how it is possible to disable the requirement for entering a PIN for meetings, and you have additional options at your disposal, all of which are manageable via the Microsoft Teams Admin center or using PowerShell. Having your audio conferencing solution controllable from directly within the Office 365 portal can assist organisations in reducing the complexity of their IT estates.\nAs an add-on license, however, there are some limitations around exactly how you can get it provisioned onto your tenant. 
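Once the prerequisite base license discussed below is in place, the actual assignment can be scripted via the MSOnline PowerShell module if you prefer - a hedged sketch only, as the tenant prefix and SKU part numbers are assumptions and should be confirmed with Get-MsolAccountSku first:
# Hedged sketch: 'mytenant' and the SKU part numbers are assumptions - run
# Get-MsolAccountSku to confirm the exact values available on your tenant.
Connect-MsolService

# MCOSTANDARD typically corresponds to Skype for Business Online (Plan 2),
# and MCOMEETADV to the Audio Conferencing add-on.
Set-MsolUserLicense -UserPrincipalName 'jsmith@domain.com' `
    -AddLicenses 'mytenant:MCOSTANDARD', 'mytenant:MCOMEETADV'
That said, the order in which the licenses are applied matters, as the limitations below explain.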
These limitations may not be too apparent if you are a Cloud Solutions Provider (CSP) with Microsoft, which gives you full control to provision Microsoft online services on behalf of your customer, at a far lower price compared with purchasing the services directly. Direct customers may also struggle to determine what they need to get up and running with Audio Conferencing, particularly if you are a small business. The question then becomes: what exactly do I need in place to get started with Audio Conferencing?\nAs an add-on subscription, Audio Conferencing requires that one of the following base products exists on the Office 365 tenant in question first:\nSkype for Business (Plan 2) Office 365 Enterprise E3 Office 365 Enterprise E5 If you already have one of the above SKUs on your Office 365 tenant, then congratulations! You are ready to get started with using Audio Conferencing within your organisation. However, if you don\u0026rsquo;t or are, for example, a CSP partner provisioning licenses for a small organisation that uses Office 365 Essentials, but wants to have access to full Audio Conferencing functionality, then you will need to address this deficiency. The two hurdles that you will need to overcome are:\nThe tenant in question will require one of the SKUs listed above to be provisioned first. Attempting to assign the Audio Conferencing license before this is in place will cause errors, and the Audio Conferencing licenses will not provision correctly. Each user that requires Audio Conferencing must also be assigned (at least) a Skype for Business (Plan 2) license. After assigning both licenses, the user will then receive an automated email from Microsoft containing their unique conference ID and PIN. To avoid the potentially prohibitive cost involved as part of an E3 or E5 plan, the most cost-effective option would be to provision a Skype for Business Online (Plan 2) license (approx. £4 per user, per month) or an Office 365 Business Premium (approx. £9 per user, per month) license. The second of these options includes the Plan 2 license, alongside a whole range of other functionality, so would be the most natural option to turn to if, for example, you are currently using Office 365 Business or Essentials.\nHopefully, this post has made it clear how exactly you can go about provisioning Audio Conferencing functionality for an organisation, either from an end-user perspective or as a CSP provider offering services to your customer.\n","date":"2019-03-24T00:00:00Z","image":"/images/MSTeams-FI.jpg","permalink":"/cannot-provision-skype-for-business-microsoft-teams-audio-conferencing-license-via-csp/","title":"Cannot Provision Skype for Business / Microsoft Teams Audio Conferencing License via CSP"},{"content":"I\u0026rsquo;ve been doing some work with Power BI Online datasets this week. It\u0026rsquo;s the first time I\u0026rsquo;ve taken a look at them in any great detail, as I have traditionally preferred to create and deploy any data sources needed for my reports via the Desktop client. Datasets address the needs of users who do not necessarily have the requisite Power Query/DAX skills and require a mechanism to quickly hook up to a pre-prepared dataset, thereby allowing them to start consuming any data rapidly. They are defined within an individual workspace and, once deployed, can then be shared out to other users to connect to.
Organised correctly, datasets can help to ensure that any extensive piece of data modelling work benefits as many users as possible, and can reduce the need for people to create data sources themselves. To find out more about how to create a dataset, you can take a look through my recent series all around Microsoft Exam 70-778, where I cover this topic in further detail.\nDatasets can remove some headaches for more substantial Power BI deployments, but you should be aware of what they can\u0026rsquo;t do. The most noteworthy limitations include:\nYou can only connect a Power BI report to a single Power BI dataset at any time. You are unable to bring in additional data sources via Power Query and, likewise, if you have already defined several data sources within your report, you cannot then bring a Power BI dataset into your model. It is not possible to make any modifications to a Power BI dataset from within Power BI Desktop, either via Power Query manipulation or by adding DAX custom columns; you can, however, define new Measures using DAX. Where this can start to be problematic is when you are attempting to surface data generated by the Application Insights Continuous Export facility via a Stream Analytics job. The great thing about Stream Analytics is that you can define multiple input/output locations for data that it processes, one of which includes a Power BI dataset. A convenient feature and one that can potentially sidestep the need to export any information out to a SQL-based destination for further processing. However, one area where the feature handicaps itself is in the fact that you cannot output multiple tables within the same Power BI dataset. If you attempt to do this, you get the following error message:\nWhen you take into account the limitations mentioned above concerning a strict 1:1 mapping between dataset/report, problems can arise when it comes to defining your queries on the Stream Analytics side of things. You either have to export out all of the base information in a raw format or instead develop highly customised queries on the Stream Analytics side. The latter solution may successfully meet a specific business requirement, but risks putting you in a position where your BI solution contains many different reports, some of which may only include a meagre number of visualisations.
I\u0026rsquo;m not sure about you, but my preference would be more towards the first option, as opposed to building out an overtly complex BI solution; but the limitations that Power BI datasets introduce for this particular scenario does present some challenge in sticking to this mantra.\nFor example, assume we have the following Stream Analytics query that outputs Request information from Application Insights into a Power BI dataset:\nSELECT requestflat.ArrayValue.id AS RequestID, requestflat.ArrayValue.name AS RequestType, requestflat.ArrayValue.responseCode AS ResponseCode, requestflat.ArrayValue.success AS RequestSuccessful, requestflat.ArrayValue.url AS URL, requestflat.ArrayValue.urlData.base AS BaseURL, requestflat.ArrayValue.urlData.host AS URLHost, requestflat.ArrayValue.durationMetric.value AS Duration, r.internal.data.id AS ID, r.internal.data.documentVersion AS DocumentVersion, r.context.data.eventTime AS EventTime, r.context.data.isSynthetic AS IsSynthetic, r.context.data.syntheticSource AS SyntheticSource, r.context.data.samplingRate AS SamplingRate, r.context.device.type AS DeviceType, r.context.device.roleName AS SlotName, r.context.device.roleinstance AS RoleInstance, r.context.session.isFirst AS IsFirstSession, r.context.operation.id AS OperationID, r.context.operation.parentID AS OperationParentID, r.context.operation.name AS OperationName, GetRecordPropertyValue(GetArrayElement(r.[context].[custom].[dimensions], 0), \u0026#39;Platform\u0026#39;) AS Platform, GetRecordPropertyValue(GetArrayElement(r.[context].[custom].[dimensions], 1), \u0026#39;Browser\u0026#39;) AS Browser, GetRecordPropertyValue(GetArrayElement(r.[context].[custom].[dimensions], 3), \u0026#39;UserAgent\u0026#39;) AS UserAgent, GetRecordPropertyValue(GetArrayElement(r.[context].[custom].[dimensions], 4), \u0026#39;Browser_Version\u0026#39;) AS BrowserVersion, GetRecordPropertyValue(GetArrayElement(r.[context].[custom].[dimensions], 5), \u0026#39;Referrer\u0026#39;) AS ReferralURL, r.EventProcessedUtcTime AS RequestEventProcessedUtcTime, r.context.[user].anonId AS AnonymousID, r.context.location.continent AS ClientLocation, r.context.location.country AS ClientCountry INTO [PowerBIRequestsOutput] FROM [Requests] AS r CROSS APPLY GetElements(r.[request]) AS requestflat Having the data in this format provides us with the most flexibility when tailoring things on the Power BI side of things. But what if you wanted to perform some ranking on the data, based on a distinct category value - for example, ranking each of the Browser values in popularity order, based on each visitor to the website? While it is certainly possible to do this via a Stream Analytics query, you would end up having to group the data, thereby reducing the wider usage that the dataset could accommodate. Fortunately, thanks to the fact that we can create DAX Measures, it is possible to overcome this to generate a ranking per category and then display the most popular browser as part of a Card visualisation. 
We first need to create the following Measure within Power BI Desktop:\nBrowser Ranking = RANKX( ALLSELECTED(PowerBIRequestsOutput[browser]), CALCULATE( DISTINCTCOUNT( PowerBIRequestsOutput[anonymousid] ) ),,,Dense ) We can confirm this works as expected by dropping a quick table visualisation in and verifying that each Browser is being ranked correctly, based on the count of the anonymousid field:\nSo far, so good 🙂 And this is potentially a visualisation that has a good usage case solely on its own; but, with an additional DAX Measure, we can return the highest ranked value above, IE, via the following Measure:\nMost Used Browser = TOPN(1, FILTER(VALUES(PowerBIRequestsOutput[browser]), [Browser Ranking] = 1)) Now, the thing to mention here is that the above does not explicitly handle ties in any sophisticated way; therefore, there is no guarantee over which value will display in the event of a draw for the top ranking. Nevertheless, it is encouraging to know that DAX provides us with the types of capabilities we need when we find ourselves unable to do the kind of manipulation we would want to, either via a SQL query or Power Query manipulation.\n","date":"2019-03-17T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/ranking-categories-within-power-bi-datasets-dax/","title":"Ranking Categories within Power BI Datasets (DAX)"},{"content":"Software deployments and updates are always a painful event for me. This feeling hasn\u0026rsquo;t subsided over time, even though pretty much every development project I am involved with these days utilises Azure DevOps Release Pipelines to automate this whole process for our team. The key thing to always stress around automation is that it does not mean that all of your software deployments suddenly become entirely successful, just because you have removed the human error aspect from the equation. In most cases, all you have done is reduce the number of times that you will have to stick your nose in to figure out what\u0026rsquo;s gone wrong 🙂\nDatabase upgrades, which are done typically via a Data-tier Application Package (DACPAC) deployment, can be the most nerve-racking of all. Careful consideration needs putting towards the types of settings you define as part of your publish profile XML, as otherwise, you may find either a) specific database changes are blocked entirely, due to dependency issues or because intended data loss will occur or b) the types of changes you make could result in unintended data loss. This last one is a particularly salient concern and one which can be understood most fully by implementing staging or pre-production environments for your business systems. Despite some of the thought that requires factoring in before you can look to take advantage of DACPAC deployments, they do represent the natural option of choice in managing your database upgrades more formally, mainly when there is need to manage deployments into Azure SQL databases. This state of play is mostly thanks to the dedicated task that handles this all for us within Azure DevOps:\nWhat this feature doesn\u0026rsquo;t make available to us are any appropriate steps we may need to take to generate a snapshot of the database before the deployment begins, a phase which represents both an equally desirable and necessary business requirement for software deployments. Now, I should point out that Azure SQL includes many built-in options around recovery and point in time restore options. 
These options are pretty extensive and enable you, depending on the database size tier you have opted for, to restore your database to any single point in time over a 30-day period. The question that therefore arises from this is fairly obvious - why go to the extra bother (and cost) to create a separate database backup? Consider the following:\nThe recovery time for a point-in-time restore can vary greatly, depending on several factors relating to your database size, current pricing tier and any transactions that may be running on the database itself. In situations where a short release window constrains you and your release must satisfy a strict success/fail condition, having to go through the restore process after a database upgrade could lead to your application being down longer than is mandated within your organisation. Having a previous version of the database already available means you can very quickly update your application connection strings to ensure the system returns to operational use if required. Having a replica copy of the database available directly after an upgrade can be incredibly useful if you need to reference data within the old version of the database post-upgrade. For example, a column may have been removed from one table and added to another, with the need to copy across all of this data accordingly. Although a point-in-time restore can be done to expose this information out, having a backup of the old version of the database available straight after the upgrade can help in expediting this work. Although Microsoft promise and provide an SLA with point-in-time restore, it\u0026rsquo;s always best to err on the side of caution. 🙂 By taking a snapshot of the database yourself, you have full control over its hosting location and the peace of mind in knowing that the database is instantly accessible in case of an issue further down the line. If any of the above conditions apply, then you can look to generate a copy of your database before any DACPAC deployment takes place via the use of an Azure PowerShell script task. The example script below shows how to achieve this requirement and is designed to mirror a specific business process; namely, that a readily accessible database backup is generated before any upgrade takes place, as a copy within the same Azure SQL Server instance with the current date value appended onto its name.
When a new deployment triggers in future, the script will delete the previously backed up database:\n#Define parameters for the Azure SQL Server name, resource group and target database $servername = \u0026#39;mysqlservername\u0026#39; $rg = \u0026#39;myresourcegroup\u0026#39; $db = \u0026#39;mydb\u0026#39; #Get any previous backed up databases and remove these from the SQL Server instance $sqldbs = Get-AzureRmSqlDatabase -ResourceGroupName $rg -ServerName $servername | select DatabaseName | Where-Object {$_.DatabaseName -like $db + \u0026#39;_Backup*\u0026#39;} if (($sqldbs | Measure-Object).Count) { Remove-AzureRmSqlDatabase -ResourceGroup $rg -ServerName $servername -DatabaseName $sqldbs[0].DatabaseName } #Get the current date and convert it into a string, with format DD_MM_YYYY $date = Get-Date $date = $date.ToShortDateString() $date = $date -replace \u0026#34;/\u0026#34;, \u0026#34;_\u0026#34; #Create the name of the new database $copydbname = $db + \u0026#39;_Backup_\u0026#39; + $date #Actually create the copy of the database New-AzureRmSqlDatabaseCopy -CopyDatabaseName $copydbname -DatabaseName $db -ResourceGroupName $rg -ServerName $servername -CopyServerName $servername Simply add this on as a pipeline task before any database deployment task, connect up to your Azure subscription and away you go!\nBackups are an unchanging aspect of any piece of significant IT delivery work and one which cloud service providers, such as Microsoft, have proactively tried to implement as part of their Platform-as-a-Service (PaaS) product lines. Azure SQL is not any different in this regard and, you could argue that the point-in-time restore options listed above provide sufficient assurance in the event of a software deployment failure or a disaster-recovery scenario, therefore meaning that no extra steps are necessary to protect yourself. Consider your particular needs carefully when looking to implement a solution like the one described in this post as, although it does afford you the ability to recover quickly from any failed software deployment, it does introduce additional complexity into your deployment procedures, overall technical architecture and - perhaps most importantly - cost.\n","date":"2019-03-10T00:00:00Z","image":"/images/Azure-Pipelines-e1557238792964.png","permalink":"/backup-azure-sql-database-within-azure-devops-release-pipeline/","title":"Backup Azure SQL Database within Azure DevOps Release Pipeline"},{"content":"After a few months of working with Microsoft Azure, you start to become familiar with the concepts behind the Resource Manager model and also how it is possible to move resources around, depending on business requirements. Whereas in the past, you would typically need to open a support request to Microsoft to complete this action, the vast majority of these operations can now be given to Azure administrators to finish instead. As such, we have the capability to:\nMove a resource into a different resource group Move a resource/resource group into a separate subscription, thereby altering its billing arrangements. Migrate an entire subscription to a new Azure Active Directory tenant, which alters both its ownership and billing arrangements in one fell swoop. There are a few things regarding a resource/resource group that remain unchangeable post-creation. For example, its name and location (UK South, North Europe etc.) 
are static properties that cannot be altered for the entirety of a resource\u0026rsquo;s lifespan, and the only way to modify this is to delete and re-create it from scratch - actions that have severe implications for a production workload. There are also a few scenarios where it will not be possible to move a resource to another subscription. The list of resources this affects is dwindling as the months go by, but there are still many popular services that are affected by this - such as Azure Data Factory, Logic Apps and Service Fabric. Partners who have a desire to move their customers away from Microsoft direct to Cloud Solutions Provider (CSP) billing need to take particular note of these limitations when scoping any migration exercise, as it could be that an entire project becomes derailed if an unsupported resource is in the mix.\nSometimes, even if you think you will be OK when migrating a resource, you will occasionally hit an issue during the validation stage of a move. I came across a good example of this when attempting to move an App Service Plan and its corresponding App Service to a new subscription location, with the following JSON error message being generated (modified so that it is readable):\n{ \u0026#34;code\u0026#34;: \u0026#34;ResourceMoveProviderValidationFailed\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Resource move validation failed. Please see details. Diagnostic information: timestamp \u0026#39;20190221T124038Z\u0026#39;, subscription id \u0026#39;4e846969-3196-476f-a088-6e393bb5ce98\u0026#39;, tracking id \u0026#39;6cf8ddf7-8d15-48bc-8b34-f7729bd44f0a\u0026#39;, request correlation id \u0026#39;68c44a05-eeba-4d96-887c-56a228ab69a3\u0026#39;.\u0026#34;, \u0026#34;details\u0026#34;: [ { \u0026#34;target\u0026#34;: \u0026#34;Microsoft.Web/serverFarms\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;{\\\u0026#34;Code\\\u0026#34;:\\\u0026#34;BadRequest\\\u0026#34;,\\\u0026#34;Message\\\u0026#34;:\\\u0026#34;Cannot move resources because some site(s) are hosted by other resource group(s) but located in resource group \u0026#39;myoriginalrg\u0026#39;. The list of sites and corresponding hosting resource groups: \u0026#39;myappserviceplan:myoriginalrg,mywebapp1:myoriginalrg\u0026#39;. This may be a result of prior move operations. Move the site(s) back to respective hosting resource groups and try again.\u0026#34;\\ ... } } ] } In this particular example, the App Service Plan and App Service had been the subject of a previous move operation from their original resource group (in this example, myoriginalrg) into the current resource group. Both the original and current resource groups were in the same subscription when this move took place, so the transfer completed without issue. Now, because I was looking to move the resources to a new subscription, the error message above appeared. What\u0026rsquo;s worse, the myoriginalrg resource group had been deleted a long time ago, meaning it wasn\u0026rsquo;t immediately clear whether the option suggested in the error message was even possible. Fortunately, this was not the case, and the following workaround steps can be used to get your App Service Plan/App Services moved to your desired location.\nCreate a new resource group in the same subscription where the App Service Plan/App Service exists, with the same name listed in the error message (in this case, myoriginalrg). Move the resources into the myoriginalrg resource group. Verify that this completes successfully.
Re-attempt the resource move operation to your preferred new subscription location. The procedure should complete successfully. My main worry when I first saw this error message was that the resources in question were tied permanently to a non-existent resource group and that the listed workaround steps were not going to work. Fortunately, my glass-half-empty outlook proved to be categorically wrong, and the workaround solved the issue entirely. I can\u0026rsquo;t really understand or explain why an App Service Plan / App Service creates such a binding to a resource group (and, to a lesser extent, a subscription), which therefore causes an issue like this to appear; fortunately, as we have seen, there is a way of getting things working as intended without having to involve Microsoft support 🙂\n","date":"2019-03-03T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/cannot-move-resources-because-some-sites-are-hosted-by-other-resource-groups-error-message-azure-web-app/","title":"\"Cannot move resources because some site(s) are hosted by other resource group(s)...\" Error Message (Azure Web App)"},{"content":"As epitomised by the recent rebranding exercise Microsoft conducted with the product, I very much see Visual Studio Team Services/Team Foundation Server (now Azure DevOps) becoming an increasingly dominant tool for development teams the world over. I came into the product after discovering its benefits from Ben Walker at a CRMUG meeting a few years back and, since then, the product has proved to be a real boon in helping me to:\nManage code across various .NET and Dynamics CRM/Dynamics 365 Customer Engagement projects. Formalise the code review process, via the use of pull request approvals. Log and track backlogs across various project work. Automate the project build process and deployment into development environments, allowing us to identify broken builds quickly. Fully automate release cycles, reducing the risk of human error and potentially laborious late night working. I really would urge any company carrying out some form of bespoke development, mainly with the Microsoft technology stack, to give Azure DevOps a look. Contrary to some of the previous biases that would creep in as part of Microsoft\u0026rsquo;s products, Azure DevOps is very much built with openness at its core, allowing you to straightforwardly leverage sections or the whole breadth of its functionality to suit your particular purpose.\nWhen you start working with Azure DevOps more in-depth, it becomes obvious that having a general awareness of PowerShell will hold you in good stead. Although targeted towards more Microsoft-focused deployments, PowerShell becomes your go-to tool when working with Azure in particular. Recently, I had a PowerShell Script task that previously executed with no issue whatsoever on a Windows 10 self-hosted agent.
The first thing that the script did was to define the Execution Policy, using the following command:\nSet-ExecutionPolicy Unrestricted (For the uninitiated, the above is a typical requirement for all PowerShell scripts to ensure that you do not hit any permission issues during script execution.)\nAs stated already, this was all working fine and dandy, until one day, when my deployments suddenly started failing with the following error message:\nExecuting the script manually on the machine in question, in an elevated PowerShell window, confirmed that it was not an issue on the Azure DevOps side:\nWell, this is one of those occasions where the actual error message tells you everything you need to know to get things working again. Whether something had changed on the machine or not as part of an update, it was now necessary to modify the PowerShell command to include the -Scope option outlined above. Therefore, the script should resemble the following:\nSet-ExecutionPolicy Unrestricted -Scope CurrentUser We can then confirm that the script no longer errors when being executed on the machine in question:\nAnd, most importantly, the release pipeline in question now completes without any error 🙂\nPerhaps it\u0026rsquo;s just me, but often in these types of situations, you can overlook the completely obvious and find yourself going down the rabbit hole, chasing a non-existent or grandiose solution to a particular IT problem. As this example clearly demonstrates, sometimes just properly reading the error message you are presented with can reduce a lot of wasted effort and allow you to resolve a problem faster than expected.\n","date":"2019-02-24T00:00:00Z","image":"/images/Azure-Pipelines-e1557238792964.png","permalink":"/access-to-the-registry-key-error-in-powershell-script-task-azure-devops/","title":"\"Access to the registry key...\" Error in PowerShell Script Task (Azure DevOps)"},{"content":"For the past 13 weeks on the blog, I have delivered a series of posts concerning Microsoft Exam 70-778, specifically focused towards providing a set of detailed revision notes that cover the broad array of Power BI features assessed as part of the exam. To round things off, today\u0026rsquo;s blog will bridge together everything I have discussed thus far in the series, with the hope being that this post can be a single reference point for those who have not been following the series to date.\nMicrosoft Exam 70-778 Overview The exam, with its full title Analyzing and Visualizing Data with Microsoft Power BI, is targeted towards Business Intelligence (BI) and data professionals who are looking to validate their skills in working with Power BI. The exam is a necessary component, alongside Exam 70-779: Analyzing and Visualizing Data with Microsoft Excel, in attaining the Microsoft Certified Solutions Associate (MCSA) certification in BI Reporting. Successful candidates can then (optionally) pass an additional \u0026ldquo;elective\u0026rdquo; exam to gain the Microsoft Certified Solutions Expert (MCSE) certification in Data Management and Analytics.\nSkills Measured in the Exam The skills measured are outlined below, alongside links to the relevant posts from the series and the list of essential points to remember:\nConsuming and Transforming Data By Using Power BI Desktop Connect to data sources.
Skills Measured May include: Connect to databases, files, folders; import from Excel; connect to SQL Azure, Big Data, SQL Server Analysis Services (SSAS)\nRevision Notes Exam 70-778 Revision Notes: Importing from Data Sources\nKey Takeaways Power BI supports a broad range of database systems, flat file, folder, application and custom data sources. While it is impossible to memorise each data source, you should at least broadly familiarise yourself with the different types at your disposal. A crucial decision for many data sources relates to the choice of either Importing a data source in its entirety or taking advantage of DirectQuery functionality instead (if available). Both routes have their own defined set of benefits and disadvantages. DirectQuery is worth consideration if there is a need to keep data regularly refreshed and you have no requirement to work with multiple data sources as part of your solution. Live Connection is a specific data connectivity option available for SQL Server Analysis Services. It behaves similarly to DirectQuery. It is possible to import an existing Excel BI solution into Power BI with minimal effort, alongside the ability to import standard worksheet data in the same manner as other flat file types. Perform transformations Skills Measured May include: Design and implement basic and advanced transformations; apply business rules; change data format to support visualization\nRevision Notes Exam 70-778 Revision Notes: Performing Data Transformations\nKey Takeaways The Power Query M formula language is used to perform transformations to data once loaded into Power BI. Although it is possible to do this via code, Power BI allows us to define all of our required data changes from within the interface, without the need to write a single line of code. Each data source connected to represents itself as a Query within Power BI. There are many options at your disposal when working with Queries, such as renaming, merging, duplication and the ability to disable or reference as part of other Queries. There is a wide range of column transformations that can be applied, which are too numerous to mention. The Transform tab provides the best means of seeing what is available, with options ranging from formatting through to grouping and pivoting/unpivoting. New columns can be added via the Add Column tab. You can choose to base new columns on calculations, conditional logic, other column values or as a defined list of ascending numbers, which may be useful for indexing purposes. It is possible to merge or append queries together to suit your specific requirements. Merging involves the horizontal combination of Queries, whereas appending represents a vertical combination. Parameters can be used to help optimise any complex filtering requirements. Where possible, Power Query will attempt to use the most optimal query for your data source, based on the transformation steps you define. This action is known as Query Folding and, in most cases, SQL-derived data sources will support this option by default. Cleanse data Skills Measured May include: Manage incomplete data; meet data quality requirements\nRevision Notes Exam 70-778 Revision Notes: Cleansing Data\nKey Takeaways Data can be filtered directly within Power Query, using Excel-like functionality to assist you in only returning the most relevant data in your queries.
The data type of each field plays a particularly important part in this, as only specific filter options will be at your disposal if, for example, you are working with numeric data. From a data quality perspective, you typically will need to handle column values that contain one of two possible value types: Errors: This will usually occur as a result of a calculated column field not working correctly. The best solution will always be to address any issues with your calculated column, such as by using a conditional statement to return a default value. Blanks/NULLs: A common symptom when working with SQL derived data sources, your real problems with blank values start to appear when you attempt to implement DAX custom columns/Measures outside of the Power Query Editor. It is, therefore, recommended that these are dealt with via a Replace action, depending on your field\u0026rsquo;s data type. For example, a number field with blank/NULL values should be replaced with 0. The Remove Rows option(s) can act as a quick way of getting rid of any Error or Blank/NULL rows and can also be utilised further to remove duplicates or a range of rows. In most cases, you will have similar options available to you with Keep Rows instead. There are a variety of formatting options available to us when working with text/string data types. These range from fixing capitalisation issues in data, through to removing whitespace/non-printable character sets and even the ability to prepend/append a new value. Modeling and Visualizing Data Create and optimize data models. Skills Measured May include: Manage relationships; optimize models for reporting; manually type in data; use Power Query\nRevision Notes Exam 70-778 Revision Notes: Create and Optimise Data Models\nKey Takeaways Relationships form the cornerstone of ensuring the long-term viability and scalability of a large data model. Assuming you are working with well-built-out, existing data sources, Power BI will automatically detect and create Relationships for you. In situations where more granular control is required, these Relationships can be specified manually if needed. It is worth keeping in mind the following important features of Relationships: They support one-to-one (1:1), one-to-many (1:N) and many-to-one (N:1) cardinality, with many-to-many (N:N) currently in preview. Filter directions can be specified either one way or bi-directionally. Only one relationship between two tables can be active at any given time. It is possible to sort columns using more highly tailored custom logic via the Sort By Column feature. The most common requirement for this generally involves the sorting of Month Names in date order but can be extended to cover other scenarios if required. To implement, you should ensure that your data has a numbered column to indicate the preferred sort order. Moving outside of the Power Query Editor presents us with more flexibility when it comes to formatting data to suit particular styling or locale requirements. While the majority of this functionality provides date/time and currency formatting options, for the most part, it is also possible to categorise data based on Location, the type of URL it is or on whether or not it represents a Barcode value; these options can assist Power BI when rendering certain types of visualizations. There may be ad-hoc requirements to add manually defined data into Power BI - for example, a list of values that need linking to a Slicer control.
The Enter Data button is the \u0026ldquo;no-code\u0026rdquo; route to achieving this and supports the ability to copy \u0026amp; paste data from external sources. For more advanced scenarios, you also have at your disposal a range of M code functionality to create Lists, Records and Tables, which can be extended further as required. Create calculated columns, calculated tables, and measures Skills Measured May include: Create DAX formulas for calculated columns, calculated tables, and measures; Use What If parameters\nRevision Notes Exam 70-778 Revision Notes: Using DAX for Calculated Columns\nKey Takeaways DAX is the primary formula language when working with datasets outside of Power Query. It includes, to date, more than 200 different types of functions that can assist in all sorts of data modelling. An important concept to grasp within DAX is context and, specifically, row context (formulas that calculate a result for each row in a dataset) and filter context (formulas that automatically apply any filtering carried out at report level). The sheer number of DAX functions available makes it impossible to master and remember all of them, particularly when it comes to the exam. Your learning should, therefore, focus on the general syntax of DAX and the general types of functions available (aggregation, date/time etc.) There are three principal means of utilising DAX with Power BI: As Measures: These typically present a scalar value of some description, often an aggregation or a result of a complex formula. Using them in association with a Card visualization type is recommended, but this is not a strict requirement. As Calculated Columns: Similar to the options available within Power Query, Calculated Columns provide a dynamic and versatile means of adding new columns onto your datasets. Compared with the options available within Power Query and the complexity of the M language, DAX Calculated Columns might represent a more straightforward means of adding custom columns onto your datasets. As Calculated Tables: A powerful feature, mainly when used in conjunction with Calculated Columns, you have the ability here to create entirely new datasets within the model. These will typically derive from any existing datasets you have brought in from Power Query, but you also have functionality here to create Date tables, sequence numbers and manually defined datasets as well. What-if Parameters provide a means of testing DAX formulas, as well as allowing report users to perform predictive adjustments that can affect multiple visualizations on a report. Measure performance by using KPIs, gauges and cards. Skills Measured May include: calculate the actual; calculate the target; calculate actual to target; configure values for gauges; use the format settings to manually set values\nRevision Notes Exam 70-778 Revision Notes: Utilising KPIs with Gauge Visualisations\nKey Takeaways There are two principal visualization types available within Power BI to help track actual-to-target progress - KPIs and Gauges. KPIs provide a more visually unique means of making a binary success/fail determination when tracking towards a target. It is also possible to use KPIs to track variance over time via the Trend axis. The Indicator will typically be the result of some form of aggregation or Measure. Gauges provide a less visually distinctive, but non-binary, mechanism of viewing progress towards a target.
Gauges support more potential field well values when compared with KPIs, nearly all of which are optional in some way. You can also manually define some of these values, for situations where your data model does not contain the required information. All visualizations within Power BI are modifiable from a display or formatting perspective. The same basic options will generally be supported - such as changing a font type or background colour - with more specific configuration properties available per unique visualization type. For example, a KPI visualization can be customised to hide the background Trend Axis entirely. All of these options are designed to give developers greater control over the look and feel of their reports and to mirror them as closely as possible to any potential branding requirement. When building out a solution designed to monitor progress within Power BI, the steps involved will typically be more in-depth than merely creating a new visualization. In most cases, there will be a requirement to bring together a lot of the other skills that have been discussed previously within this series - such as creating DAX formulas, modifying data within Power Query or bringing together different data sources into a single model. It is essential, therefore, not to underestimate the amount of time and effort involved in creating a practical solution that takes advantage of KPIs or Gauges. Create hierarchies Skills Measured May include: Create date hierarchies; create hierarchies based on business needs; add columns to tables to support desired hierarchy\nRevision Notes Exam 70-778 Revision Notes: Creating Hierarchies\nKey Takeaways Hierarchies within Power BI provide a means of logically categorising data into an order of preference or precedence, providing greater flexibility to Power BI report users when they interact with visualizations. Date Hierarchies are created and managed automatically by Power BI for each Date or Date/Time field defined within your model. These automatically create fields that contain the Year, Quarter, Month \u0026amp; Day values from the respective date fields. These fields can then be utilised as part of a Table visualization or within a DAX formula. Date Hierarchies can also be completely disabled if required. Custom (or User-Defined) Hierarchies need to be created manually and provide additional customisation options when compared to Date Hierarchies, such as the number of fields they contain, the order and its name. A Custom Hierarchy will typically make use of one of several Parent/Child DAX functions, such as PATH or PATHITEM. Including a hierarchy as part of a chart visualization, such as a Pie chart or Donut chart, opens up other drill-down capabilities around your data. Indicated by the additional arrow icons included at the top of the visualization, they provide the means for users to interrogate data points that interest them the most straightforwardly. Create and format interactive visualizations. 
Skills Measured May include: Select a visualization type; configure page layout and formatting; configure interactions between visuals; configure duplicate pages; handle categories that have no data; configure default summarization and data category of columns; position, align, and sort visuals; enable and integrate R visuals; format measures; Use bookmarks and themes for reports\nRevision Notes Exam 70-778 Revision Notes: Create and Format Interactive Visualizations\nKey Takeaways Power BI delivers, out of the box, a range of different visualizations that cater towards most (if not all) reporting requirements. Should you find yourself in need of additional visualizations, then Microsoft AppSource is your go-to destination for finding visualizations developed by others. If you have experience working with either Node.js or R, then these can be used to build bespoke visualizations also. When first developing a report, you should be able to match each requirement to a specific visualization type, to ensure that you are delivering a solution that is both meaningful and useful. From an exam perspective, this becomes a more critical consideration, and you should be prepared to suggest the most optimal visualization to use when given a specific scenario. After adding visualizations to your report, you have additional options available to customise them further. For example, you can specify a preferred sorting order for your data, override any summarizations used and move/align your visual on the report page. By default, visualizations in Power BI are designed to change automatically, based on how users interact with the report. All of these options are controllable via the Edit interactions button, allowing you to specify your preferred cross-filtering and cross-highlighting conditions. There is a range of report page customisation options available to developers. It is possible to resize a page to any possible height/width, allowing you to optimise your report for specific devices. Also, you can modify the colour of a page (or its wallpaper) or add an image instead. Pages can also be renamed, reordered or duplicated. Measures can be formatted in the same way as calculated columns, meaning you can specify a data type or, for numerics, modify the number of decimal places. Bookmarks allow developers to set up \u0026ldquo;checkpoints\u0026rdquo; within a report, based on how a report page has been filtered. These can then be used to automatically navigate the user through a report, applying these filtering steps automatically. This feature can help transform your report into an interactive story. Visualizations will automatically inherit their various colour properties from the currently selected report theme. Although these can be modified granularly, the fastest and most consistent way of making these changes en masse is to change the Theme. Power BI includes some Themes out of the box, but you also have the option of building your own using a custom JSON file; this can then be imported into different reports, providing a portable means of enforcing a particular branding requirement.
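To give a flavour of what such a theme file might contain (this snippet is illustrative only and not taken from the original post - the theme name, colour values and file path are all placeholders), the following PowerShell writes out a minimal theme using the basic documented properties, ready to be imported into a report:

# A minimal report theme; every value below is a placeholder, so adjust to suit your own branding
$theme = @"
{
  "name": "Example Corporate Theme",
  "dataColors": [ "#31B6FD", "#4584D3", "#5BD078", "#A5D028", "#F5C040", "#05E0DB" ],
  "background": "#FFFFFF",
  "foreground": "#252423",
  "tableAccent": "#31B6FD"
}
"@

# Write the theme to disk; the resulting .json file can then be imported via the Themes options in Power BI Desktop
Set-Content -Path '.\ExampleTheme.json' -Value $theme

Keeping a file like this in source control makes it straightforward to apply the same branding consistently across multiple reports.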
Manage custom reporting solutions Skills Measured May include: Configure and access Microsoft Power BI Embedded; enable developers to create and edit reports through custom applications; enable developers to embed reports in applications; use the Power BI API to push data into a Power BI dataset; enable developers to create custom visuals Revision Notes Exam 70-778 Revision Notes: Managing Custom Reporting Solutions\nKey Takeaways Power BI Embedded is an Azure hosted offering that allows you to add Power BI Report content into bespoke applications. This deployment option can be incredibly useful if you wish to make your Power BI solution available to users outside of your organisation or if you have an existing, bespoke application system that can benefit from utilising Power BI content. An Azure subscription is required to begin working with Power BI Embedded and you are billed based on node size, not individual user licenses. All Power BI content requires publishing to the online service before its contents become available for Power BI Embedded to access. Report developers will, therefore, need to be granted a Power BI Professional license to carry out these activities. The Power BI API grants developers access to perform automation or administrative actions programmatically against the Power BI Online service. Utilising a REST API, developers can determine the optimal programming language of choice to interact with the API, allowing them to streamline the deployment of Reports or Dashboards to the Power BI service or leverage additional functionality when utilising Power BI Embedded. The API can also cater to specific data load requirements, although more complex needs in this area would require addressing via alternate means (SSIS, Azure Data Factory etc.) Developers can add their own bespoke visualizations to a Power BI Report by either developing them using Node.js or using the R language. The first of these options facilitates a more streamlined deployment mechanism and allows developers to add their visualizations to AppSource, whereas the second option may be more useful for complex visualization types with an analytical or statistical function. Configure Dashboards, Reports and Apps in the Power BI Service Access on-premises data Skills Measured May include: Connect to a data source by using a data gateway; publish reports to the Power BI service from Power BI Desktop; edit Power BI Service reports by using Power BI desktop\nRevision Notes Exam 70-778 Revision Notes: Report Publishing, On-Premise Gateway \u0026amp; Creating Dashboards\nKey Takeaways The Power BI On-Premise Gateway provides a streamlined route to working with non-cloud data sources within Power BI, Microsoft Flow and PowerApps. As a lightweight and easy-to-configure client application, it supports a wide variety of data sources, making them accessible as if they were in the cloud. Once set up, corresponding Data Sources are then made available for configuration and for leveraging as part of any Power BI Dataset. Reports can be published into Power BI Online, meaning that they become accessible online and to a broader group of users, without requiring access to Power BI Desktop. Reports need deploying into a Workspace, which can be created manually or derived from an Office 365 Group. Each Report contains a corresponding Dataset, where all queries defined within Power BI Desktop exist. Reports that already exist on Power BI Online can be updated by just publishing a new version of the Report from Power BI Desktop.
It is also possible to modify Reports from directly within the browser and by downloading a copy of the .pbix Report file as well, which can then be altered and re-published. Configure a dashboard Skills Measured May include: Add text and images; filter dashboards; dashboard settings; customize the URL and title; enable natural language queries\nRevision Notes Exam 70-778 Revision Notes: Report Publishing, On-Premise Gateway \u0026amp; Creating Dashboards\nKey Takeaways Dashboards provide a means of grouping together various content as tiles, designed for at-a-glance analysis and optimal end-user experience. The list of content that can be pinned to a Dashboard includes: Visualizations Web content Images Text boxes Videos Custom streaming data Pinned content can be re-arranged on a Dashboard via drag-and-drop functionality. It is also possible to resize tiles to any given height/width. Within the settings of a Dashboard, it is possible to enable/disable features such as natural language queries (Q\u0026amp;As) and Notes. Some features of a Dashboard are only available if you have a Power BI Professional subscription, such as sharing and email subscriptions. Publish and embed reports Skills Measured May include: Publish to web; publish to Microsoft SharePoint; publish reports to a Power BI Report Server\nRevision Notes Exam 70-778 Revision Notes: Publish and Embed Reports\nKey Takeaways The Publish to web option allows non-licensed, external users to view a Power BI Report in its entirety. A URL and IFrame embed code can be generated for this at any time within the portal and then dropped into virtually any website. Although you will lose some functionality when deploying a Report out in this manner, you can expect that users will be able to perform most types of interactions with visualizations, Report pages and other components, as if they were accessing the Report through Power BI Online. In some cases, you may be unable to use the Publish to web option if your Report uses certain kinds of features, such as R Visuals or row-level security. You must also take into account any privacy or data protection concerns, as Reports deployed out in this manner will be publicly accessible; where this is an issue, the Embed option is available as a secure alternative. There are three steps involved if you wish to add a Report to SharePoint. First, you must generate the unique SharePoint embed URL within Power BI. Second, you then need to add the dedicated control for this feature on your target SharePoint page and configure the relevant display options. Finally, you need to ensure that all SharePoint users have been granted access to the Report, either at a Workspace level (recommended option) or by having the Report shared with them. By implication, in this scenario, all SharePoint users would have to have at least a Power BI Professional license to take full advantage of this functionality. Publishing a Report to Power BI Report Server is mostly the same as if you were to do the same with the online version of the product. Instead of selecting a Workspace to add the Report to, you specify the name of the Report Server folder where the Report will reside. From a development standpoint, the dedicated Power BI Desktop for Power BI Report Server must be used and may differ in functionality from the \u0026ldquo;normal\u0026rdquo; version of the tool. There is also no option to edit a report from within Power BI Report Server like you can through the online version.
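On a related note (and going slightly beyond what the original revision notes cover), publishing to Power BI Online can also be scripted, which is handy when the same Report needs pushing out to several Workspaces as part of a release process. The rough sketch below assumes the MicrosoftPowerBIMgmt PowerShell module is installed; the Workspace name and .pbix path are purely hypothetical, and the cmdlet parameters are worth verifying against the documentation for the module version you have:

# Install-Module -Name MicrosoftPowerBIMgmt -Scope CurrentUser   # one-off module install, if needed

# Sign in to the Power BI service interactively
Connect-PowerBIServiceAccount

# Locate the target Workspace and upload a local .pbix file into it
$workspace = Get-PowerBIWorkspace -Name 'Sales Reporting'
New-PowerBIReport -Path '.\SalesOverview.pbix' -Name 'Sales Overview' -WorkspaceId $workspace.Id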
Configure security for dashboards, reports and apps. Skills Measured May include: Create a security group by using the Admin Portal; configure access to dashboards and app workspaces; configure the export and sharing setting of the tenant; configure Row-Level Security\nRevision Notes Exam 70-778 Revision Notes: Securing Power BI Dashboards, Reports and Apps\nKey Takeaways Workspaces act as a container for the various components that form a Power BI Reporting solution. Within a Workspace, you will find all of the Dashboards, Reports, Workbooks and Datasets that developers have published content to. Each User has a Workspace created for them in Power BI when they first access the service. Additional Workspaces can be added through Office 365 Groups or by installing a Power BI App from AppSource. Dashboards and Reports created within a User\u0026rsquo;s Workspace are shareable with other Users, provided that your account has a Power BI Professional license assigned to it. To help manage permissions to Dashboards/Reports in a more efficient manner, Administrators can create Security Groups on the same Office 365 Tenant where Power BI Online resides. These can contain multiple groups of Users, allowing administrators to minimise the amount of effort involved in managing Dashboard/Report access. Most crucially, this will also enable Users that do not have an Exchange Online mailbox to access Dashboards/Reports when they are shared out in this manner. Administrators have a whole host of options available to them within the Tenant settings area of the Admin Portal. These include, but are not limited to: Export and Sharing Settings Enable/Disable Content Sharing Enable/Disable Publish To Web Enable/Disable Export Reports as PowerPoint Presentations Enable/Disable Print Dashboards and Reports Content Pack and App Settings Integration Settings Custom Visuals Settings R Visuals Settings Audit and Usage Settings Dashboard Settings Developer Settings All of these settings can be enabled for a specific security group, the entire organisation (excepting specific security groups) or allowed for particular security groups, excluding all others in the organisation. Row-Level Security (RLS) allows report developers to restrict data, based on Roles. Row-level DAX evaluation formulas are used to achieve this, which filter the data that is returned, depending on a TRUE/FALSE logic test. To utilise the feature, you must define both the Roles and DAX formulas for each query within your data model. Then, after deploying your Report to Power BI Online, you assign Users or Security Groups to the Role(s) created within Power BI Desktop. It is possible to view the effect of a Role at any time, within Power BI Desktop or Online, via the View As Role functionality. With the wide array of DAX formulas available, including specific ones that return the details for the current user accessing a Report, it is possible to define very granular filtering within a Power BI report, to suit particular security or access models. Configure apps and apps workspaces. Skills Measured May include: Create and configure an app workspace; publish an app; update a published app; package dashboards and reports as apps\nRevision Notes Exam 70-778 Revision Notes: Working with Apps and App Workspaces\nKey Takeaways Workspaces act as a container for the various components that form a Power BI Reporting solution. Within a Workspace, you will find all of the Dashboards, Reports, Workbooks and Datasets that developers have published content to.
Each User has a Workspace created for them in Power BI when they first access the service. It is also possible to create additional Workspaces, either through the Power BI Online interface or by creating an Office 365 Group. A new experience for creating Workspaces is currently in preview which, once released, would negate the need for each Workspace to have an associated Office 365 Group. When creating a Workspace, you can define various settings such as the type of access each user has (read-only or ability to modify its content), its members and whether it requires assignment to a Power BI Premium node. It is not possible to change the access type for a Workspace after creation, but you can freely change its name or modify its membership at any time. The contents of a Workspace can be published as an App, enabling you to expose your solution to a broader audience within or outside your organisation. Once published, users navigate to the Power BI AppSource store for their tenant, which lists all Apps available for installation. Once installed, they will then become visible from within the Apps area of the application. You can update content within an App at any time by republishing its corresponding Workspace. It is also possible to define individual properties within an App, such as its description, access rights and landing page. To install and use Apps, the user in question must have a Power BI Professional license. Additional Preparation Resources The official Microsoft exam reference book is a helpful learning aid for this exam, particularly given that it includes numerous exercises that you can work through to familiarise yourself with different Power BI functionality. There is also an online course available on the edX website which, despite not covering the whole exam syllabus, does provide a useful visual aid and includes a lot of the features you are expected to know for the exam. Finally, nothing beats actually working with the product itself and trying out the various features yourself. Power BI Desktop is a free download and, with access to one of the sample databases provided by Microsoft, you can very quickly provision an environment on your own home computer to enable you to experience everything that Power BI has to offer.\nExams are always a nightmarish experience, both when preparing for them and when you are sat there in the test centre. I hope that this post, and this whole series, proves to be useful in helping with your exam preparation and getting you ready to pass the exam with flying colours 🙂\n","date":"2019-02-17T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-summary-post/","title":"Exam 70-778 Revision Notes: Summary Post"},{"content":"Welcome to the final post in my blog series concerning Microsoft Exam 70-778, where I hope to provide a revision tool for those planning to take the exam or a learning aid for those looking to increase their Power BI knowledge. Last week\u0026rsquo;s post discussed a range of topics about access and security when using Power BI as your Business Intelligence solution. We now move on to the final exam focus area - Configure apps and apps workspaces - which tests candidates against the following subjects:\nCreate and configure an app workspace; publish an app; update a published app; package dashboards and reports as apps\nTo follow through any of the examples described in this post, you should make sure that you have access to a Power BI Professional subscription.
Free sixty day trials are available for this service if required.\nCreating Workspaces We provided a brief overview of the salient concepts behind a Workspace in last weeks post, chiefly from the perspective of the personal Workspaces that each user has access to from within Power BI Online. As mentioned as part of this discussion, additional workspaces can be created on top of this, to allow for a more logical grouping of Power BI content to become accessible within an organisation. There are a few ways to create a Workspace as part of the \u0026ldquo;current\u0026rdquo; experience (more on this later). Workspaces currently take advantage of Office 365 Groups in the background, so creating one of these within the Office 365 Admin Portal will automatically cause a Workspace with the same name to appear within Power BI Online. It is also possible to create all of this from within the Power BI Online interface, by using the Create app workspace button:\nYou have a couple of options that can be specified when creating a Workspace, as illustrated in the screenshot below:\nTo control how content within a Workspace can be interacted with, you can define whether its membership is Public - Anyone can see what\u0026rsquo;s inside or Private - Only approved members can see what\u0026rsquo;s inside. By \u0026ldquo;anyone\u0026rdquo;, this refers to any user account that has been provisioned on your Office 365/Azure Active Directory Tenant. You should ensure that the most appropriate option is chosen when the Workspace is created, as it cannot be changed once defined. Permissions for all members of a Workspace are definable as either write access (Members can edit Power BI content) or read-only access (Members can only view Power BI content) to anything created within a Workspace. If you are fortunate enough to have access to dedicated Power BI Premium capacity, then you can also choose to assign your Workspace to this. Doing this will ensure that the Workspace can benefit from the increased CPU/Memory capacity within your available node(s). Once created, a Workspace becomes accessible from within the Power BI Online interface and also as a choosable location to publish your .pbix files to from within Power BI Desktop:\nIt is also possible to add the following components, either from other Workspaces in Power BI or via upload, from within Power BI Online:\nAs alluded to earlier, the \u0026ldquo;current\u0026rdquo; Workspace experience is being overhauled currently, chiefly meaning that there will no longer be a need to create an Office 365 Group in the background. You can find out more about this new experience by reading the Microsoft Docs article on the very subject which, although still in preview, will eventually be something you need to be aware of for the exam.\nPublishing Workspaces as an App To distribute Power BI content to a broader audience, either internally or externally, you can publish all material that has been deployed out to a Workspace as an App. 
Taking this extra step can help you to simplify any ongoing management of a Workspace as, instead of having to grant access to individual users or Security Groups, you can alternatively create a self-service experience for users across the business, enabling them to install and interact with the Power BI content that is most relevant for their role.\nYou can create an App from within a Workspace by selecting the Publish app option at the top right of the window:\nFrom here, you must define several different settings across the three tabs listed:\nDetails - Allows you to provide a description for the app and also set the Background color that displays to end users, based on a predefined list (i.e. no option to specify a unique hex colour value): Content - Let\u0026rsquo;s you determine the default landing page presented to users when they first load the App. In most cases, it is best to choose a Dashboard, although it is possible to use a Report or have no landing page at all for the App (in this scenario, the user will see a list of all the content included within the App instead). It is also possible to review all of the content that will be published out within the App on this tab: Access - Here, you can define who within the organisation can install the App - all users within your Office 365/Azure Active Directory tenant (Entire organization) or a specified list of Users or Security Groups (Specific individuals or group). There is no option to define more granular permissions (e.g. modify the contents within an App), which makes sense given the intended business requirement Apps serve. In most cases, you will probably leave the Entire organization option selected: With all settings defined, you can then click the Finish button to get your App published out. Depending on the size of your datasets and Reports, this may take a while, as the below confirmation step advises:\nAfter successfully publishing your App, you are presented with a URL link to it and also have the option to navigate straight to it so that you can verify its contents:\nApps, once installed, can also be accessed from the Apps button on the left-hand pane within Power BI Online:\nFrom an end-users perspective, they would then interact with the newly published App by either:\nAccessing and installing the app using the URL generated in the previous example. Navigating to the My organization tab within AppSource and pressing the Get it now button under the corresponding App: Apps can be updated with new or modified content at any time, without the need for users to install them again. All you would need to do is update the corresponding Workspace and then select the Update app button at the top left of the Workspace window:\nThe process of re-publishing the App is then identical to when you first publish it out.\nISV developers who have put together a range of Power BI content that integrates with their solution can also publish Apps to the public AppSource store, allowing any Power BI user the world over to install and utilise the content contained within. The process involved here involves several hoops and is not something you need to worry about for the exam.\nKey Takeaways Workspaces act as a container for the various components that form a Power BI Reporting solution. Within a Workspace, you will find all of the Dashboards, Reports, Workbooks and Datasets that developers have published content to. Each User has a Workspace created for them in Power BI when they first access the service. 
It is also possible to create additional Workspaces, either through the Power BI Online interface or by creating an Office 365 Group. A new experience for creating Workspaces is currently in preview which, once released, would negate the need for each Workspace to have an associated Office 365 Group. When creating a Workspace, you can define various settings such as the type of access each user has (read-only or ability to modify its content), its members and whether it requires assignment to a Power BI Premium node. It is not possible to change the access type for a Workspace after creation, but you can freely change its name or modify its membership at any time. The contents of a Workspace can be published as an App, enabling you to expose your solution to a broader audience within or outside your organisation. Once published, users navigate to the Power BI AppSource store for their tenant, which lists all Apps available for installation. Once installed, they will then become visible from within the Apps area of the application. You can update content within an App at any time by republishing its corresponding Workspace. It is also possible to define individual properties within an App, such as its description, access rights and landing page. To install and use Apps, the user in question must have a Power BI Professional license. We\u0026rsquo;ve come to the end of the series (at last!) and covered a LOT of content relating to Power BI. I\u0026rsquo;ll be doing a roundup post next week, where I\u0026rsquo;ll try and group together all of the posts into the single location, alongside some general exam preperation tips. If you\u0026rsquo;ve been following the series, thank you for reading and I hope that it has proved to be a useful learning tool 🙂\n","date":"2019-02-10T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-working-with-apps-and-app-workspaces/","title":"Exam 70-778 Revision Notes: Working with Apps and App Workspaces"},{"content":"Welcome to the twelfth and penultimate post in my blog series concerning Microsoft Exam 70-778, where I hope to provide a revision tool for those planning to take the exam or a learning aid for those looking to increase their Power BI knowledge. We\u0026rsquo;re on the home stretch now and, after reviewing last week the various options available to publish Power BI Reports both online and on-premise, we now take a deep dive into some vital security concepts as part of the Configure security for dashboards, reports and apps theme, which covers the following skill areas:\nCreate a security group by using the Admin Portal; configure access to dashboards and app workspaces; configure the export and sharing setting of the tenant; configure Row-Level Security\nBefore exploring these topics further, however, it is essential to outline a concept that this series has continually skated around - Power BI Workspaces.\nWorkspace Overview We\u0026rsquo;ve seen so far in the series how it is possible to deploy Power BI Desktop Reports into Power BI Online. As part of this process, you must define a Workspace where your Reports and Datasets will appear after publishing. There are three types of Workspaces:\nMy Workspace - Each user, by default, will have a personal Workspace, which cannot be deleted or changed. Office 365 Group Workspace App Workspace Workspaces are, for the most part, a logical grouping of all the components that you need to start building out a BI solution. 
They are worked with from within Power BI Online only (meaning that they do not exist as part of Power BI Report Server) and can be interacted with from the left-hand pane within Power BI Online:\nAs indicated above, each user\u0026rsquo;s Workspace can contain:\nDashboards - These are created within the Power BI service, as we saw a fortnight ago. Reports - These are built out in Power BI Desktop or uploaded directly into Power BI Online from a .pbix file. Workbooks - These will show a list of Excel workbooks that have been uploaded into Power BI, allowing you to leverage an existing solution built out using Excel PivotTables/PivotCharts almost immediately through Power BI. For this exam, it is not something you need to worry about necessarily, but be aware this topic does crop up within Exam 70-779. Datasets - Contains a list of all data models uploaded/created for Power BI Reports or Workbooks. It is possible to share out Dashboards and Reports to other Users/Security Groups, and we will see how this can be done with the example later on in this post. One consideration to bear in mind is that, when sharing a Report, this does not share out any Dashboards that reference it. Content shared to you will become visible within the Shared with me tab on Power BI Online:\nNext week\u0026rsquo;s post will go into further detail on how to create and manage Workspaces, and how to handle access to App Workspaces.\nOffice 365 Security Groups Those who have experience administering on-premise Active Directory (AD) domains will have full familiarity with the concept of Security Groups - logical containers of Users that can then be assigned file/folder privileges, group policy objects and access to other principals on the domain. Given that Power BI uses Azure Active Directory (AAD) as its identity provider, the same kind of concepts come into play when you start to determine how to manage access to Power BI Dashboards, Reports and Datasets for specific groups of Users. Office 365 Security Groups are virtually identical to their on-premise AD equivalent; the primary difference being that Administrators must create them from within the Office 365 Admin Center. It is also possible to add them through Microsoft Azure, so your choice here really comes down to preference. In either case, you must ensure that you have the relevant administrator privileges on your AAD tenant to create and manage them. Once created and defined with your required list of Users, they then become available as a shareable object within Power BI Online.\nIn the example towards the end of this post, we will walk through how to create a Security Group and how this can then be used to share out a Dashboard.\nManaging Export and Sharing Settings With the introduction of GDPR last year, data privacy remains a paramount concern for organisations. These concerns can often come into conflict with new functionality that technology solutions can offer, such as the ability to export a Power BI Report as a PowerPoint presentation. To help with these considerations and in line with Microsoft\u0026rsquo;s overall commitments from a GDPR standpoint, Power BI Online provides several options that allow you to granularly define various actions that Users can and cannot do, such as using custom visuals in reports, accessing audit/usage information and the ability to use preview features, such as dataflows. 
The list of settings most relevant to data sharing can be found under the Export and sharing settings section on the Admin Portal -\u0026gt; Tenant settings area of Power BI Online:\nEach of the listed features can be enabled or disabled globally on the Office 365 tenant. Alternatively, by utilising Security Groups, you can grant or curtail specific functionality to a group/department within an organisation. You have no option to specify individual User access as part of this, so it becomes a requirement to have your required Security Groups defined within Office 365 before you can start working with this feature.\nRow-Level Security Granting global allow/deny privileges at a Report level may not be sufficient for specific business requirements. It could be, for example, that a single Sales report is in place for both junior and senior sales professionals, and there is a need to only present data that is most relevant to their role. Or perhaps there is a need to show data that has the most relevance for an individual\u0026rsquo;s particular geographic region. In these situations, and to avoid a scenario where you would have to define separate queries to segregate this data appropriately, Row-Level Security (or RLS) becomes a significant asset. It allows you to set Roles linked to DAX expressions, which tell Power BI which data to show to a particular group of Users.\nThere are two steps involved as part of implementing RLS. First, you must create a Role that defines the list of privileges for one or multiple Users. This step can be achieved by navigating to the Modeling tab within Power BI Desktop and selecting the Manage Roles button, which will open the appropriate dialog window:\nNext, you must define a DAX Expression for each table that requires filtering as part of the Role. These can be set up for as many Tables as you like, but the critical thing to remember is that the DAX Expression must evaluate to a TRUE/FALSE result. The example below - which evaluates [TotalSales] \u0026gt;= 500 and therefore returns TRUE or FALSE depending on whether the TotalSales value on a row is greater than or equal to 500 - meets this requirement:\nWith the Role defined, it is then possible to test it locally from within Power BI Desktop by using the View as Roles button to select your corresponding Role:\nWith everything built out and working with Power BI Desktop, the second step is to publish your Report to Power BI Online and then assign the Role to Users or a Security Group by navigating to the Dataset in question:\nIt is also possible to use the Test as role feature within Power BI Online, which behaves identically to its Desktop client equivalent:\nTo help leverage additional functionality out of RLS, Microsoft provides the following two DAX functions:\nUSERNAME() - Returns the domain and user name of the User accessing the Report. For Desktop Reports, this will typically be in the format DOMAIN\username; when viewing the Report online, the value rendered instead will either be the user\u0026rsquo;s email address or onmicrosoft.com account name. USERPRINCIPALNAME() - Returns the User Principal Name (UPN) of the User accessing the Report. The UPN will almost always be the user\u0026rsquo;s email address or, in some cases for Power BI Online, their onmicrosoft.com user account name. By using these functions in tandem with IF DAX constructs, you have the additional capability to restrict access to specific data, based on user account names - for example, by comparing an email address column within your data against the value returned by USERPRINCIPALNAME(). 
All in all, RLS is a powerful feature to have at your disposal but, as highlighted in last week\u0026rsquo;s post, you should be aware of its limitations. RLS is incompatible when there is a need to Publish to web a report and the feature is also not available if you are querying a SQL Server Analysis Services data source via a live connection.\nExample: Sharing a Power BI Dashboard with a Security Group The steps that follow will show how to create an Office 365 Security Group and then share out a Dashboard to it from within Power BI. To complete the steps outlined, you should ensure that you are assigned either the Global administrator or User management administrator role in Office 365 and that your user account has a Power BI Professional license:\nNavigate to the Admin Center within Office 365, expand the Groups tab on the left-hand pane and select Groups: The main window should refresh, displaying the list of Groups setup on the AAD tenant. Select the Add a group button: Define the settings as indicated in the below screenshot, making sure that the Type selected is Security, and press Add: You will then receive confirmation that the Security Group was added successfully to your tenant: With the main Groups window, select the new Security Group and then click the Edit button on the right-hand details pane: Then, select the Add members button to add in the required list of Users: Press Save to commit any changes: Navigate back to Power BI Online and to the Dashboard that needs sharing. Select the Share button at the top right of the screen: Within the Share dashboard pane, begin typing in the name of the Security Group created in the previous steps. Power BI will automatically detect and auto-complete the name of the group for you. Before pressing the Share button, you can also include a custom message to recipients that will be sent via an email and also toggle whether they will also be able to Share the dashboard themselves. A URL link generates at this point as well, allowing you to copy/paste this into an email, IM message etc.: Once the Dashboard is Shared, you can then navigate to the Access tab to review the list of Users/Security Groups that have access to your Dashboard. It is also possible to modify their access levels or remove access entirely by clicking on the ellipses button next to each Name: Key Takeaways Workspaces act as a container for the various components that form a Power BI Reporting solution. Within a Workspace, you will find all of the Dashboards, Reports, Workbooks and Datasets that developers have published content to. Each User has a Workspace created for them in Power BI when they first access the service. Additional Workspaces can be added through Office 365 Groups or by installing a Power BI App from AppSource. Dashboards and Reports created within your a Users Workspace are shareable to other Users, provided that your account has a Power BI Professional license assigned to it. To help manage permissions to Dashboards/Reports in a more efficient manner, Administrators can create Security Groups on the same Office 365 Tenant where Power BI Online resides. These can contain multiple groups of Users, allowing administrators to minimise the amount of effort involved in managing Dashboard/Report access. Most crucially, this will also enable Users that do not have an Exchange Online mailbox to access Dashboards/Reports when they are shared out in this manner. 
Administrators have a whole host of options available to them within the Tenant settings area of the Admin Portal. These include, but are not limited to: Export and Sharing Settings Enable/Disable Content Sharing Enable/Disable Publish To Web Enable/Disable Export Reports as PowerPoint Presentations Enable/Disable Print Dashboards and Reports Content Pack and App Settings Integration Settings Custom Visuals Settings R Visuals Settings Audit and Usage Settings Dashboard Settings Developer Settings All of these settings can be enabled for a specific security group, the entire organisation (excepting specific security groups) or allowed for particular security groups, excluding all others in the organisation. Row-Level Security (RLS) allows report developers to restrict data, based on Roles. Row-level DAX evaluation formulas are used to achieve this, which filters the data that is returned, depending on a TRUE/FALSE logic test. To utilise the feature, you must define both the Roles and DAX formulas for each query within your data model. Then, after deploying your Report to Power BI Online, you then assign Users or Security Groups to the Role(s) created within Power BI Desktop. It is possible to view the effect of a Role at any time, within Power BI Desktop or Online, via the View As Role functionality. With the wide-array of DAX formulas available, including specific ones that return the details for the current user accessing a Report, it is possible to define very granular filtering within a Power BI report, to suit particular security or access models. Putting some thought into Power BI\u0026rsquo;s security and access components early on when developing your solution will allow you to best take advantage of features such as RLS, which then benefit further when utilised alongside the other functionality described this week. The final post in the series next week will provide a more detailed description of Workspaces and how these can be used to create Apps for both internal and external consumption.\n","date":"2019-02-03T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-securing-power-bi-dashboards-reports-and-apps/","title":"Exam 70-778 Revision Notes: Securing Power BI Dashboards, Reports and Apps"},{"content":"Welcome to the eleventh post in my blog series concerning Microsoft Exam 70-778, where I hope to provide a revision tool for those planning to take the exam or a learning aid for those looking to increase their Power BI knowledge. In last week\u0026rsquo;s post, we covered a broad spectrum of topics ranging from Dashboards through to integrating on-premise data sources within Power BI Online. Dashboards are just one means of consuming published Power BI Reports, with a few additional options also available to help us include Power BI content within existing websites, intranet or on-premise deployments. The Publish and embed reports exam topic covers this, focusing on the following skill areas:\nPublish to web; publish to Microsoft SharePoint; publish reports to a Power BI Report Server\nLet\u0026rsquo;s dive into each of these specific areas of functionality and how to utilise them most effectively.\nPublish to Web The ability to publish Power BI content to any location on the web can be incredibly beneficial. Users can access Power BI information without needing to log into Power BI Online or even, necessarily, be licensed, allowing you to quickly include Reports as part of an existing website, blog or application. 
Although there is some trade-off from a functionality perspective, you can expect to do most of the things you\u0026rsquo;d expect a Power BI Report to do as standard. Keep in mind, though, that it\u0026rsquo;s only possible to publish Reports to the web, not Dashboards or Datasets. To do this, you navigate to the relevant Report within Power BI Online and select the File -\u0026gt; Publish to web option:\nAt this stage, note that you will be presented with additional dialog box prompts, indicated below, which emphasise two critical points to remember regarding the Publish to web option:\nANYONE on the web can access your Report once published, and there is also a high likelihood that crawler bots, including search engines, will index and include your Report as part of search results. To ensure that only users within your organisation can access your report, consider using the Embed option as opposed to Publish to Web, which is available for users assigned a Power BI Professional license. If you wish to embed reports for internal users to access, then this is technically not allowed under the Power BI Free license. To use this functionality in a compliant manner, you must ensure that all internal users accessing the report are assigned a Power BI Professional license. Once you have worked through the various T\u0026amp;C\u0026rsquo;s, a URL and embed code are generated for you to use. A handy feature for less-technical users is the ability to modify the size of the IFrame, based on the options defined below:\nYou can then embed the IFrame snippet within any HTML page. For example, the below snippet would load a basic HTML document, with the Report fully rendered for access:\n\u0026lt;html\u0026gt;\n\u0026lt;p\u0026gt;\u0026lt;b\u0026gt;iFrame Test\u0026lt;/b\u0026gt;\u0026lt;/p\u0026gt;\n\u0026lt;iframe width=\u0026#34;933\u0026#34; height=\u0026#34;700\u0026#34; src=\u0026#34;https://app.powerbi.com/view?r=aBjFVit61EihycCDLLtDAlMJeVQICkMIwvEFSbgosnAFFjy4kctLkRzqVxB3xZHfsNMO4GDDbzxGSbLfycHKUSEytFTe16sULAe61SsXzPdA2hhGWiTx7YTwf4o6\u0026#34; frameborder=\u0026#34;0\u0026#34; allowFullScreen=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/iframe\u0026gt;\n\u0026lt;/html\u0026gt;\nThe rendered Report behaves just as you would expect any other Power BI Report to. You can navigate between pages, move your cursor over visualizations, drill up/down through them, view their tooltips and even apply filters. However, you are unable to see or export any underlying data or export the Report as a PowerPoint document/PDF. You also cannot Publish to web a report that:\nHas R visuals. Uses Row-level security. Has an on-premise SQL Server Analysis Services Tabular data source. Is shared with you. Putting aside the paramount data privacy and licensing concerns associated with this feature, it does nevertheless present a nice way of sharing out Power BI content for external users to access.\nPublish to SharePoint Intranet sites represent a potential deployment option for Power BI content, and the Publish to SharePoint Online option helps to accommodate this. 
Available for Power BI Professional subscriptions and online SharePoint deployments, this feature works similarly to Publish to web and can be found nestled just next to this option on the File tab:\nOnce selected, Power BI will then generate a URL link, as indicated below:\nYour next step will then be to navigate to the SharePoint page that you wish to embed the Report onto and add the appropriate control onto the page:\nThen, populate the proper details on the right-hand pane when it appears:\nNote that you have a few options here to help customise how the Report renders. You can specify a default page, the page ratio and also toggle whether the Navigation or Filter panes are displayed.\nAn important thing to remember when working with this feature is to ensure that all SharePoint users who need to access the Report have been granted the relevant access permissions within Power BI; merely adding the Report into SharePoint will not automatically do this for you, and users will instead get an error message. The quickest way to grant these privileges is to ensure that all users are part of the Workspace/Office 365 Group where the Report resides. Alternatively, you can also specifically Share the Report out to all required users, but this could prove cumbersome to manage for larger deployments.\nPublish to Power BI Report Server Organisations that are not quite ready to start using Power BI within the cloud-based offering can still potentially benefit from all the goodness that the solution has to offer. If you have an active SQL Server Enterprise license agreement with Software Assurance or a Power BI Premium subscription, then Microsoft allows you to download a fully licensed copy of Power BI Report Server. This application is, in effect, an on-premise version of Power BI Online, providing sufficient feature parity alongside some nice little extras thrown in. From a technical standpoint, the solution is a retooled version of SQL Server Reporting Services (SSRS) and, as a consequence, you have the full capability to deploy out SSRS reports onto Power BI Report Server. Functionality like this may represent a significant boon, allowing organisations to leverage their existing reporting solutions while enabling them to start developing new reports using Power BI as well. Microsoft also provides a Developer edition of the application, which is really neat. 🙂\nPublishing a Report to Power BI Report Server has some significant differences when compared to the Online version. Because the solution does not benefit from the same release cadence as the online offering, you must use a separate Power BI Desktop application when developing for Power BI Report Server. The latest release from January 2019 (at the time of writing this) is available for download, but will not necessarily have the same monthly release cycles compared with the \u0026ldquo;normal\u0026rdquo; Power BI Desktop application. 
As such, be aware that you may encounter issues working with Reports developed between both Desktop applications and, the general recommendation would be to ensure these are developed separately wherever possible.\nWhen getting a Report deployed out to Power BI Report Server, you must somewhat counter-intuitively navigate into the File -\u0026gt; Save as area of the application, where the appropriate option becomes visible:\nThen, when prompted, populate the URL field with the Web Portal URL value (derived from the Report Server Configuration Manager application):\nNext, specify the name of the folder where the Report will be saved. In the example below, there are no folders set up, so the Report will be deployed out to the default, root folder on the instance:\nIt may then take some time for the Report to publish out successfully\u0026hellip;\n\u0026hellip;at which point, you can then take a look at how the Report looks within the application. As the screenshot below indicates, the experience here is virtually identical when compared to Power BI Online:\nIf you ever need to make changes to your Report, you would have to revert to Power BI Desktop, make the appropriate changes and then repeat the steps above. Unlike Power BI Online, there is no option to modify reports from within a web browser. A small trade-off but, as we can see, Power BI Report Server does provide a complete Power BI experience that is tailorable to an organisation\u0026rsquo;s specific infrastructure/hosting requirements.\nKey Takeaways The Publish to web option allows for non-licensed, external users to view a Power BI Report in its entirety. A URL and IFrame embed code can be generated for this at any time within the portal and then dropped into virtually any website. Although you will lose some functionality when deploying a Report out in this manner, you can expect that users will be able to perform most types of interactions with visualizations, Report pages and other components, as if they were accessing the Report through Power BI Online. In some cases, you may be unable to use the Publish to web option if your Report uses certain kinds of features, such as R Visuals or row-level security. You must also take into account any privacy or data protection concerns, as Reports deployed out in this manner will be publically accessible; where this is an issue, the Embed option is available as a secure alternative. There are three steps involved if you wish to add a Report to SharePoint. First, you must generate the unique SharePoint embed URL within Power BI. Secondly, you then need to add on the dedicated control for this feature on your target SharePoint page and configure the relevant display options. Finally, you then need to ensure that all SharePoint users have been granted access to the Report, either at a Workspace level (recommended option) or by having the Report shared with them. By implication, in this scenario, all SharePoint users would have to have at least a Power BI Professional license to take full advantage of this functionality. Publishing a Report to Power BI Report Server is mostly the same as if you were to do the same with the online version of the product. Instead of selecting a Workspace to add the Report to you, specify the name of the Report Server folder where the Report will reside. From a development standpoint, the dedicated Power BI Desktop for Power BI Report Server must be used and may differ in functionality from the \u0026ldquo;normal\u0026rdquo; version of the tool. 
There is also no option to edit a report from within Power BI Report Server like you can through the online version. We saw previously how Power BI Embedded presents a means of integrating Power BI visualizations as part of a bespoke application, but the options offered in this post today do represent a potentially quicker and simpler alternative to achieving the same ends. In this series penultimate post next week, we will dive deeper into the various security options available as part of Power BI, to help ensure that both reports - and any underlying data - can be \u0026ldquo;hardened\u0026rdquo; to suit a variety of different scenarios.\n","date":"2019-01-27T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-publish-and-embed-reports/","title":"Exam 70-778 Revision Notes: Publish and Embed Reports"},{"content":"Welcome to my tenth post in a blog series aimed to provide a revision tool for Microsoft Exam 70-778, and for those looking to increase their expertise in Power BI. In last week\u0026rsquo;s post, we explored the possibilities developers have to leverage Power BI within their applications and how the Power BI API relates to all this. As we now get into the home stretch, this weeks post combines two exam areas into one. The first topic, all concerning how to Access on-premises data with Power BI, covers the following skills areas:\nConnect to a data source by using a data gateway; publish reports to the Power BI service from Power BI Desktop; edit Power BI Service reports by using Power BI desktop.\nThen, we\u0026rsquo;ll look at how to Configure a dashboard and do the following with them:\nAdd text and images; filter dashboards; dashboard settings; customize the URL and title; enable natural language queries\nPower BI Gateway A typical obstacle that can prevent an organisation from wholly adopting a cloud solution is the need to retain existing on-premise systems. A myriad of potential concerns may be involved here, such as regulatory or contractual arrangements. As a Software as a Service (SaaS) offering, Power BI is no different in this regard, thereby presenting potential obstacles for its adoption. The online element of the solution is designed to interact with data sources that are \u0026ldquo;in-cloud\u0026rdquo;, such as Azure SQL databases or Dynamics 365 Customer Engagement. If, for example, you need to access a data warehousing solution residing within an on-premise IBM DB2 instance, this is not necessarily something that can be \u0026ldquo;opened up\u0026rdquo; for access.\nTo address these concerns in a streamlined manner, Microsoft makes available the On-Premise Gateway application, which provides an easy to install and configure way of interacting with on-premise data sources. Built initially with Power BI in mind, the solution is now extended to support by the various services that make up the Power Platform, such as Microsoft Flow and PowerApps. 
The list of supported on-premise data sources is extensive, covering well-known systems such as:\nSQL Server Active Directory SharePoint MySQL IBM DB2 A general recommendation when installing the gateway is that the target machine should remain consistently online and have the proper access to all of your on-premise data sources.\nTo get up and running with the on-premise gateway, you must first download and install the corresponding client from Power BI online, by clicking on the Download button in the top right area of the application (the icon that has a downward arrow and a line):\nClicking this will navigate you to the dedicated information page for the Gateway (which is accessible via a direct link too), where the very obviously placed button will let you download the application:\nOnce downloaded and installed, some additional configuration is then required:\nYou must specify the type of gateway to deploy - either one that uses the Recommended configuration or one configured for Personal Mode. Recommended will be the one that you generally need to go for, with Personal Mode only being useful for testing purposes or when developing on a local machine. You will need to sign into the Office 365 tenant that contains the Power BI Subscription for your new gateway, making sure that the account in question has sufficient privileges to register one. Once successfully signed in, you can then chose to deploy a new gateway or a replacement one: Next, you will need to define a name for your gateway and also a recovery key. Make sure this is noted down, as you will require it if the gateway requires modification in future. The application should also determine the most appropriate Azure region to associate itself with, but this can be adjusted if needed. Keep in mind though that doing so may make it incompatible with certain services: With everything configured correctly, you can then confirm the status of the gateway by navigating to the appropriate tab on the application:\nIts successful creation can then be verified by going into Power BI Online, where it should be visible in the Manage gateways settings area:\nOnce implemented, you can then press the ADD DATA SOURCE button to start adding your on-premise connections. The screenshot below shows an example of how to configure access to the WideWorldImporters sample database on an on-premise SQL Server instance:\nThe gateway is an essential tool to have within your arsenal if you are attempting to leverage the benefits of Power BI Online more quickly, but find yourself restrained by existing, on-premise data sources; once deployed, it straightforwardly allows you to have the best of both worlds.\nPublishing Reports Although it is perfectly acceptable for users to work and interact with their Reports from within the Power BI Desktop application, Reports that typically require more general consumption or utilisation by non-technical audiences will require deploying out to either a) Power BI Online or b) a Power BI Report Server instance. 
The steps involved in the second one are a little different (and require access to a separate Power BI Desktop application), but for the first route, it is merely just a case of clicking on the Publish icon on the main application ribbon once your Report is ready:\nIf you have access to several Workspaces within Power BI, you must first specify which workspace the Report will appear in after publishing:\nDepending on the size of your Report, this may take some time to upload, but a helpful dialog box will keep you informed on progress:\nOnce published, you will be provided with a hyperlink that opens the Report within your web browser, where both a new Report and corresponding Dataset will exist:\nOnce deployed, a Report can be modified in one of three ways:\nBy updating the existing Report within Power BI Desktop and then re-publishing your changes; all current Reports/Datasets will then refresh accordingly. If the original .pbix file is unavailable on your local machine, then you can go to File -\u0026gt; Download report (Preview) to get a copy of the file: This can then be modified and re-published in the manner already described. From directly within the Browser. This route provides a similar experience to Power BI Desktop but optimised for browser use. I would not generally recommend altering reports in this manner, especially if your Reports are subject to strict version control policies. Working with Dashboards Power BI Reports are similar to books in many ways, in that they cater towards more detailed analysis and precise drill-down capability. For situations where executive or senior level individuals require an at a glance view of the information that is most relevant for their needs, a Dashboard represents the optimal choice to include the data that interests them the most, while still allowing them to drill-down further if required. They also afford additional functionality that assists from a presentation standpoint, by supporting the ability to include images, hyperlinks, videos and even custom streaming datasets.\nIt is not possible to create Dashboards from within Power BI Desktop. Instead, you must generate them from a Workspace within Power BI online by selecting the + Create button at the top right of the screen, choosing the corresponding Dashboard option and by finally providing an (ideally descriptive) name for it:\nYou have many different options available when working with a Dashboard, either from an administrative or end-user perspective and these are all accessible from the Dashboard ribbon, shown below and explained in the bullet points that follow:\nAdd tile: Adds various tiles onto your Dashboard, such as web content, videos, custom streaming data sources, images or text boxes. Comments: Displays and lets you add comments to the dashboard, which can be viewed by all other users who have access to the Dashboard. View related: Shows a list of all Reports and Datasets that are associated with the Dashboard. Set as featured: When selected, Power BI will always display this Dashboard first when you login. Favorite: Adds the Dashboard to your favourites list. Subscribe: Allows you to configure email alerts whenever a refresh of data occurs on the dashboard, which will include the Dashboard content and a link to access it. Multiple subscriptions can be set up in this manner. This feature is only available if you have a Power BI Professional subscription. Share: Allows you to share data with other users within or outside your organisation. 
Once shared, you can then define the access level that these users have - Read or Read and reshare. Access can be revoked at any time. This feature is only available if you have a Power BI Professional subscription. Web view: Lets you toggle between either a Web View or Phone View of the Dashboard, allowing you to verify how the Dashboard renders on different device types. Dashboard theme: Enables you to change the current Theme for the Dashboard. New themes can also be defined using the Custom Theme designer or by uploading a JSON file. If you have already designed your own custom Report theme file, as discussed earlier in this series, then this can be uploaded here too. Duplicate dashboard: Creates a copy of the Dashboard within the current workspace, using a name you specify. Print Dashboard: Lets you print the currently displayed Dashboard to a physical printer or PDF. Refresh dashboard tiles: Forcibly updates all dashboard tiles to return the latest available data. Performance inspector: This will display some KPI type recommendations relating to your Dashboard, advising on elements such as network latency, choice of tiles and other information which is useful when fine-tuning Dashboard performance. Settings: Lets you modify settings for the Dashboard, such as Q\u0026amp;A capabilities, whether Comments are allowed or the default behaviour when moving tiles around. The Dashboard can also be renamed here. As mentioned already, accessing the + Add tile button on the ribbon shows all the options available for adding content onto your Dashboard:\nThe next few sections will primarily focus on the three most used options - visualizations from a report, text boxes and images.\nVisualizations Any Report visualization can be pinned to a Dashboard, provided that they exist in a report within the same Workspace. To do this, navigate to the visualization in question and click on the pin icon in its top right corner. You\u0026rsquo;ll be asked to specify what theme is used for the visualization and also which Dashboard to add it to. Once decided and pinned, you can then navigate to your dashboard to see how this looks:\nIf there is a requirement to filter a visualization first before pinning it to a Dashboard, then you must do this within the Report, as there is no option to filter visualizations after they are embedded as a tile. A great alternative to get around this is discussed on the PowerDAX website, which involves the use of slicers.\nText Boxes This tile type should be reasonably self-explanatory, but it is worth highlighting the additional options available here:\nTitles/subtitles can be specified and, optionally, displayed. Text can be rendered using rich-text editor capabilities. It is possible to add a hyperlink to an external link, another Dashboard or a Report page, that redirects users accordingly after clicking the tile. Images The list of available properties when configuring an image tile is mostly the same as text boxes. The picture you wish to display must derive from a Web URL; there is no option to upload an image file. A good candidate for hosting any image could include a publically available OneDrive folder or an Azure Blob Storage location:\nRearranging Dashboard Content It is possible to drag, drop, shorten and widen tiles on a Dashboard at any time. 
The two screenshots below show how it is possible to do this to suit any potential layout requirement you may have:\nKey Takeaways\nThe Power BI On-Premise Gateway provides a streamlined route to working with non-cloud data sources within Power BI, Microsoft Flow and PowerApps. As a lightweight and easy-to-configure client application, it supports a wide variety of data sources, making them accessible as if they were in the cloud. Once set up, corresponding Data Sources are then made available for configuration and for leveraging as part of any Power BI Dataset. Reports can be published into Power BI Online, meaning that they become accessible online and to a broader group of users, without requiring access to Power BI Desktop. Reports need deploying into a Workspace, which can be created manually or derived from an Office 365 Group. Each Report contains a corresponding Dataset, where all queries defined within Power BI Desktop exist. Reports that already exist on Power BI Online can be updated by just publishing a new version of the Report from Power BI Desktop. It is also possible to modify Reports from directly within the browser and by downloading a copy of the .pbix Report file as well, which can then be altered and re-published. Dashboards provide a means of grouping together various content as tiles, designed for at-a-glance analysis and optimal end-user experience. The list of content that can be pinned to a Dashboard includes: Visualizations Web content Images Text boxes Videos Custom streaming data Pinned content can be re-arranged on Dashboard via drag and drop functionality. It is also possible to resize tiles to any given height/width. Within the settings of a Dashboard, it is possible to enable/disable features such as natural language queries (Q\u0026amp;A\u0026rsquo;s) and Notes. Some features of a Dashboard are only available if you have a Power BI Professional subscription, such as sharing and email subscriptions. We\u0026rsquo;ve covered a lot in today\u0026rsquo;s post and jumped around two distinct functionality areas within Power BI, both of which have arguable importance in their own rights. Next weeks post will hopefully be a lot easier to digest, as we evaluate the options available to publish Power BI Reports to a variety of locations, such as for public access or within SharePoint.\n","date":"2019-01-20T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-report-publishing-on-premise-gateway-creating-dashboards/","title":"Exam 70-778 Revision Notes: Report Publishing, On-Premise Gateway \u0026 Creating Dashboards"},{"content":"Welcome to post number nine in my series designed to provide a revision tool for Microsoft Exam 70-778, and for those looking to increase their expertise in Power BI. The topics we have covered so far in the series have all involved Power BI Desktop primarily, and we now move away from this as we evaluate how to Manage Custom Reporting Solutions with Power BI. This focus area for the exam measures the following skill areas:\nConfigure and access Microsoft Power BI Embedded; enable developers to create and edit reports through custom applications; enable developers to embed reports in applications; use the Power BI API to push data into a Power BI dataset; enable developers to create custom visual.\nDespite being, at first glance, a very technically focused area, it is not necessarily a requirement for the exam to know how to work with these features in-depth. 
However, what this post will try to do is fully explain what Power BI Embedded is (and how it can be tailored accordingly), the capabilities and benefits of the Power BI API and, finally, what options you have available to build custom visualizations, that are then available for use across any Power BI Report.\nPower BI Embedded A potential limitation of using Power BI as your Business Intelligence solution is that you must access your reporting solution through one of the two interfaces, depending on how you have licensed the product:\nThrough the Power BI Online portal. By accessing a locally/cloud deployed Power BI Report Server instance. For strictly organisational only access, this is all fine and dandy; but if you desire to grant external users access to your reports, it would be necessary to open up a door into a critical component of your IT infrastructure, often in tandem with any other systems your solution may contain. For example, if you have developed a support portal for your customers to submit cases with and wish to provide them with a range of Power BI visualizations, you would need to grant and deploy access to two, separate application instances - your support portal and Power BI Online/Report Server. This can lead to a jarring end-user experience and severely limit your capabilities in providing a unified, bespoke and branded solution.\nPower BI Embedded seeks to address this scenario by providing application developers the means to embed Power BI reports and visualizations directly within their existing applications. The experience this offers is seamless, to the extent that end-users will not even need to know that you are using Power BI at all. Consequently, this potentially allows you to look exceptionally gifted when you begin to deploy engaging and highly interactive visualizations into your application quickly. As an Azure-based service with a pricing mechanism to suit, you only need to suitably estimate your potential compute capacities, deploy your capacity and any corresponding reports and then build out the appropriate link to your Power BI content within your application.\nTo get started with using Power BI Embedded, you need to make sure you have the following at your disposal:\nAn Azure Active Directory tenant, with a corresponding Azure subscription. Users who require the ability to develop reports must have a Power BI Professional licenses assigned to them. Some configuration within Azure and Power BI will be required, namely, the setting up of an App Registration and a Workspace within Power BI Online. Microsoft provides an easy to follow tutorial article, that thoroughly explains all of the steps needed to configure this successfully. Access to Power BI desktop and a report that you wish to work with, which needs publishing into a relevant Workspace on Power BI Online. To get a feel for the capabilities on offer as part of this offering, you can go to the Power BI Embedded Playground, made available courtesy of Microsoft. This tool allows you to test how the various Power BI Embedded components render themselves, tweak around with their appearance and generate working code samples that are usable within your application. The screenshot below shows an example of how a single Report visual would look when embedding it into a bespoke application:\nAs the screenshot indicates, there is no loss in core functionality when consuming Power BI in this manner. 
You can hover over individual areas to gain insights into your data; you can drill-down through the data; data is sortable in the conventional manner; and, finally, you can view the underlying data in the visualization or even export it out into an Excel/CSV document. Also, you have extensive options available that can be used to modify how a visual, report etc. is rendered on your application page, allowing you to ensure that all rendering completes most optimally for your application.\nAll in all, Power BI Embedded represents a significant boon for application developers, enabling them to very quickly leverage the extensive reporting capabilities Power BI provides, all of which is cloud-based, highly scalable and minutely tailorable. It is important to highlight that all of this goodness comes with a potentially high cost, namely, that of requiring a sufficiently proficient application developer (preferably .NET focused) to join all of the various bits together. But, if you are already in the position where you have developed an extensive range of Power BI reports for consumption by your customer base, Power BI Embedded is the natural progression point in turning your solution into a real piece of intellectual property.\nThe Power BI API If you are finding your feet with Power BI Embedded and need to look at carrying out more complex actions against Power BI content that is pulling through from a workspace, then the API is an area that you will need to gain familiarity in working with too. Microsoft exposes a whole range of functionality as part of the Power BI API, that can assist in a wide variety of tasks - such as automation, deployment and allowing any bespoke application to further leverage benefits out of their Power BI embedded solution. Some examples of the types of things you can do with the API include:\nOverriding the connection details for a data source to an alternate location and patching the credentials of these accordingly. For example, you could have an automated deployment script in AzureDevOps that can manage the deployment of a report from development, to UAT, through to Production. Create a new data source within Power BI and add data to it. The ability to insert data in this manner in no doubt useful, but more complex requirements will typically require the implementation of a formal Extract, Transform and Load (ETL) process. Build a dashboard, clone its contents or make copies of it. Add a new data source that is accessible through the On-Premise Gateway (more on this subject next week) Return details of a report, delete it, copy it or even update it entirely, based on a supplied .pbix file in the request body. The API requires that you authenticate against the Power BI service, using a corresponding Application Registration on Azure Active Directory, which defines the list of privileges that can be carried out. This component can be straightforwardly created using the wizard provided by Microsoft, and a full tutorial is also available on how to generate an access token from your newly created Application Registration. The key thing as part of all of this is to ensure that your Application Registration is scoped for only the permissions you require (these can be changed in future if needed) and not to grant all permissions needlessly.\nBecause the API is a REST endpoint, there are a variety of programming languages or tools that you can use from an interaction standpoint. 
PowerShell is an especially good candidate for this and, in the snippet below, you can see how this can be used to modify the User Name and Password for a SQL Server data source deployed onto Power BI Online:\n#Make the request to patch the Data Source with the updated credentials\n$sqluser = \u0026#34;MyUserName\u0026#34;\n$sqlPass = \u0026#34;P@ssw0rd!\u0026#34;\n$patchURI = \u0026#34;https://api.powerbi.com/v1.0/myorg/gateways/cfafbeb1-8037-4d0c-896e-a46fb27ff229/datasources/1e8176ec-b01c-4a34-afad-e001ce1a28dd/\u0026#34;\n#Build the new credential payload\n$person = @{\n credentialType=\u0026#39;Basic\u0026#39;\n basicCredentials=@{\n username=$sqluser\n password=$sqlPass\n }\n}\n$personJSON = $person | ConvertTo-Json\n#Send the PATCH request to update the data source\n$request3 = Invoke-RestMethod -Uri $patchURI -Headers $authHeader -Method PATCH -Verbose -Body $personJSON\nThis example deliberately excludes some of the previous steps needed to authenticate with Power BI and is, therefore, provided for strictly illustrative purposes only.\nCreating Custom Visuals Developers have access to primarily two options when it comes to building out bespoke visualizations, which are then includable in a Power BI Online, Embedded and Report Server report:\nUsing Node.js to build bespoke visuals, which can then be packaged up for distribution or even uploaded onto AppSource. Building out bespoke visuals using R, typically for statistical or analytical purposes. Last week\u0026rsquo;s post discussed this topic in more detail from an exam standpoint which, in a nutshell, only requires you to have a general awareness of the options available here; no need to start extensively learning a new programming language, unless you really want to. 🙂\nKey Takeaways Power BI Embedded is an Azure hosted offering that allows you to add Power BI Report content into bespoke applications. This deployment option can be incredibly useful if you wish to make your Power BI solution available to users outside of your organisation or if you have an existing, bespoke application system that can benefit from utilising Power BI content. An Azure subscription is required to begin working with Power BI Embedded and you are billed based on node size, not individual user licenses. All Power BI content requires publishing to the online service before its contents become available for Power BI Embedded to access. Report developers will, therefore, need to be granted a Power BI Professional license to carry out these activities. The Power BI API allows developers to perform automation or administrative actions programmatically against the Power BI Online service. Utilising a REST API, developers can determine the optimal programming language of choice to interact with the API, allowing them to streamline the deployment of Reports or Dashboards to the Power BI service or leverage additional functionality when utilising Power BI Embedded. The API can also cater to specific data load requirements, although more complex needs in this area would require addressing via alternate means (SSIS, Azure Data Factory etc.) - see the short push-dataset sketch below for a simple illustration. Developers can add their own bespoke visualizations to a Power BI Report by either developing them using Node.js or using the R language. The first of these options facilitates a more streamlined deployment mechanism and allows developers to add their visualizations to AppSource, whereas the second option may be more useful for complex visualization types with an analytical or statistical function. 
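To make the data load point above a little more concrete, here is a minimal, illustrative sketch - in the same spirit as the earlier credentials snippet - of creating a push Dataset and adding a single row to it via the REST API. The dataset name, table definition and example row are invented purely for illustration, and $authHeader is again assumed to already hold a valid access token:\n#Define a simple push Dataset containing a single table (names here are purely illustrative)\n$datasetBody = @{\n name = \u0026#34;SalesPushDemo\u0026#34;\n tables = @(\n @{\n name = \u0026#34;Sales\u0026#34;\n columns = @(\n @{ name = \u0026#34;Region\u0026#34;; dataType = \u0026#34;string\u0026#34; },\n @{ name = \u0026#34;TotalSales\u0026#34;; dataType = \u0026#34;Int64\u0026#34; }\n )\n }\n )\n} | ConvertTo-Json -Depth 5\n#Create the Dataset within My Workspace\n$dataset = Invoke-RestMethod -Uri \u0026#34;https://api.powerbi.com/v1.0/myorg/datasets\u0026#34; -Headers $authHeader -Method POST -Body $datasetBody -ContentType \u0026#34;application/json\u0026#34;\n#Push a single row of data into the new table\n$rowsBody = @{ rows = @( @{ Region = \u0026#34;North\u0026#34;; TotalSales = 500 } ) } | ConvertTo-Json -Depth 5\nInvoke-RestMethod -Uri \u0026#34;https://api.powerbi.com/v1.0/myorg/datasets/$($dataset.id)/tables/Sales/rows\u0026#34; -Headers $authHeader -Method POST -Body $rowsBody -ContentType \u0026#34;application/json\u0026#34;\nIf both calls succeed, the new Dataset appears within My Workspace and the pushed rows become available for building Reports and Dashboards against, just like any other Dataset. 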
Compared to the other exam topics, a general awareness of these concepts is more than likely sufficient from a learning perspective and is (arguably) useful knowledge in any event, as it allows you to understand how developers can further extend a Power BI solution to suit a particular business need. In next week\u0026rsquo;s post, we will move into the final subject area for the exam, as the focus shifts towards how to work with Power BI outside of the Desktop application and the various tools available to integrate on-premises data sources into Power BI Online.\n","date":"2019-01-13T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-managing-custom-reporting-solutions/","title":"Exam 70-778 Revision Notes: Managing Custom Reporting Solutions"},{"content":"Happy New Year! As 2019 dawns upon us, Microsoft Business Applications professionals start the year in a good place, as the concept of the Power Platform takes hold. Through this, it is pleasing to observe more consistency across this range of products, with regular releases, increased integration and better learning tools, provided directly by Microsoft. I\u0026rsquo;ve mentioned this previously on the blog, but it is worth emphasising again the increased importance Power BI has from a Dynamics CRM/365 Customer Engagement standpoint. With this in mind, having a New Year\u0026rsquo;s resolution to learn more about it and to earn a technical qualification in the subject will hold you in good stead in future. If you are reading this now, then hopefully you already have this resolution. 🙂\nToday\u0026rsquo;s post will continue my series focused on providing a revision tool for Microsoft Exam 70-778. This week, we move into the broad subject area Create and format interactive visualizations, which revolves around the following skill areas:\nSelect a visualization type; configure page layout and formatting; configure interactions between visuals; configure duplicate pages; handle categories that have no data; configure default summarization and data category of columns; position, align, and sort visuals; enable and integrate R visuals; format measures; Use bookmarks and themes for reports\nLet\u0026rsquo;s start by providing an overview of just what a visualization is, before deep-diving into the specific topic areas listed above. The examples provided in this post refer to the latest Power BI Desktop sample report 2018SU12 Blog Demo - December.pbix, which can be downloaded from GitHub using this link.\nVisualization Overview The majority of topics covered in this series have concerned the foundations of a successful Power BI report - the data sources, the data quality enhancement work and the required DAX wizardry to create custom columns, Measures or table objects to supplement any requisite Power Query manipulation. With the necessary foundations, walls and various utilities built for your Power BI \u0026ldquo;house\u0026rdquo;, the final and most important topic concerns the decoration - creating impactful and meaningful visualizations that help to display your data most appropriately. The great thing about using Power BI as your Business Intelligence tool is the vast array of default and custom visualizations that are available when developing a report.
The Visualizations pane on the Report tab lists all of the possible visuals available for your report, described further in the list that follows (in order, left to right, top to bottom):\nStacked bar chart Stacked column chart Clustered bar chart Clustered column chart 100% stacked bar chart 100% stacked column chart Line chart Area chart Stacked area chart Line and stacked column chart Ribbon chart Waterfall chart Scatter chart Pie chart Donut chart Treemap Map Filled map Funnel Gauge Card Multi-row card KPI Slicer Table Matrix R script visual ArcGIS Maps for Power BI Power BI also supports custom visuals, provided by ISVs or Node.js developers, that allow you to include additional visualizations in your report. You can work with these by clicking on the ellipsis icon on the Visualizations pane:\nThere are two web links relating to custom visuals that are worth considering further:\nThe Business Apps marketplace, accessible also via the Import from marketplace button, lets you either add new custom visuals directly into Power BI or download versions that you can then import using the Import from file button. There are a lot of great visuals available here that can help to supplement your existing reports and take some of the aggravation out of implementing more complex requirements (e.g. displaying Gantt chart visuals). The Developing a Power BI custom visual Microsoft Docs Tutorial walks you through the required steps to build out a custom visual using Node.js. Just dragging and dropping a visual onto a report and adding a few field values may not be enough to meet a specific business requirement. For this reason, you should consider the following when working with visualizations:\nDoes the visual require sorting in ascending, descending or by a particular column order? If so, then clicking on the ellipsis button at the top right of the visualization will expose several sorting options, which may differ based on the underlying dataset: Are there blank categories within your data? If so, you may encounter a similar issue as indicated in the screenshot below, with a (Blank) category value: Resolutions to this problem can vary - for example, you could go back into your query and add a default value for all blank column values - but a quick way to potentially fix this is to click on the down arrow next to the field and select the Show items with no data option: Does the default summarization need to be overridden for the Values field? We\u0026rsquo;ve seen previously in this series how it is possible to specify the default summarization for each column in your dataset. On occasions where this needs changing, you can again use the right arrow next to the field to carry this out: Finally, you also have some additional options available by selecting the Format tab with a visual selected:\nThe options above the Arrange heading should be self-explanatory, but it is worth focusing on the Edit interactions button. An expected experience with Power BI is that, as you begin to filter visualizations, others on the report update accordingly by applying the same filter. This behaviour can be changed using the Edit interactions button, allowing you to specify whether other visuals on the report:\nApply a cross-filter Apply a cross-highlight Do nothing The default action for most visualizations is to apply a cross-filter.
The sequence below demonstrates how this can be disabled using the Edit interactions button:\nA handy feature to have at your disposal, further details regarding the Edit interactions button can be found on the Change how visuals interact in a Power BI report Microsoft Docs article.\nDeciding Which Visualization To Use For both real-life and exam scenarios, you should be prepared to identify when a visualization will be appropriate to use, based on a stated list of requirements. As a general rule of thumb, if:\nYou need to compare data between different categories, then use a Bar/Column or Ribbon Chart. The requirement is to compare data values across a date range, then use a Line or Area Chart. You are working with a dataset that contains multiple fields with wide value ranges, then use a Combo Chart. There is a need to show significant variances across a set of data or to highlight significant amounts in comparison to others, then use a Waterfall Chart. You are working with two metrics that have a relationship between them and you need to visualise diverse value types, then use a Scatter Chart. Data needs to be grouped by a distinct category and shown as part of a whole value, then use a Pie or Doughnut Chart. The requirement is to distinctly show proportions of an overall part and, by association, the most significant/smallest contributors, then use a Treemap visualization. Your data is geographically based, and there is a desire to provide drill-down capability, then use a Map, Filled Map or ArcGIS Maps for Power BI visualization. You are working with data based on distinct stages (e.g. Lead data from Dynamics CRM/365 Customer Engagement, grouped by Business Process stage), then use a Funnel visualization. The data needs to be consumed via a single value or viewed as represented in the underlying data, then use a Card, Multi-Row, Table or Matrix visualization. There is a need to provide users with the capability to filter data \u0026ldquo;on the fly\u0026rdquo;, then use a Slicer visualization. An R script requires incorporation into your report, then use an R visualization (discussed in further detail later on in this post). You need to report data concerning Key Performance Indicator (KPI) monitoring, then a KPI or Gauge visualization should be chosen. These are both discussed in greater detail in my previous post on this subject. Report Page Options Visualizations form an essential part of the look and feel of a report, but further options are also available from a report design perspective. With a report page selected and with the Format paintbrush icon chosen (which is accessible in the same way as described in the section Visualization Format Settings in my post concerning KPIs), you have access to additional options relating to the currently selected page:\nThe options available here include:\nPage Information: Here, you can change the name of the page, as well as enable/disable the Tooltip and Q\u0026amp;A features for this page. Page Size: The options here let you adjust the size of the page to one of several options: 16:9 (1280 x 720) 4:3 (960 x 720) Cortana (296 x 592) Letter (816 x 1056) Custom Page Background: From here, you can change the background colour of the page and adjust its transparency. It is also possible to add a background image here too. Page Wallpaper: Potentially confusing when compared against the Page Background options, the options in this section let you adjust the colour outside of the main page area.
The best way of understanding how this looks is to take a look at the following garish example, which shows a page that has had both background and wallpaper colours specified: Never develop a report that looks like this, by the way. 🙂\nWhen it comes to working with multiple pages, you also have the following options available, accessible through right-clicking a page:\nFormatting Measures Measures, as with other column types (derived either from a query or a DAX formula), can be formatted in numerous different ways. The steps involved here do not differ significantly from the options discussed in the Formatting Columns section of my previous post concerning data model optimisation; select the Measure in question from the Fields pane, navigate to the Modeling tab and the appropriate options will be made available to you.\nR Visuals R provides developers with the means of building highly sophisticated and bespoke visualizations that will typically be consumed for statistical analysis. These can be added to Power BI Desktop and linked to any data source in your report. There are two necessary components required to start working with R visuals in this manner:\nYou must install R on your local machine. There are many versions available, with R Open 3.5.1 from Microsoft being the logical choice for beginners. Within the Options area of Power BI Desktop, you must verify that the correct Detected R home directories dropdown is selected. If you have installed R Open 3.5.1 on your machine, then this should be filled for you automatically, as follows: (You may also be prompted to Enable script visuals, as indicated by the dialog box below; ensure that the Enable option is selected)\nBeyond the basics of getting started with R in Power BI, which is all that is required from an exam perspective, this is a topic too complex to cover in this blog post.\nBookmarks The ability to pre-configure a report page, from a filtering standpoint, can be incredibly useful for those who consume a Power BI report. Bookmarks seek to address this need, by allowing developers to build a guided \u0026ldquo;story\u0026rdquo; in their report. All that is required is for a report page to be filtered accordingly and then for a Bookmark to be added - couldn\u0026rsquo;t be simpler! For example, the 2018SU12 Blog Demo - December.pbix sample report contains several Bookmarks, accessed by opening the Bookmarks Pane on the View tab:\nWith the Selection pane also enabled, you can then easily navigate between all Bookmarks by clicking the View button above and using the arrow icons on the bottom of the page:\nThrough the Bookmarks pane, it is also possible to re-order Bookmarks and to group them by a category, such as a page. Simple to set up, but powerful when utilised, they are a feature which I think gets overlooked and should be considered if you are building out a report for beginner Power BI users.\nThemes A feature that has typically been available with every Microsoft Business Intelligence (BI) application is extensive design capability, usually to suit any bespoke branding requirements that an organisation may have. Power BI is no different in this regard, because, as well as being able to specify the colour of visualizations individually, developers can also define a top-level Theme that will automatically apply to all report visualizations.
The Switch Themes button, located on the Home tab, provides users with the ability to modify their Report theme at any time:\nAs shown in the image above, you can:\nChange the Theme from Default to one of the other included Theme definitions within Power BI, such as Electric or High Contrast. Add a custom Theme to your report using the Import theme button. Browse the Power BI Community Theme gallery to download a custom Theme developed by someone else. Get additional help on theming through the How to create a theme button, which links to the Use Report Themes in Power BI Desktop support article. The ability to import and develop bespoke Themes is a topic that requires further discussion. All Themes are defined as JSON files, which outline the different hex colour values that Power BI allocates for each visualization type. There are a hell of a lot of options available here, depending on how masochistic you are feeling and how stringent your branding requirements need to be. For the exam and most real-life scenarios, a straightforward JSON file (provided courtesy of Microsoft) may resemble the below:\n{ \u0026#34;name\u0026#34;: \u0026#34;St Patricks Day\u0026#34;, \u0026#34;dataColors\u0026#34;: [\u0026#34;#568410\u0026#34;, \u0026#34;#3A6108\u0026#34;, \u0026#34;#70A322\u0026#34;, \u0026#34;#915203\u0026#34;, \u0026#34;#D79A12\u0026#34;, \u0026#34;#bb7711\u0026#34;, \u0026#34;#114400\u0026#34;, \u0026#34;#aacc66\u0026#34;], \u0026#34;background\u0026#34;:\u0026#34;#FFFFFF\u0026#34;, \u0026#34;foreground\u0026#34;: \u0026#34;#3A6108\u0026#34;, \u0026#34;tableAccent\u0026#34;: \u0026#34;#568410\u0026#34; } Once saved as a file with the name St Patricks Day.json and imported into the sample 2018SU12 Blog Demo - December.pbix report, we get this rather\u0026hellip;distinctive look:\nThe options available with Themes are always worth keeping in the back of your mind and, chances are, they can provide the means towards ensuring consistently branded Power BI reports.\nKey Takeaways Power BI delivers, out of the box, a range of different visualizations that cater towards most (if not all) reporting requirements. Should you find yourself in need of additional visualizations, then Microsoft AppSource is your go-to destination for finding visualizations developed by others. If you have experience working with either Node.js or R, then these can be used to build bespoke visualizations also. When first developing a report, you should be able to match a requirement to the most suitable visualization type, to ensure that you are delivering a solution that is both meaningful and useful. From an exam perspective, this becomes a more critical consideration, and you should be prepared to suggest the optimal visualization to use when given a specific scenario. After adding visualizations to your report, you have additional options available to customise them further. For example, you can specify a preferred sorting order for your data, override any summarizations used and move/align your visual on the report page. By default, visualizations in Power BI are designed to change automatically, based on how users interact with the report. All of these options are controllable via the Edit interactions button, allowing you to specify your preferred cross-filtering and cross-highlighting conditions. There is a range of report page customisation options available to developers. It is possible to resize a page to any height/width, allowing you to optimise your report for specific devices.
Also, you can modify the colour of a page (or its wallpaper) or add an image instead. Pages can also be renamed, reordered or duplicated. Measures can be formatted in the same way as calculated columns, meaning you can specify a data type or, for numerics, modify the number of decimal places. Bookmarks allow developers to set up \u0026ldquo;checkpoints\u0026rdquo; within a report, based on how a report page has been filtered. These can then be used to navigate the user through a report, applying these filtering steps automatically. This feature can help transform your report into an interactive story. Visualizations will automatically inherit their various colour properties from the currently selected report theme. Although these can be modified granularly, the fastest and most consistent way of making these changes en masse is to change the Theme. Power BI includes some Themes out of the box, but you also have the option of building your own using a custom JSON file; this can then be imported into different reports, providing a portable means of enforcing a particular branding requirement. Visualizations are a HUGE topic for the exam, with a lot of detail that requires careful consideration. I hope this post has struck the right balance between highlighting the most critical areas and not going into minute detail. I would, therefore, urge you to go away and carry out further study yourself to gain a greater appreciation of this subject area. Next week\u0026rsquo;s post will be somewhat lighter reading, as we take a look at how application developers can integrate Power BI within their existing apps.\n","date":"2019-01-06T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-create-and-format-interactive-visualizations/","title":"Exam 70-778 Revision Notes: Create and Format Interactive Visualizations"},{"content":"The New Year is almost upon us, meaning it\u0026rsquo;s time to put in place some resolutions for the year ahead. I can think of no better commitment than to learn more about Power BI in 2019, which is hopefully the reason why you are reading this right now 🙂 . Welcome to the eighth post in my series concerning Microsoft Exam 70-778, where I hope to provide a learning/revision tool for anyone who is taking the exam or looking to increase their Power BI expertise. Last week, we investigated how to manage Key Performance Indicator (KPI) reporting using Power BI. We now move into another topic area that is also tied closely to visualizations - Create hierarchies. The related skills for this exam area are:\nCreate date hierarchies; create hierarchies based on business needs; add columns to tables to support desired hierarchy\nIn the sections that follow, I will provide an overview of the two different types of Hierarchies configurable within Power BI, before providing a step-by-step guide on how to create one. As with previous posts in this series, this subject does bring together some other skill areas, such as importing data with Power Query and using DAX calculated columns. Knowledge of these topics will hold you in good stead when starting with hierarchies.\nWhat is a Hierarchy? Hierarchies form the cornerstone of a broad aspect of human existence, which makes it natural that they are a common theme when it comes to Business Intelligence (BI) solutions. A hierarchy is definable as a logical, often visual, representation of an order of precedence.
Hierarchies, in strict Power BI terms, do not differ much from this general definition; they allow developers to order data by preference, priority or anything in between. When configured and used in isolation, they offer minimal benefit to Power BI report consumers. They start to become really valuable when utilised alongside visualizations, providing an additional interaction point for key data points and allowing report users to tailor a visualization to suit their needs. Some potential usage scenarios for hierarchical data may include:\nProviding top-level and subcategory classifications for product records. Modelling a business reporting line hierarchy, from CEO down to Developer. Splitting sales dates by year, quarter or even month. Hierarchies come in two flavours within Power BI - Date Hierarchies and Custom (or User Defined) Hierarchies. The next two sections delve deeper into the inner workings of each.\nDate Hierarchies Date Hierarchies are best described as a logical breakdown of the constituent components of a date value, based on four different levels:\nYear Quarter Month Day For each field that contains a Date Hierarchy, the appropriate value for each of the above is exposed out for utilisation across Power BI Desktop. So, for example, the underlying hierarchy values for the 3rd March 2017 would be:\nYear: 2017 Quarter: Qtr 1 Month: March Day: 3 All columns with a data type of Date or Date/Time will have a Date Hierarchy created for them automatically. The hierarchy will then become accessible for use in the following areas of Power BI Desktop:\nWhen building out formulas using DAX. For example, the screenshot below shows how it is possible to access the hierarchy field values listed above from the Date/Time field LastEditedWhen: When working with a Table visualization. Adding a Date or Date/Time column into the Values well will automatically create a table containing all of the hierarchy fields. An example of how this looks for the LastEditedWhen field is seen in the image below: It is possible to disable the automatic creation of Date Hierarchies within the File -\u0026gt; Options area of Power BI Desktop, by toggling the Auto Date/Time option. Take care when toggling this though, as doing so will remove any existing Date Hierarchies within your model:\nThis fact is confirmed when returning to the Table created using the LastEditedWhen field, which automatically reverts to the base column value when the Auto Date/Time option is disabled:\nCustom Hierarchies For more bespoke requirements, a Custom Hierarchy affords the same kind of functionality discussed in the previous section. They can be created from the right-hand Fields pane by selecting a field within a Power BI table and choosing the New hierarchy option:\nOnce established, you can then:\nRename the hierarchy. By default, it will be named using the convention [Field Name] Hierarchy. Include additional fields by dragging and dropping them into the hierarchy. Reorder the hierarchy, by dragging and dropping the fields into your preferred order, from top to bottom. Delete the hierarchy by right-clicking it and selecting the Delete option. All fields included in the Hierarchy will remain as part of the underlying query. A Custom Hierarchy typically relies on a self-referencing relationship within a query. An excellent example of this, already alluded to, is an Employee table, where an individual\u0026rsquo;s Manager is trackable via a ManagerID column that resolves back to an EmployeeID record.
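To make this last scenario a little more concrete, the sketch below shows one hypothetical way of flattening such a structure with DAX calculated columns, assuming an Employee table containing EmployeeID, ManagerID and Name columns (this table is illustrative only and does not form part of the WideWorldImporters examples used elsewhere in this series):\nEmployeePath = PATH(Employee[EmployeeID], Employee[ManagerID])\nEmployeeLevel1 = LOOKUPVALUE(Employee[Name], Employee[EmployeeID], PATHITEM(Employee[EmployeePath], 1, INTEGER))\nEmployeeLevel2 = LOOKUPVALUE(Employee[Name], Employee[EmployeeID], PATHITEM(Employee[EmployeePath], 2, INTEGER))\nThe first column produces a pipe-delimited chain of IDs from the top of the reporting line down to the current record; the LOOKUPVALUE/PATHITEM combination then resolves each level of that chain back to a readable name, and columns like these can subsequently be added to a Custom Hierarchy in the usual way.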
In these cases, there are several DAX functions available that will assist in getting custom columns set up to support your hierarchy, described collectively as Parent and Child Functions. Usage of these functions is essential when building out Hierarchies involving parent/child relationships.\nThe exercise at the end of this post will go through the detailed steps involved in creating a Custom Hierarchy.\nUsing Hierarchies with Visualizations A good reason to include hierarchies as part of your Power BI reports is the options they unlock from a data drill-down perspective. We\u0026rsquo;ve seen already the behaviour Date Hierarchies adopt when using the Table visualization, which offers nothing in this respect. Other visualization types support, by comparison, far richer options for \u0026ldquo;homing in\u0026rdquo; on a particular piece of data, epitomised by the following buttons that appear at the top of the supported visualization:\nIn order, from left to right, these let you:\nDrill up your data one level in the hierarchy. Enable the Click to turn on Drill Down feature, allowing you to click on any part of a visual to drill-down to the next level of the hierarchy. Go to the next level in the hierarchy. Expand all down one level in the hierarchy. This behaves differently to the Click to turn on Drill Down option mentioned already, by grouping together the previous hierarchy level each time you drill-down. An example of how all this works as part of a Pie chart visualization is viewable in the sequence below:\nAs should hopefully be clear, hierarchies are very much the icing on the cake in building an engaging and wholly interactive reporting solution within Power BI.\nExample: Creating a Hierarchy and adding it to a Visualization The steps that follow provide instructions on how to build out a product categorisation hierarchy, using sample data, and then how to apply this to a Donut chart visualization. All that is required to work through these steps is access to the Power BI Desktop application:\nIn Power BI Desktop, click on Edit Queries to open the Power Query Editor. Navigate to the New Source button and select the Blank Query option: Select the newly created Query1 in the Queries pane and click the Advanced Editor button.
Copy and paste the following M code into the window and press Done: let Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText(\u0026#34;nZNNbsMgEIWvgrxqJRb438mybZZRonrpZoGScYNiYwvsqu11epOerBjstHKwE3WBhJj3vjdoIMsc18HO4/MabXjBOCzRijcgasEkoDtJy7qA+5sUgVrfXwEOCXF2OHO8kWsrqhykZBWnxRR5SmPYHo6jRLP9kS9tKD9QcZji2uqGGeE4cTUz6D0piDe2v7i85ThSS1tDtVm974+Uv0Kf+Vc3Uzojus0mz7sEPwptafPVgROPlC1rJinj2sBIfnVPIE9NVVsIlsrgX1yq8kqgNTtI9gnopSXEiyZe0X99iZ5mgheumaZL7LC0pEWBHlqpBiGlenBQsra80sF1k4kPfM+Ed38qPVIB24rxxjL32WKkYS4h2Pf623SfKT191KD7OXdiAd8iMgEhDn2F3/0A\u0026#34;, BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [ProductID = _t, Name = _t, #\u0026#34;Product ID\u0026#34; = _t, Parent = _t, TotalSales = _t]), #\u0026#34;Changed Type\u0026#34; = Table.TransformColumnTypes(Source,{{\u0026#34;ProductID\u0026#34;, Int64.Type}, {\u0026#34;Name\u0026#34;, type text}, {\u0026#34;Product ID\u0026#34;, type text}, {\u0026#34;Parent\u0026#34;, Int64.Type}, {\u0026#34;TotalSales\u0026#34;, Currency.Type}}), #\u0026#34;Replaced Value\u0026#34; = Table.ReplaceValue(#\u0026#34;Changed Type\u0026#34;,null,0,Replacer.ReplaceValue,{\u0026#34;TotalSales\u0026#34;}) in #\u0026#34;Replaced Value\u0026#34; This will then load up a table object that resembles the below, which derives from the sample Product data from Dynamics CRM/365 Customer Engagement:\nRight-click the Query1 query and use the Rename option to change its name to Products. Select the Close \u0026amp; Apply button to exit the Power Query Editor.\nOpen the Data pane, select the Products query and use the following DAX formulas to create New Columns for the table: ProductHierarchy = PATH(Products[ProductID], Products[Parent])\nProduct Category = LOOKUPVALUE(Products[Name], Products[ProductID], PATHITEM(Products[ProductHierarchy], 1, INTEGER) )\nProduct Grouping = LOOKUPVALUE(Products[Name], Products[ProductID], PATHITEM(Products[ProductHierarchy], 2, INTEGER) )\nBase Product = LOOKUPVALUE(Products[Name], Products[ProductID], PATHITEM(Products[ProductHierarchy], 3, INTEGER) )\nThe first field creates the required hierarchy list in a delimited format; the remaining columns then list out the three levels of the hierarchy as separate column values. The data in your table should resemble the below if done correctly:\nRight-click the Product Category field in the Fields pane and select New Hierarchy, to create a new hierarchy as indicated below: Modify the newly created hierarchy so that: It is named Product Hierarchy It contains, in addition to the Product Category field, the Product Grouping and Base Product fields. The order of the newly added fields reflect the following: Product Category Product Grouping Base Product If done correctly, the Hierarchy should resemble the following: Click on the Report tab and add an empty Donut chart visualization to the report. Populate the Field well values as shown below: The Donut chart should update accordingly, and with the appropriate drill-down options available at the top of the visualization. Notice also that hovering over each section of the Donut chart displays an aggregated total for that level of the hierarchy:\nKey Takeaways Hierarchies within Power BI provide a means of logically categorising data into an order of preference or precedence, providing greater flexibility to Power BI report users when they interact with visualizations. 
Date Hierarchies are created and managed automatically by Power BI for each Date or Date/Time field defined within your model. These automatically create fields that contain the Year, Quarter, Month \u0026amp; Day values from the respective date fields. These fields can then be utilised as part of a Table visualization or within a DAX formula. Date Hierarchies can also be completely disabled if required. Custom (or User-Defined) Hierarchies need to be created manually and provide additional customisation options when compared to Date Hierarchies, such as the number of fields they contain, their order and the hierarchy\u0026rsquo;s name. A Custom Hierarchy will typically make use of one of several Parent/Child DAX functions, such as PATH or PATHITEM. Including a hierarchy as part of a chart visualization, such as a Pie chart or Donut chart, opens up other drill-down capabilities around your data. Indicated by the additional arrow icons included at the top of the visualization, they provide a straightforward means for users to interrogate the data points that interest them most. The past couple of posts have already started to introduce some of the visualization options available within Power BI. In next week\u0026rsquo;s post, we will take a much closer look at all of the options available within the application to display data, as well as look at features such as Bookmarks and the various report page customisation tasks available to developers when building out a report.\n","date":"2018-12-30T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-creating-hierarchies/","title":"Exam 70-778 Revision Notes: Creating Hierarchies"},{"content":"As we move into the festive period, now is the time to put your feet up, relax, take stock for the year ahead\u0026hellip;or, if you are reading this over Christmas, grab the opportunity to learn more about Power BI 🙂 . In the seventh post in my series concerning Microsoft Exam 70-778, we move away from the suitably large subject area of DAX into a topic much more visually focused and accessible - Measure performance by using KPIs, gauges and cards. The related skills for this exam area are:\nCalculate the actual; calculate the target; calculate actual to target; configure values for gauges; use the format settings to manually set values\nAs part of this week\u0026rsquo;s post, I hope to delve deeper into each of these skill areas, with the aim of providing a revision tool for the exam or a general learning aid for those getting to grips with Power BI for the first time. To follow on as part of any examples that follow, make sure you have downloaded the WideWorldImporters sample database and hooked up Power BI Desktop to the tables described in my first post in this series.\nKPI Overview A standard reporting requirement, and one which typically forms the basis of any contractual performance monitoring, is the studying of Key Performance Indicator (KPI) figures. These are designed to provide an executive level view to answer questions such as \u0026ldquo;How is our sales pipeline progressing?\u0026rdquo; or \u0026ldquo;Have we met our objectives to reduce the number of monthly complaints down to X number?\u0026rdquo;. The whole process of collating together disparate data sources to present this information may, in the past, have been exhausting.
With Power BI, it is possible to deploy a dedicated KPI visualization that allows you to straightforwardly and consistently report headline figures, track variance over time and receive immediate visual cues to flag up any potential issues.\nTo get going with your first KPI visualization, you must supply two critical pieces of data:\nIndicator: This is the \u0026ldquo;actual\u0026rdquo; value that needs reporting against a target, and will typically be some form of aggregation contained within a Measure. Trend axis: Representing the performance of your indicator based on a range of data, the most typical axis type to use here would be some form of date value - such as month or year. A KPI configured with just these two properties will work but looks\u0026hellip;well\u0026hellip;bland and not very useful at all:\nThis is why a Target Goal should additionally be specified. Adding this will allow for the appropriate colour to be applied to the visualization, to indicate how well the Indicator is performing against your target. The great thing about this is that up to 2 Target Goal values can be specified if needed. You potentially lose some functionality here, as the Distance percentage value will no longer display correctly; but this may be useful if, for example, you have an aspirational target that there would be a benefit in hitting but which is not a strict requirement. When the value of one Target Goal is determined to be fulfilled, but the other isn\u0026rsquo;t, the Visualization will display Amber by default, as indicated below:\nThe example at the end of this post will walk through the steps involved in setting up a KPI.\nGauge Visualizations We got a sneak peek into Gauges when working with What-If Parameters in last week\u0026rsquo;s post, but a more in-depth discussion regarding them is necessary. Gauges provide a different means of viewing how a subset of data is performing against a target. They differ from KPIs primarily due to the additional configuration options they afford and the potential they have to provide a faster means of interacting with key data points on the visualization. The configurable field well properties for a Gauge are viewable below, in the screenshot and descriptive bullet points provided:\nValue: This is the value that requires tracking, similar to the Indicator within KPIs. Minimum value: The minimum amount that needs to be hit by the Value before the Gauge will be filled in. Maximum value: The maximum amount that needs to be hit by the Value before the Gauge stops filling (i.e. a Gauge with a Value of 105 and a Maximum value of 100 would display as full). Target value: The target that needs hitting, which will be presented as a black line on the Gauge, as shown below: Tooltips: These are additional field values that will be displayed to the user when hovering over the Gauge. These fields can either be aggregated variants of columns or a Measure. Gauges behave differently from KPIs in that they will \u0026ldquo;work\u0026rdquo; with only one field well specified. For example, adding a Measure with a value of 8 million as the Value will display a half-full Gauge, as indicated below:\nNotice that, in this scenario, the Gauge assumes the maximum amount to be double what is specified in the Value field well - in this case, 16 million.
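For reference, the kind of Measures you might feed into these field wells can be sketched roughly as follows - a hypothetical illustration only, using the WideWorldImporters Sales OrderLines table from earlier in this series, with the target and maximum figures picked arbitrarily:\nTotal Sales = SUMX(\u0026lsquo;Sales OrderLines\u0026rsquo;, \u0026lsquo;Sales OrderLines\u0026rsquo;[UnitPrice] * \u0026lsquo;Sales OrderLines\u0026rsquo;[Quantity])\nSales Target = 10000000\nSales Maximum = [Sales Target] * 2\nDropping Total Sales into the Value well on its own reproduces the \u0026ldquo;assume double\u0026rdquo; behaviour described above, whereas wiring Sales Target and Sales Maximum into the remaining wells gives the Gauge some meaningful context.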
A Gauge configured so frugally has little purpose and you should, as a minimum, ensure that the Maximum value and Target value field wells are also set, as demonstrated below:\nA nice little feature of Gauges is the ability to specify any of the potential field well values manually, within the Formatting options (discussed in the next section). The Value field well must be populated already within your Gauge to support this feature. You will then have the ability to manually specify the other values within the Gauge axis options area:\nAny manually defined figure will be automatically overwritten and deleted if you choose to use a field from a table instead, so take care here to avoid any data loss.\nGauges address a similar need to KPIs, in that they allow you to view possible progress against a maximum or target value. A significant downside in using them is that they present a less visual means of showcasing potential issues with your data. You should weigh these advantages/disadvantages up when determining whether to use a Gauge or a KPI as part of your report.\nVisualization Format Settings All visualizations within Power BI can be customised, often to the extent where they can end up looking nothing like they did when you first added them onto your report. The Formatting tab on the right-hand pane is your go-to destination in this regard, and is accessible by clicking the paintbrush icon while any visualization is selected:\nThe types of things that are achievable via these options include, but are not limited to:\nModifying all aspects of the visualization\u0026rsquo;s Title, such as its visibility, the text displayed, font colour/size/type and alignment. Adding a colour background to the visualization. Including a description for the visualization. Toggling a border for the visualization and its colour. Fine-tuning the exact location of the visualization on the report, based on its X \u0026amp; Y Position values. Each visualization will also have a set of specific, unique options. We can see an example of how this looks by going back to KPIs again and examining the top four options listed:\nIndicator: This area provides options on how to display the Indicator field value on the KPI. It is possible to round figures to the thousands, millions and even trillions, and also to adjust the number of decimal places shown. Auto is the default option used and will generally assign the best rounding option, depending on the underlying value. Trend Axis: Here, it is possible to enable/disable the trend visual on the KPI. The example below demonstrates how this looks when disabled: Goals: With the Goal toggle, it is possible to remove the Goal figure underneath the Indicator value completely; the Distance toggle will remove the percentage distance value as well. It is possible to mix and match options here, as indicated in the example below: Color coding: The default RAG (Red, Amber, Green) traffic light system should be suitable for most situations but, if you have a specific branding requirement, it is possible to override the Good, Neutral and Bad colours with a custom Hex colour value. The Direction option also lets you reverse how the KPI works. So, for example, if raising more than 5 IT service requests would lead to a Bad outcome, any number underneath this would appear as Good instead.
When it comes to this subject area for the exam, there is no need to study the whole range of formatting options available; however, a general awareness will hold you in good stead.\nExample: Creating a KPI The process involved in building out a KPI can be a lot more involved than you may suspect, based on the descriptions provided so far. Therefore, the example that follows is designed to show you the type of preparatory modelling steps that will be needed to put in place a working solution utilising KPIs. To follow through these steps, make sure you have connected Power BI up to the WideWorldImporters sample database and that the Sales.OrderLines table is within your Power BI data model:\nIn Power BI Desktop, on the Sales OrderLines table, use the New Column button to add the following new calculated columns:\nGrossPrice = \u0026lsquo;Sales OrderLines\u0026rsquo;[UnitPrice] * \u0026lsquo;Sales OrderLines\u0026rsquo;[Quantity] NetPrice = \u0026lsquo;Sales OrderLines\u0026rsquo;[GrossPrice] + (\u0026lsquo;Sales OrderLines\u0026rsquo;[GrossPrice] * (\u0026lsquo;Sales OrderLines\u0026rsquo;[TaxRate] / 100)) OrderDate = RELATED(\u0026lsquo;Sales Orders\u0026rsquo;[OrderDate]) Next, create a Calculated Table via the New Table option, that uses the following DAX formula:\nSalesOrderAgg2016 = FILTER(SUMMARIZE(\u0026lsquo;Sales OrderLines\u0026rsquo;,\u0026lsquo;Sales OrderLines\u0026rsquo;[OrderDate], \u0026ldquo;Total Gross Price\u0026rdquo;, SUM(\u0026lsquo;Sales OrderLines\u0026rsquo;[GrossPrice]), \u0026ldquo;Total Net Price\u0026rdquo;, SUM(\u0026lsquo;Sales OrderLines\u0026rsquo;[NetPrice])), \u0026lsquo;Sales OrderLines\u0026rsquo;[OrderDate] \u0026gt;= DATE(2016, 1, 1) \u0026amp;\u0026amp; \u0026lsquo;Sales OrderLines\u0026rsquo;[OrderDate] \u0026lt;= DATE(2016, 12, 31)) Doing this will create a new table object that looks similar to the below, providing a total sum of the calculated columns created in step 1), grouped by OrderDate. I would also recommend, at this stage, changing the format of the new columns to your preferred Currency value: Add a new Calculated Column to the SalesOrderAgg2016 table to get the Month Name label from the OrderDate column:\nOrderDateMonthName = FORMAT(DATEVALUE(SalesOrderAgg2016[OrderDate]), \u0026ldquo;MMMM\u0026rdquo;) With the base data ready, we can now look at creating an Actual and Target Measure on the SalesOrderAgg2016 table, using the following DAX formulas:\nActual Total Net Price = SUM(SalesOrderAgg2016[Total Net Price])\nTarget Total Net Price = 5750000\nActual Total Gross Price = SUM(SalesOrderAgg2016[Total Gross Price])\nTarget Total Gross Price = 5500000\nOn the Report tab, under the Visualizations area, click on the KPI icon to add an empty KPI visualization to the report: Click on the newly created Visualization and, under the Fields area, drag and drop the appropriate SalesOrderAgg2016 fields into the wells indicated below: The visualization will then update accordingly, showing us the figure trend over each month and coloured red to indicate that this KPI is currently off target: To see how a KPI looks when ahead of target, we can repeat steps 5) and 6), but this time using the Net figures instead: Key Takeaways There are two principal visualization types available within Power BI to help track actual-to-target progress - KPIs and Gauges. KPIs provide a more visually unique means of a binary success/fail determination when tracking towards a target. It is also possible to use KPIs to track variance over time via the Trend axis.
The Indicator will typically be the result of some form of aggregation or Measure. Gauges provide a less visually distinctive, but non-binary, mechanism of viewing progress towards a target. Gauges support more potential field well values when compared with KPIs, nearly all of which are optional in some way. You can also manually define some of these values, for situations where your data model does not contain the required information. All visualizations within Power BI are modifiable from a display or formatting perspective. The same basic options will generally be supported - such as changing a font type or background colour - with more specific configuration properties available per unique visualization type. For example, a KPI visualization can be customised to hide the background Trend Axis entirely. All of these options are designed to give developers greater control over the look and feel of their reports and to mirror them as closely as possible to any potential branding requirement. When building out a solution designed to monitor progress within Power BI, the steps involved will typically be more in-depth than merely creating a new visualization. In most cases, there will be a requirement to bring together a lot of the other skills that have been discussed previously within this series - such as creating DAX formulas, modifying data within Power Query or bringing together different data sources into a single model. It is essential, therefore, not to underestimate the amount of time and effort involved in creating a practical solution that takes advantage of KPIs or Gauges. I hope that this week\u0026rsquo;s post has been a little easier to bear when compared with DAX. 🙂 In next week\u0026rsquo;s post, we will take a closer look at data hierarchies and how to apply these to visualizations within Power BI.\n","date":"2018-12-23T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-utilising-kpis-with-gauge-visualisations/","title":"Exam 70-778 Revision Notes: Utilising KPIs with Gauge Visualisations"},{"content":"Welcome to post number 6 in my series concerning Microsoft Exam 70-778. The series aims to provide a revision tool for those who are looking at taking the exam and also to provide an introduction to some of the fundamental concepts around Power BI. As alluded to previously on the blog, Power BI is increasingly a topic that Dynamics 365 Customer Engagement professionals need to grapple with, particularly if they wish to implement a reporting solution that helps to ensure the long-term success of their CRM deployments. Going for Exam 70-778 gives you an ideal opportunity to familiarise yourself with this exciting technology area.\nWe moved into the Modeling and Visualizing Data theme for the first time last week, and we now jump into a relatively big subject this week - Create calculated columns, calculated tables, and measures. The skill areas covered here are as follows:\nCreate DAX formulas for calculated columns, calculated tables, and measures; Use What If parameters\nTo follow along with the examples in this post, make sure you have downloaded the WideWorldImporters sample database and hooked up Power BI Desktop to the tables described in my first post in this series.\nNow, to begin this week\u0026rsquo;s post, we must first ask ourselves an essential question:\nWhat is DAX? Data Analysis Expressions, or DAX, has its roots in SQL Server Analysis Services (SSAS).
First released as part of the PowerPivot for Excel 2010 Add-in, DAX has quickly become the preferred language to use when working with Excel PowerPivot, SSAS and - most importantly - Power BI. DAX bears a lot of similarity to standard Excel functions, but a critical differentiator is that DAX does not operate in the context of cells and is considered a strongly typed language (i.e. type mismatches can lead to errors at runtime). A benefit of using DAX over Power Query/M is that the language handles a lot of data conversions implicitly. It also supports the usual list of data types associated with SQL type databases - Decimal/Whole Numbers, DateTime, Text, Boolean etc. A successful Power BI Reporting solution typically relies heavily on the building of complex DAX formulas, which are then complemented by the most appropriate visualisation at the report level.\nContext in DAX An important concept to grapple with in DAX is that of context, and how this applies to the various formulas that you build out. There are two types of context:\nRow Context: Essentially meaning the \u0026ldquo;current row\u0026rdquo;, functions that operate in this manner will be processed for every single row on your dataset, outputting the result required each time. For example, the IF function works on a row context. Filter Context: Functions that support this will take into account any filters defined at the report level when performing the relevant calculation. Consequently, a lot of the work involved in contextually updating visuals is handled automatically by your DAX formulas, with no need to configure specific Measures/calculated columns as a workaround. It is essential to understand these two types of context and how they can impact each other (the \u0026ldquo;context transition\u0026rdquo;); a short sketch contrasting the two follows in the Measures section below. More critically, it\u0026rsquo;s also crucial to be aware of when a DAX function ignores a context completely. For example, some aggregation functions completely ignore row context. This article on the SQL BI website is a great resource that discusses this whole topic in depth.\nUtilising DAX in Power BI The Modeling tab is your go-to area when working with the three types of DAX calculations within Power BI Desktop - Measures, Calculated Columns and Calculated Tables:\nThe sections that follow will discuss each of these in further detail, but there are some useful points to highlight that apply when using DAX generally:\nUnlike the Power Query Editor, there is full Intellisense support provided for DAX within Power BI. Straightaway, this makes it a much more streamlined tool to gravitate towards. You should always use fully-qualified names that reference both the table object and field name that you are attempting to access. An example of how this should look can be seen below with the custom column EditedByDay on the WideWorldImporters Sales.OrderLines table: It is also possible to add comments to your DAX code, with three flavours on offer; this is the part where we distinguish between the C# and SQL Server developers 🙂 : Single Line Comments: // or -- Multi-Line Comments: /* and */ The screenshot below indicates how to utilise these as part of a DAX calculated column: Let\u0026rsquo;s take a look now at the specific areas referenced above in greater detail:\nMeasures Measures are best thought of as being fixed (or scalar) values, typically aggregations based on simple or more complex underlying formulas.
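Before looking at the simplest possible Measure, it is worth making the row context/filter context distinction from earlier a little more concrete. The hedged sketch below uses the WideWorldImporters Sales OrderLines table from this series; the formula names are illustrative only:\nLine Total = \u0026lsquo;Sales OrderLines\u0026rsquo;[UnitPrice] * \u0026lsquo;Sales OrderLines\u0026rsquo;[Quantity]\nTotal Picked Quantity = SUM(\u0026lsquo;Sales OrderLines\u0026rsquo;[PickedQuantity])\nHigh Value Picked Quantity = CALCULATE(SUM(\u0026lsquo;Sales OrderLines\u0026rsquo;[PickedQuantity]), FILTER(\u0026lsquo;Sales OrderLines\u0026rsquo;, \u0026lsquo;Sales OrderLines\u0026rsquo;[UnitPrice] \u0026gt; 100))\nThe first formula only makes sense as a calculated column, because it needs a current row (row context) to read UnitPrice and Quantity from; the second and third only make sense as Measures, because their results change depending on the filters in play (filter context) when they are evaluated, with the third using CALCULATE and FILTER to modify that filter context explicitly.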
A straightforward example could be just a Count of all rows on a table e.g\nSalesOrderLinesCount = COUNTROWS(\u0026lsquo;Sales OrderLines\u0026rsquo;) Although Measures are compatible with any Visualization, the most common place to find them will be in a Card, as indicated below:\nWhen first working with Measures, an advantageous feature at our disposal is the New Quick Measure option, which provides a softer, GUI-focused approach to creating common types of Measure:\nBeyond this, the sky is the limit when working with DAX Measures, with a whole range of different functions that can be used in isolation or tandem with each other.\nCalculated Columns We saw how it is possible to perform simplistic custom/calculated columns using the Power Query Editor as part of an earlier blog post. The functionality offered here can prove useful for straightforward requirements - for example, if Column1 equals \u0026ldquo;AAA\u0026rdquo;, then TRUE, else FALSE - but may soon become insufficient if you have more advanced needs for your new column. In this scenario, DAX calculated columns could come to the rescue and give you the ability to more familiarly define your business logic, particularly for those with their Excel head on.\nA key consideration when working with DAX Calculated Columns is understanding the impact they have on performance; specifically, that they consume RAM and disk space within your report. You should be aware also of the concept of circular dependencies. It is possible to build highly complex formulas within DAX, that may reference other DAX columns that you have built out. As a consequence, attempting to modify any dependent DAX formulas could result in errors. In addition to this, trying to define multiple calculated columns that perform the same operation may also lead to circular dependencies. There is another excellent article on the SQL BI website that jumps into this whole subject area in greater detail and is worth a read.\nSimilar to Measures, pretty much every type of DAX function listed here (minus ones that return table values) is useful in some way when working with Calculated Columns.\nCalculated Tables The types of DAX functions discussed so far focus on returning a single value - either in the sense of a fixed, often aggregated, value as part of a Measure or a single value on a row-set with a Calculated Column. The next step up from that, and where DAX enters a whole new realm of usefulness, is the ability to define Calculated Tables based on DAX formulas. There are a LOT of options at your disposal here. As well as being able to derive Calculated Tables directly from other objects within your model (e.g. the DAX formula SalesOrderLinesDupe = \u0026lsquo;Sales OrderLines\u0026rsquo; would create a copy of the Sales OrderLines table), there is also a wide array of different functions available that, when used individually or in combination, can satisfy any particular requirement:\nThe FILTER function returns a new table object filtered from another, based on a specific value. The CALCULATETABLE function is the \u0026ldquo;big brother\u0026rdquo; of this particular function, with the ability to define multiple, potentially complex filter conditions.\ne.g. The following formula will return all Sales OrderLine records with a PickedQuantity value greater than 7. The ALL function returns all data from a table with underlying filters removed (i.e. filters that are defined using the options indicated in the screenshot below. 
It can be used to specify a whole table or single/multiple column(s). There are also variants of this, such as ALLEXCEPT (which will return all columns except the ones listed) and ALLNOBLANKROW (which removes any blank rows before returning the data). VALUES and DISTINCT return a list of distinct rows only. VALUES can only work with physical tables, whereas DISTINCT also works with table expressions.\ne.g. The following DAX formula returns 227 distinct Description values from the Sales OrderLines table: SUMMARIZE groups a table by one or more columns, adding new ones where necessary. The grouping is optional. Only rows that contain data will be returned.\ne.g. The following DAX formula will return the total UnitPrice for each Sales OrderLines Description: ADDCOLUMNS returns an existing table with new columns defined, typically as DAX formulas. SELECTCOLUMNS behaves similarly, but you have to specify the columns to return, including any new ones.\ne.g. SalesOrderLinesWithCalcColumn = ADDCOLUMNS(\u0026lsquo;Sales OrderLines\u0026rsquo;, \u0026ldquo;IsEndOfMonth\u0026rdquo;, IF(DAY(\u0026lsquo;Sales OrderLines\u0026rsquo;[LastEditedWhen]) \u0026gt;= 25, TRUE(), FALSE())) returns the Sales OrderLines table, with the example IsEndOfMonth column included as a new column based on a DAX formula. TOPN performs ranking based on conditions and returns the specified number of records based on the ranking criteria. Ties can also occur, meaning that the specified ranking number may not correlate to the actual number of rows returned.\ne.g. The following formula will return the Top 25 Sales OrderLines records by UnitPrice. Notice the number of records returned exceeds 25; this is because of multiple ties within the underlying data. There are a range of functions available to accommodate common table-join scenarios. CROSSJOIN allows for Cartesian product joins from multiple tables; NATURALINNERJOIN \u0026amp; NATURALLEFTOUTERJOIN perform self-explanatory joins, which seasoned SQL Server developers should have no trouble in guessing; and GENERATE \u0026amp; GENERATEALL allow for more bespoke table joins to take place.\ne.g. the following formula will perform an inner join of the Sales Orders and Sales OrderLines tables and return the ID values from both as a new table object. GENERATESERIES creates a single table with a list of numerical values. It requires the start \u0026amp; end number and the (optional) increment value.\ne.g. SeriesExample = GENERATESERIES(1, 150, 1) will generate a single column table object with 150 rows, numbered 1 to 150. CALENDAR generates a list of date/time values in a single table column object, based on the range specified. CALENDARAUTO behaves the same but creates a list of relevant date values after evaluating the whole model (e.g. if there is another table with dates between January 1st and November 30th 2018, then this would be the list of dates generated).\ne.g. Using CALENDARAUTO with the WideWorldImporters Sales Orders and Sales OrderLines tables will create date values from the 1st January 2013 through to the 30th November 2016. ROW generates a single row table with the column values specified, based on a key/value pairing.\ne.g.
RowExample = ROW(\u0026ldquo;My DAX\u0026rdquo;, \u0026ldquo;Brings all the nerds to the yard\u0026rdquo;, \u0026ldquo;And they\u0026rsquo;re like\u0026rdquo;, \u0026ldquo;Do you wanna write Measures\u0026rdquo;, \u0026ldquo;Damn right, I want to write Measures\u0026rdquo;, \u0026ldquo;I can teach you, but you must learn M first\u0026rdquo;) produces the following table: UNION combines two or more tables vertically, retaining duplicates. Column names do not need to match, but the number of columns does. INTERSECT is similar to UNION but maintains only values that exist the same in both tables. Nested UNIONS are usable alongside this, but the specified order can impact the result. EXCEPT is similar to UNION and INTERSECT, but outputs rows that exist only in the first table, not the second one.\nDATATABLE allows the user to specify tables that contain manually entered data, similar to the Enter Data feature:\ne.g.: What If Parameters There may be times within Power BI where you need to provide some predictive modelling capability. For example, show how potential sales will look based on an increased margin of 25%. Or it could be, as developers, we wish to test the functionality of some of our DAX formulas by having the ability to see how potential ranges of values will affect our calculations. For either scenario, What If Parameters can provide a solution. By default, on creation, a slicer control is provided that can be placed anywhere on your Report. To create a What If Parameter:\nWithin Power BI Desktop, on the Modeling tab, select the New Parameter option: On the What-if parameter dialog box, enter the details as indicated below and press OK: This will then add the following components to your report:\nA Slicer control called TestWhatIfParameter A new Calculated Table called TestWhatIfParameter: Under the Visualizations tab, select the Gauge visualization to add it your report. Define the properties for the Gauge visual as follows:\nNow, when you adjust the value in the TestWhatIfParameter, the Gauge will update accordingly. For example, setting it to 75 will update the report as follows: This simple example does not do justice to the potential that this feature has, so I would recommend exploring it further yourself.\nKey Takeaways DAX is the primary formula language when working with datasets outside of Power Query. It includes, to date, more than 200 different types of functions that can assist in all sorts of data modelling. An important concept to grasp within DAX is context and, specifically, row context (formulas that calculate a result for each row in a dataset) and filter context (formulas that automatically apply any filtering carried out at report level). The sheer amount of DAX functions available makes it impossible to master and remember all of them, particularly when it comes to the exam. Your learning should, therefore, focus on learning the general syntax of DAX and the general types of functions available (aggregation, date/time etc.) There are three principal means of utilising DAX with Power BI: As Measures: These typically present a scalar value of some description, often an aggregation or a result of a complex formula. Using them in association with a Card visualization type is recommended, but this is not a strict requirement. As Calculated Columns: Similar to the options available within Power Query, Calculated Columns provide a dynamic and versatile means of adding new columns onto your datasets. 
Compared with the options available within Power Query and the complexity of the M language, DAX Calculated Columns might represent a more straightforward means of adding custom columns onto your datasets. As Calculated Tables: A powerful feature, mainly when used in conjunction with Calculated Columns, you have the ability here to create entirely new datasets within the model. These will typically derive from any existing datasets you have brought in from Power Query, but you also have functionality here to create Date tables, sequence numbers and manually defined datasets as well. What-if Parameters provide a means of testing DAX formulas, as well as allowing report users to perform predictive adjustments that can affect multiple visualizations on a report. DAX is a subject so vast that there are entire books devoted to it. Therefore, from an exam perspective, don\u0026rsquo;t worry too much about becoming a DAX master. In next week\u0026rsquo;s post, we\u0026rsquo;ll hopefully go down a few gears as we take a look at how to work with KPIs and how to apply them to visualizations.\n","date":"2018-12-16T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-using-dax-for-calculated-columns/","title":"Exam 70-778 Revision Notes: Using DAX for Calculated Columns"},{"content":"Another week, another Power BI post 🙂 This is the fifth post in my series focusing on Microsoft Exam 70-778, where I aim to provide a detailed overview and description of the skills needed to tackle the exam successfully. Last week\u0026rsquo;s post rounded off the opening theme, Consuming and Transforming Data By Using Power BI Desktop, as we focused on the options available to us to help tidy up data within the Power Query Editor. We now start to move outside of this area of the application with the Modelling and Visualising Data theme and, specifically, the following skill area:\nManage relationships; optimize models for reporting; manually type in data; use Power Query\nAs before, to follow on as part of any examples, make sure you have downloaded the WideWorldImporters sample database and hooked up Power BI Desktop to the tables described in my first post in this series.\nRelationships It will be a rare occurrence if you find yourself regularly working with a single table object within Power BI. The required data for most reporting situations will generally reside in different locations - whether within separate tables in a single SQL database or across different instances or applications. In most cases, there will be some field - or key - that will link records together and act as the bedrock when defining Power BI Relationships.\nFor those who are coming from a SQL or Dynamics 365 Customer Engagement background, I would expect that the concept of Relationships will present little difficulty in grasping. But, for those who are coming in cold, it is worth considering the following question - Why might you want to look at defining Relationships in the first place? The reasons will generally be specific to your business need, but can be generalised as follows:\nDatabase normalization creates a degree of complexity that can be difficult to unweave from a reporting standpoint. Customer records may exist in one table, Address details in another, their orders in a third\u0026hellip;you get the picture. Consequently, there is a need to bring this all together into a single/simplified view when developing reporting solutions; Relationships help to meet this objective.
The great benefit of using Power BI is its ability to provide a consistent, contextual filtering experience across Reports. So, for example, when adjusting the range on a Date slicer, all other visualisations are automatically refreshed to only show data from the updated period. This wizardry is all achieved by Relationships and the cross-filtering rules you define for them. Power BI data models can grow in complexity over time. The defining of Relationships simplifies this to a degree and allows you to view all of your queries in a clean, visual manner. We can see an example of how a Power BI Relationship looks in the picture below:\nWe can see above a one-to-many (1:N) Relationship between the Sales.Customers and Sales.Invoices tables from the WideWorldImporters database. The matching key, in this case, is the CustomerID field.\nRelationships are created and managed in one of two ways:\nAutomatically when bringing data into Power BI Desktop the first time. Power BI will automatically detect and build any Relationships within a SQL Server-derived data source, based on any PRIMARY KEY/FOREIGN KEY CONSTRAINTs that exist already. The Select Related Tables option can be used to intelligently determine which tables can be brought in on this basis. By going to the Manage Relationships button on the Home tab within the main Power BI window (NOT the Power Query Editor). This option will allow you to manage and create new Relationships based on your requirements. An Autodetect\u0026hellip; option is also available, which behaves similarly to the data source auto-detection discussed previously. The example below shows the WideWorldImporters Relationships referenced earlier: The walkthrough exercise at the end of this post will go through the process of creating a Relationship from start to finish, but it is useful at this stage to dive into the more detailed concepts involving Relationships:\nThe following types of Relationships (or Cardinality) can be specified: One to Many (1:N) / Many to One (N:1) One to One (1:1) Many to Many (N:N) - This feature is currently in Preview and caution is advised when using it. Each Relationship requires that a Cross filter direction is specified, which will determine how filters cascade out to related tables. These are configurable as either Single (in the direction of your choosing - left to right or right to left) or Both. 1:1 Relationships must always use Both. Only one Relationship is classifiable as Active between two tables at any given time. Despite this limitation, it is possible to define multiple Relationships, if required. It is essential to understand with this that only Active Relationships are utilisable when performing DAX column/measure calculations, which may lead to unintended results. You can get around this by either changing the Inactive Relationship to Active or by taking advantage of the USERELATIONSHIP function to force DAX to pick your preferred Relationship. The Assume Referential integrity option is only available when you are using DirectQuery and, if enabled, underlying queries will attempt to use INNER JOINs as opposed to OUTER JOINs in any query folding. While this can speed things up, it may also lead to missing or incomplete results. Formatting Columns We saw in last week\u0026rsquo;s post some of the more functional data formatting options that the Power Query Editor provides. I say functional in the sense of formatting that serves a specific, data quality need, as opposed to a styling requirement.
Moving outside of the Power Query Editor and we can discover a whole host of additional styling formatting options on the Modelling tab:\nLet\u0026rsquo;s break all this down into a bit more detail:\nFormatting Data Type: Here, it is possible to override the data type for a particular column value. The options available are: Decimal Number Fixed decimal number Whole Number Date/Time Date Time Text True/False Binary Format: Depending on the Data Type value selected, the list of choices here will change accordingly. In most cases, such as for Text values, you will only have a single selection available, but your options for Date/Time and Currency values are considerably richer. For example: Date/Time values can be formatted in numerous different ways, both from a styling and locale perspective. This includes formatting dates in the good ol\u0026rsquo; British way, meaning that the ISO 8601 date value 2018-01-01 can appear as 01-01-2018, 1 January 2018 or even Monday, 1 January 2018. Currency values are modifiable to any global currency sign or shortcode. So, the value 50 is displayable as both £50 and 50 GBP for British Pound currency figures. $ Button: This is, in effect, an alternate way of changing the Currency value on a field, as described previously. % Button: Converts a value to a percentage. For example, the Whole Number value of 75 would appear as 75%. \u0026rsquo; Button: Adds a thousand separator to a number value. Therefore, 100000 becomes 100,000. .0/.00 Button: Adjusts the number of decimal places at the end of a numeric, up to a maximum limit of 15. Properties Data Category: This option, designed primarily for String values, allows you to indicate the type of data the field represents, with a specific focus towards mapping capabilities. For example, it is possible to mark a field as State or Province, to ensure that Power BI does not make an incorrect assumption when attempting to plot an address to a map. Some additional options here include: Latitude/Longitude to, again, ensure accurate plotting to a map visual. Classifying a field as containing either a Web URL or Image URL. Categorising a field as Barcode value. Default Summarization: When building out a report and adding fields to visualizations, Power BI assumes the preferred aggregation value to use. This will be pretty spot on in most cases - for example, Currency fields will default to a Sum aggregation. However, it is possible to override this option to use any of the following, easily guessable, alternatives: Don\u0026rsquo;t summarize. Sum Average Minimum Maximum Count Count (Distinct) Sort By Column There may be a requirement to perform sorting of column values, based on some predefined logic as opposed to merely alphabetical or numeric order. The best example to provide here would be for Month Names. Instead of sorting in alphabetical order, like so:\nApril\nAugust\nDecember\nFebruary\netc.\nWe would want to sort by logical Month order i.e.:\nJanuary\nFebruary\nMarch\nApril\netc.\nAnother scenario could be column sorting based on High, Medium and Low categorisation; all records with a value of High would need to appear first and the Medium second, as opposed to Low if sorted alphabetically.\nSort By Column gives us the flexibility to achieve these requirements, but it is crucial first to ensure that a proper sorting column resides within your query. 
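The walkthrough below adds this sorting column through the Add Column ribbon; for reference, the step that gets generated behind the scenes is a single line of M broadly along these lines (the preceding step name here is illustrative and will vary depending on your query):
#"Inserted Month" = Table.AddColumn(#"Changed Type", "MonthNumber", each Date.Month([TransactionDate]), Int64.Type)
With a numeric column like this in place, Sort By Column can then do its job.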
Going into the Power Query Editor and modifying the Sales CustomerTransactions table enables us to add on a Month Number column for the TransactionDate field by going to the Add Column tab and selecting Date -\u0026gt; Month -\u0026gt; Month option:\nComing out of the Power Query Editor using the Close \u0026amp; Apply option, go into the Sales CustomerTransactions table within the Data tab and select the Sort By Column option, selecting the MonthNumber field as the Sort by Column:\nWe should now see when adding a simple Table visualization to a report, that the Month values derived from TransactionDate appear in correct sort order:\nThrough this example, you can hopefully see how straightforward it is to accommodate more unique sorting requirements with minimal effort.\nManual Data Entry Although Power BI assumes, for the most part, no requirement to define any manual data as part of your reports, there may be situations where this need arises. For example, on the WideWorldImporters Sales.CustomerTransactions table that has been built out through this series, you may want to define a manual list of all FullName values that can then be linked up to a slicer for filtering purposes. There are two ways this requirement can be met, with both options having their benefits and potential preference, based on how much you like to code 🙂 :\nThe Enter Data button on the Power Bi Desktop Home tab allows you to create a new Table object based on manual data entry. The table can have as many columns as you like and, what\u0026rsquo;s nifty is that it supports full copy + paste functionality from other applications. It is a VERY convenient feature to bear in mind: Using PowerQuery, it is also possible to define custom Tables populated with data. We saw an example of how to achieve this in last weeks post when creating some example data for cleansing, but there are some further options available to us: Simple lists of data can appear within brackets. 
For example, the following M code: let Source = { \u0026#34;This\u0026#34;, \u0026#34;Is\u0026#34;, \u0026#34;A\u0026#34;, \u0026#34;Test\u0026#34;} in Source Would generate the following data within Power Query: Records with multiple rows/fields can also be generated using the example code below: Table.FromRecords({[Did = \u0026#34;This\u0026#34;, You = \u0026#34;is\u0026#34;, Know = \u0026#34;Testing\u0026#34;]}) Which would appear as follows: Finally, as shown last week, it is possible to create a Table object, using the example code below: #table( {\u0026#34;Forename\u0026#34;, \u0026#34;Surname\u0026#34;, \u0026#34;Description\u0026#34;}, { {\u0026#34;JANE\u0026#34;,\u0026#34;smith\u0026#34;,\u0026#34; this describes the record\u0026#34;}, {\u0026#34;alan\u0026#34;, \u0026#34;JOHNSON\u0026#34;, \u0026#34; record description detected \u0026#34;}, {\u0026#34; MARK\u0026#34;, \u0026#34;CORRIGAN \u0026#34;,\u0026#34;another description\u0026#34;}, {\u0026#34;JANE\u0026#34;,\u0026#34;smith\u0026#34;,\u0026#34; this describes the record\u0026#34;} } ) The tools available here help to satisfy many potential ad-hoc requirements when using Power BI Desktop and when used in conjunction with other features described in this post and earlier in the series, start to become pretty powerful.\nExample: Creating Relationships In this very straightforward example, we will see how it is possible to create a Relationship manually from within Power BI Desktop, using the WideWorldImporters sample database as our connection source:\nIt will be necessary to disable to automatic creation of Relationships when importing data for this exercise. This is done by going to the Options -\u0026gt; Data Load screen and ensuring that the appropriate option has been unticked: Follow the instructions discussed in this post to connect to the WideWorldImporters database but, on this occasion, ensure that the following tables only are selected for import: Once the data loads into Power Bi, navigate to the Relationships tab and you should see the two table objects indicated below (you may need to resize them accordingly for all fields to display): Because of the setting change carried out in step 1, the underlying Relationship between both of these tables in SQL has not been detected and added. To fix this, click on the Manage Relationships button on the Home tab and then the New button to open the Create relationship window, as indicated below: Define the following settings for the new Relationship, as described and illustrated below: In the first drop-down box, select Sales Orders and verify that the OrderID column highlights itself. In the second drop-down box, select Sales OrderLines and verify that the OrderID column highlights itself. For Cardinality, ensure One to many (1:*) is selected. For Cross filter direction, ensure Both is selected. Ensure Make this relationship active is ticked. Press OK and then Done to create the Relationship. The Relationship window should refresh accordingly to indicate that a new Relationship exists between the two tables: Key Takeaways Relationships form the cornerstone of ensuring the long-term viability and scalability of a large data model. Assuming you are working with well-built out, existing data sources, Power BI will automatically detect and create Relationships for you. In situations where more granular control is required, these Relationships can be specified manually if needed. 
It is worth keeping in mind the following important features of Relationships: They support one-to-one (1:1), one-to-many (1:N) and many-to-one (N:1) cardinality, with many-to-many (N:N) currently in preview. Filter directions can be specified either one way or bi-directionally. Only one relationship between any two tables can be active at any given time. It is possible to sort columns using more highly tailored custom logic via the Sort By Column feature. The most common requirement for this generally involves the sorting of Month Names in date order but can be extended to cover other scenarios if required. To implement, you should ensure that your data has a numbered column to indicate the preferred sort order. Moving outside of the Power Query Editor presents us with more flexibility when it comes to formatting data to suit particular styling or locale requirements. While the majority of this functionality provides date/time and currency formatting options, for the most part, it is also possible to categorise data based on Location, the type of URL it is or on whether or not it represents a Barcode value; these options can assist Power BI when rendering certain types of visualizations. There may be ad-hoc requirements to add manually defined data into Power BI - for example, a list of values that need linking to a Slicer control. The Enter Data button is the \u0026ldquo;no-code\u0026rdquo; route to achieving this and supports the ability to copy \u0026amp; paste data from external sources. For more advanced scenarios, you also have at your disposal a range of M code functionality to create Lists, Records and Tables, which can be extended further as required. Hopefully, by now, you are starting to get a good feel for how Power BI works and also the expected focus areas for the exam. Next week\u0026rsquo;s post is going to be a biggie, as we jump head first into DAX formulas and how they can be used for calculated columns and Measures. We\u0026rsquo;ll also introduce the concept of What-if Parameters and how they work in practice.\n","date":"2018-12-09T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-create-and-optimise-data-models/","title":"Exam 70-778 Revision Notes: Create and Optimise Data Models"},{"content":"Welcome to the third post in my series on Microsoft Exam 70-778, where I aim to provide a detailed overview and description of the skills needed to tackle the exam successfully. We saw in the previous post how we could use the Power Query Editor to perform a multitude of different transformations against data sources; this will now be taken further, as we start to look at how to ensure optimal quality within our Power BI data models. The relevant skills for this area are as follows:\nManage incomplete data; meet data quality requirements\nTo follow on as part of the examples below, make sure you have downloaded the WideWorldImporters sample database and hooked up Power BI Desktop to the tables described in my first post in this series. With all this done, it is crucial first to grapple with a vital concept relating to the management of incomplete data, namely, the ability to\u0026hellip;\nFilter Data If you are consuming an entire SQL table\u0026rsquo;s worth of data within the Power Query Editor, the size of your model can grow over time. In this scenario, it may be necessary to apply column-level filters to your data directly within Power Query, as opposed to merely dumping a cartload of irrelevant data in front of your users.
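Under the hood, such a filter is simply another step in the query; for example, restricting the Sales CustomerTransactions table to more recent rows might produce a step broadly like the following sketch (the preceding step name and the date threshold are purely illustrative):
#"Filtered Rows" = Table.SelectRows(#"Changed Type", each [TransactionDate] >= #date(2016, 1, 1))
You rarely need to write this by hand, though.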
Fortunately, the whole experience of filtering data should be a cakewalk, given its similarity with Microsoft Excel filters, as we can see below when attempting to filter the FullName field on the WideWorldImporters Sales.CustomerTransactions table:\nThis particular feature does have some differences compared with Excel, though:\nFor large datasets, only a subset of all distinct record values is returned. This fact is indicated above via the List may be incomplete warning sign. Clicking the Load more option will do exactly that, but may take some time to process. The range of filters available will differ depending on the data type of the column. For example, String data types will have filters such as Begins With\u0026hellip;, Does Not Begin With\u0026hellip;, whereas Dates are filterable based on Year, Quarter, Month etc. The Remove Empty option will do just that - remove any rows that have a blank or NULL value. As discussed on the Performing Data Transformations post, when you start combining filters with Parameters, it is possible to transform particularly unwieldy filtering solutions into more simplistic variants. Column Errors When it comes to adding custom columns as part of your Power Query transformations, there is always the potential for these to error, due to a misconfigured calculation or some other kind of unforeseen issue. When this occurs, the corresponding column value is flagged accordingly within the Power Query Editor and can be inspected further to determine the root cause of the issue. To demonstrate this, add a custom column onto the Sales.CustomerTransactions table using the following formula\n[AmountExcludingTax] * [Error]\nThe value for each row in the new column should resemble the following:\nWhen this issue occurs, it may be most prudent to first try and address the issue with your underlying calculation, ideally fixing it so that no error occurs at all. Where this is not possible, you can then look towards removing any rows that contain such a value. We will see how this can be done in the Adding/Removing Rows section later in this post.\nBlank Data On the flip side of field value errors is blank data. In most cases, when working with SQL data sources, this will rear its head when there are NULL values in your database. For example, take a look below at the CreditLimit field on the WideWorldImporters Sales.Customers table:\nWhen these fields are fed through from Power Query and relied upon as part of DAX custom column/Measure creation, you may start to get some unintended consequences. For example, after filtering the same table above only to retain rows where the CreditLimit equals null, attempting to create a Measure that totals up all CreditLimit values results in the following when displayed as a Card visualisation:\nIf you, therefore, have a desire to perform additional aggregations or custom calculations on fields that contain blank/null values, then you should take the appropriate steps to either a) remove all rows that contain one of these two values or b) perform a Replace action on the column to ensure a proper, default value appears instead. For the CreditLimit field, this can be as simple as replacing all null values with 0:\nAdding/Removing Rows Often our data sources are not pristine clean from a data perspective - duplicate rows may be common, it could be that rows exist with completely blank or null values or your incoming data file could be a complete mess from a column header perspective.
With this problem in mind, the Power Query Editor provides us with the functionality to keep or remove rows based on several different conditions:\nThe options granted here should be reasonably self-explanatory, but the list below contains some additional guidance if you need it:\nKeep/Remove Top Rows: Keeps or removes the top number of rows, in ascending order, based on the amount you specify. Keep/Remove Bottom Rows: Keeps or removes the bottom number of rows, in descending order, based on the number you specify. Keep Range of Rows: Keeps the number of rows specified based on the starting row number. For example, for a 50-row table, if a First row value of 1 and a Number of rows value of 10 is selected, then the first ten rows will be retained. Keep/Remove Duplicates: Based on the currently selected column(s), keeps or removes all rows with duplicate values. Keep/Remove Errors: Based on the currently selected column(s), keeps or removes all rows that have an Error value. Remove Blank Rows: Removes any row that has a blank or NULL value. Formatting Column Data Data from a live, production system, such as Dynamics 365 Customer Engagement, can sometimes be a complete mess from a readability perspective; incorrect casing and invalid characters are typically commonplace in this situation. Fortunately, there are a range of options at our disposal with the Power Query Editor, on the Transform tab:\nMost of these are self-explanatory, with the exception of the Trim and Clean options:\nTrim removes any leading/trailing whitespace characters from a string value. Clean detects and removes any non-printable characters from a string value. Although not technically a data cleansing options, there are some clear usage scenarios for the Add Prefix \u0026amp; Add Suffix options, such as creating unique reference code for each column value, based on the unique record number value (e.g. ABCD-1, ABCD-2 etc.).\nFormatting options for other column types are not available from within Power Query. So if, for example, you wished to format all date values in the format YYYY-MM-DD, you would have to move outside of the Power Query Editor to achieve this. The steps involved to accomplish this will be a topic for a future post.\nExample: Cleansing Data Having reviewed each of the possible cleansing options at our disposal, let\u0026rsquo;s now take a look at an example of how to cleanse a troublesome dataset:\nWithin the Power Query Editor, on the Home tab, select the New Source -\u0026gt; Blank Query option. This will create a new Query in the left-hand pane called Query1. Select Query1 and press F2 to allow you to rename it to CleanseExample. Right-click the CleanseExample query and select the Advanced Editor option: Within the Advanced Editor window, copy \u0026amp; paste the following code into the window:\n#table( {\u0026#34;Forename\u0026#34;, \u0026#34;Surname\u0026#34;, \u0026#34;Description\u0026#34;}, { {\u0026#34;JANE\u0026#34;,\u0026#34;smith\u0026#34;,\u0026#34; this describes the record\u0026#34;}, {\u0026#34;alan\u0026#34;, \u0026#34;JOHNSON\u0026#34;, \u0026#34; record description detected \u0026#34;}, {\u0026#34; MARK\u0026#34;, \u0026#34;CORRIGAN \u0026#34;,\u0026#34;another description\u0026#34;}, {\u0026#34;JANE\u0026#34;,\u0026#34;smith\u0026#34;,\u0026#34; this describes the record\u0026#34;} } ) It should resemble the below if done correctly:\n4. When you are ready, press the Done button. 
PowerQuery will then create a table object using the code specified, populating it with records, as indicated below:\nThere are three key issues with this data that need resolving: The inconsistent word casing on the Forename/Surname. Whitespacing on the Description and ForeName fields. Duplicate records. These issues are fixable by taking the following action:\nFor the casing issue, CTRL + left click to select the Forename \u0026amp; Surname fields, go to the Transform tab and select Format -\u0026gt; Capitalize Each Word. Your data will then be modified to resemble the below: For the whitespace issue, select the Forename \u0026amp; Description fields and, on the Transform tab, select Format -\u0026gt; Trim: Finally, to remove the duplicate record for Jane Smith, highlight the Forename \u0026amp; Surname fields, navigate to the Home tab and select Remove Rows -\u0026gt; Remove Duplicates. This will then leave us with three records, as illustrated below: As a final (optional) step, we can also look to clean up the Description field values by applying the Capitalize Each Word formatting option: Et voilà! We now have a tidy and clean table, ready for consumption within Power BI 🙂\nKey Takeaways Data can be filtered directly within Power Query, using Excel-like functionality to assist you in only returning the most relevant data in your queries. The data type of each field plays a particularly important part of this, as only specific filter options will be at your disposal if, for example, you are working with numeric data. From a data quality perspective, you typically will need to handle column values that contain one of two possible value types: Errors: This will usually occur as a result of a calculated column field not working correctly. The best solution will always be to address any issues with your calculated column, such as by using a conditional statement to return a default value. Blanks/NULLs: A common symptom when working with SQL derived data sources, your real problems with blank values start to appear when you attempt to implement DAX custom columns/Measures outside of the Power Query Editor. It is, therefore, recommended that these are dealt with via a Replace action, depending on your fields data types. For example, a number field with blank/NULL values should be replaced with 0. The Remove Rows option(s) can act as a quick way of getting rid of any Error or Blank/NULL rows and can also be utilised further to remove duplicates or a range of rows. In most cases, you will have similar options available to you with Keep Rows instead. There are a variety of formatting options available to us when working with text/string data types. These range from fixing capitalisation issues in data, through to removing whitespace/non-printable character sets and even the ability to prepend/append a new value. Data cleansing is a reasonably short subject area in the grander scheme of things, but the steps covered represent key stages towards building out a competent and trustworthy reporting solution. The next post in the series will discuss the options available to us in building out more complex and bespoke data models.\n","date":"2018-12-02T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-cleansing-data/","title":"Exam 70-778 Revision Notes: Cleansing Data"},{"content":"Welcome to the second post in my series concerning Microsoft Exam 70-778, where I hope to provide a detailed revision tool for those preparing to take this exam. 
Last week\u0026rsquo;s post introduced the fundamental notions behind connecting to data sources in Power BI Desktop, and we will lead on from this to look at the Perform transformations topic, which covers the following skills:\nDesign and implement basic and advanced transformations; apply business rules; change data format to support visualization\nLet\u0026rsquo;s jump straight in by welcoming the concept that is most relevant to all of this\u0026hellip;\nPower Query First introduced in Excel some years back, the Power Query M formula language is very much the engine underneath the hood of Power BI. It deals with everything from the retrieval of data through to the removal of rows, creation of new columns, filtering - basically, if it concerns data manipulation, Power Query can more than likely handle it. By right-clicking any Query within the Power Query Editor window and selecting Advanced Editor, you can view the syntax of the language in detail and edit it to your hearts contents. In the screenshot below, Power Query is connecting to the WideWorldImporters database and returning the table Application.PaymentMethods:\nNow, if you are coming into Power BI for the first time, the thought of having to learn a whole new programming language can be a little daunting. That is why the Power Query Editor is the ideal tool for beginners to carry out most (if not all) data transformation from within the Power BI interface. We will take a look at this topic in more detail shortly, but when it comes to working with Queries, it is worth mentioning the following pieces of functionality that we have at our disposal:\nThe Advanced Editor has some minimal syntax error detection built in, but nothing on the same par as IntelliSense. Triple checking your Queries is always recommended to avoid any errors when loading your data. Queries can be renamed at any time and be given detailed descriptions if required. This step is generally recommended to help users better understand the data they are interfacing with. Queries will remain actively loaded within the Power Query Editor, unless and until they are disabled explicitly by right-clicking on them and de-selecting the Enable load option. Queries with names in italics are disabled. It is possible to both Duplicate \u0026amp; Reference queries at any time. The Reference option is particularly useful if you need to create variants of a particular source query that filters on different values, for example. Regardless of whether the Query is loaded directly into the model or not, it can be duplicated or referenced without issue. It is possible also to create Parameter Queries, and even Custom Function Queries to, for example, perform highly specific transformation actions for each column value provided. Parameters will be discussed in greater detail later on, whereas Custom Functions are beyond the scope of the exam. Transforming Data: A Brief Overview The Transform and Add Column tabs within the Power Query Editor are your go-to destinations when it comes to finding out what you can do from a transformation perspective:\nWith the toolbox provided here, you can do things such as:\nPivot/Unpivot your data. Replace column values. Split data based on delimiters. Perform advanced mathematical calculations. Create new columns derived from date/time values, such as Month Name or time durations. Define new columns based on examples, specific calculations/conditions or from another column value. 
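Each of these actions ultimately becomes another step in the underlying M query for your data source. As a rough flavour of the syntax only - the server, table, column values and step names below are illustrative, and are not part of the walkthrough later in this post - replacing values and splitting a column on a delimiter could look something like this:
let
    Source = Sql.Database("MYSQLSERVER", "WideWorldImporters"),
    People = Source{[Schema = "Application", Item = "People"]}[Data],
    #"Replaced Value" = Table.ReplaceValue(People, "N/A", null, Replacer.ReplaceValue, {"PhoneNumber"}),
    #"Split Name" = Table.SplitColumn(#"Replaced Value", "FullName", Splitter.SplitTextByDelimiter(" "), {"Forename", "Surname"})
in
    #"Split Name"
In practice, the Power Query Editor writes steps like these for you as you click through the ribbon options.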
The example at the end of this post will cover some of these specific transformation steps in detail, showing you how to apply them straightforwardly in the Power Query Editor.\nMerging \u0026amp; Appending Queries In some cases, you are likely to be bringing in data that is similar or related in some way, and your principal requirement will be to bring this together into a more simplified view. In this regard, the following two features are especially useful:\nMerging: This involves combining two queries horizontally. If you are familiar with SQL JOINs, then this is the same thing. You define your base table and table to merge, the key values to pair on and then, finally, your join type. You have the following options at your disposal here: Left Outer Right Outer Full Outer Inner Left Anti Right Anti Appending: Best used when you have queries with the same overall structure, this step involves combining Queries vertically. The number of Queries can be as many as you want and you have the option of either a) appending onto the existing Query or b) onto an entirely new one. It is also possible, but not recommended, to Append Queries with entirely different structures. Using Parameters Parameters are a great way to give you the ability to quickly and easily modify filters in one or more Queries. They are created from the Home tab by going to the Manage Parameters -\u0026gt; New Parameter option, as indicated below:\nThen, you will be able to define your Parameter. For example, to create a Parameter that will filter the PaymentMethodName field on the Application PaymentMethods Query, specify the settings as indicated below:\nA new Query for the Parameter will appear on the left-hand pane, like so:\nThen, go to the Application PaymentMethods Query, click on the button with the arrow and select Text Filters -\u0026gt; Equals\u0026hellip; to open the Filter Rows window. Make sure that equals is selected and, on the second dropdown box, select Parameter and then the newly created Parameter:\nPressing OK will then apply the appropriate filter. Any changes made to the selected Parameter value will automatically flow through to the filters you have defined. When it comes to working with many filters across multiple Queries, Parameters can take away a lot of the pressure involved.\nQuery Folding Where possible, when performing transformation steps within the Power Query Editor, Power BI will attempt to translate them into the most efficient native query for the underlying data source and apply this accordingly. In most cases, this will only occur for SQL based data sources. In the example below, after right-clicking on the Applied Steps for the Sales.Invoice query and selecting View Native Query, we can view the underlying T-SQL query used:\nYou should, therefore, pay careful attention to the order in which you apply your steps to ensure that Query Folding takes place wherever possible. There is an excellent article on the MSSQLTips website that goes into greater detail on this whole subject.\nExample: Transforming Table Data Using Power Query Picking up from where we left off last time, we will now perform a range of different transformation actions against the Sales.CustomerTransactions table from the WideWorldImporters sample database. The steps that follow are designed to give you a flavour of the types of transformation activities that you can perform against your data:\nWithin Power BI Desktop, click on the Edit Queries button.
The Power Query Editor window will open, listing all of the tables imported from the WideWorldImporters database. The Query Sales CustomerTransactions should already be selected but, if not, double-click on it to load the data from this data source into the main window: We can see in the main window that Power BI has automatically decided the best data types to use, based on the underlying SQL table schema. However, for all columns that relate to financial data (AmountExcludingTax, TaxAmount, TransactionAmount \u0026amp; OutstandingBalance), it will be more prudent to convert these into the most appropriate data type for currency values - Fixed decimal number. While holding down the CTRL key, left-click to select each of the columns mentioned above, right-click and select Change Type -\u0026gt; Fixed decimal number: Notice now that a $ symbol appears next to each of the chosen fields. You can also see, on the right-hand pane, underneath Applied Steps, a new Applied Step called Changed Type:\nAs the interface is used to modify the data, the appropriate Power Query M code transformation occurs behind the scenes. All Applied Steps are reversible, and this can be done by highlighting it and pressing the X icon. It can also be renamed by selecting it and pressing the F2 button.\nSeveral columns have been brought over from the Sales.CustomerTransactions table that will not be particularly useful to end users, specifically: CustomerID TransactionTypeID InvoiceID PaymentMethodID These can be removed by using the CTRL and left-click method, right-clicking any of the selected columns and selecting the Remove Columns option:\nBecause we have imported data alongside other related tables, there will be some special relationship column types at the end of our table. An example of this is the Application.People field. For this example, we need to extract the FullName value from this table and include it as part of our current query, by clicking on the two arrows icon on the top left of the field, ticking the FullName field and pressing OK. You can (optionally) tick/untick the box at the bottom that says Use original column as prefix, which does exactly what it says on the tin: At this point, you can also remove all other relationship columns pictured below using the same method outlined in step 3:\nThe TransactionAmount field provides us with a total of each orders total value, by adding together the AmountExcludingTax and TaxAmount fields. Let\u0026rsquo;s assume for a second that this field does not exist in our data; in this scenario, it is possible to create a Custom Column that performs the appropriate calculation and adds this onto our table as a new field. On the Add Column tab, the Custom Column button is one way of doing this. Then, define the appropriate formula to add together both field values, using familiar, Excel-like syntax: A common requirement for reporting is the ability to report sales based on the quarter of the year. To meet this requirement, Power Query can extract information like this from a Date field with relative ease. With the TransactionDate field selected, go to the Add Column tab and select Date -\u0026gt; Quarter -\u0026gt; Quarter of Year: A new column called TransactionQuarter will be created, which can be dragged and dropped next to the TransactionDate field to keep things tidy:\nAnother common sales reporting requirement is being able to rank a particular sale by category. 
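This kind of categorisation boils down to a simple if/then/else expression in M. As a hedged sketch only - the banding thresholds, and the assumption that the banding is driven by the TransactionAmount column, are purely illustrative - the logic might read:
if [TransactionAmount] >= 1000 then "High" else if [TransactionAmount] >= 100 then "Medium" else "Low"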
Again, Power Query can come to the rescue with the Conditional Column feature: If you are familiar with if/else conditional logic flow, then the next part will be pretty straightforward 🙂 Within the Add Conditional Column window, populate each field with the field values indicated below and then press OK. You can use the Add rule button to include the additional Else if rows required for this field:\nOnce added, we can then view the field at the end of our table, working as expected:\nAt this point, our model is ready, but you could experiment further if you wish. Some additional transformation steps could include:\nExtracting the Month Name value from the TransactionDate field. Use the Split Column feature to extract the Forename and Surname from the FullName field, using Space as a delimiter. Filter the OutstandingBalance column to only include data where the value does not equal 0. Rename all columns and the Query itself to more human-readable names. Key Takeaways The Power Query M formula language is used to perform transformations to data once loaded into Power BI. Although it is possible to do this via code, Power BI allows us to define all of our required data changes from within the interface, without the need to write a single line of code. Each data source connected to represents itself as a Query within Power BI. There are many options at your disposal when working with Queries, such as renaming, merging, duplication and the ability to disable or reference as part of other Queries. There are wide-range of column transformations that can be applied, which are too numerous to mention. The Transform tab provides the best means of seeing what is available, with options ranging from formatting through to grouping and pivoting/unpivoting. New columns are addable via the Add Column tab. You can choose to base new columns on calculations, conditional logic, other column values or as a defined list of ascending numbers, which may be useful for indexing purposes. It is possible to merge or append queries together to suit your specific requirements. Merging involves the horizontal combination of Queries, whereas appending represents a vertical combination. Parameters can be used to help optimise any complex filtering requirements. Where possible, Power Query will attempt to use the most optimal query for your data source, based on the transformation steps you define. This action is known as Query Folding and, in most cases, SQL-derived data sources will support this option by default. In the next post, we will take a look at the options available to us from a data cleansing perspective and how it is possible to apply optimisation to a messy example dataset.\n","date":"2018-11-25T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-performing-data-transformations/","title":"Exam 70-778 Revision Notes: Performing Data Transformations"},{"content":"As discussed recently on the blog, I have been on a journey to try and attain the Microsoft Certified Solutions Associate Certification in BI Reporting. I was very fortunate to overcome the final hurdle of this task by passing Exam 70-778: Analyzing and Visualizing Data with Microsoft Power BI the other day. I enjoyed the opportunity to dive deeper into the world of Business Intelligence, particularly given the enhanced role Power BI has within the Business Applications space today. 
With this in mind, and in the hopes of encouraging others, today\u0026rsquo;s post is the first in a new series of revision notes for Exam 70-778. I hope that you find this, and all future posts, useful as either a revision tool or as an introduction into the world of Power BI.\nThe first skill area of the exam is all around how to Import from data sources, as described on the exam specification:\nConnect to and import from databases, files, and folders; connect to Microsoft SQL Azure, Big Data, SQL Server Analysis Services (SSAS), and Power Query; import supported file types; import from other Excel workbooks; link to data from other sources\nTo begin with, I will provide a detailed overview of the topic areas covered above, before jumping into an example of how to import data into Power BI.\nSupported Data Sources The great benefit of Power BI is its huge list of supported connectors, which are integrated neatly within the application itself. The list of all possible data sources changes on a monthly basis, and it is impossible to go into detail on each one. Suffice to say; you should at least be familiar with the following data sources:\nSQL Server (on-premise \u0026amp; Azure) SQL Server Analysis Services A wide range of vendor-specific Relational Database Management Systems (RDBMS\u0026rsquo;s), such as Oracle, MySQL, PostgreSQL, SAP Hana Any data source that supports Open Database Connectivity (ODBC) or Object Linking and Embedding, Database (OLEDB). The following flat file types: Excel (.xlsx) Text (.txt) Comma Separated Value documents (.csv) Extensible Markup Language (.xml) JavaScript Object Notation (.json) Web sources, such as Web pages or OData Feeds Some RDBMS vendor solutions have a requirement to install additional software, which will enable you to interact with that particular data source. You should check the relevant documentation for each vendor to verify any specific requirements.\nPower BI also supports a wide range of Microsoft proprietary and non-proprietary applications, such as Dynamics 365 Customer Engagement, SharePoint, Google Analytics \u0026amp; SalesForce. If you are feeling particularly technical, then you can also use the Blank Query option to, in theory, connect to any data source of your choosing or even go as far as building custom connectors yourself to interact with a specific application.\nBulk File Loading As well as supporting connections to single flat files, it is also possible to interact with multiple files existing in the same location. This feature can be useful if, for example, there is a requirement to process hundreds of .csv files with different data, but the same overall structure. The supported list of bulk file locations are:\nWindows file system folder SharePoint document folder Azure Blob Storage Azure Data Lake Storage When loading multiple files into Power BI, you not only can read the contents of each file but can also access file-level metadata, as indicated below:\nImport vs DirectQuery An important design decision when working with data sources concerns the data connectivity mode to be used. Your final choice will generally fall into one of two options:\nImport: When connecting to your data source, Power BI takes a copy of all data and stores it within your report. By implication, this places additional pressure on your local machines disk space and memory consumption. 
Import is the default option for most data sources and, to ensure that your data remains consistently up to date when deployed to the Power BI service, you have the opportunity of defining your data refresh frequency - 8 times a day for Power BI Professional and 48 times a day for Power BI Premium subscriptions. Import is the most sensible option to choose when there is no requirement for regular refreshing of your data sources or if performance concerns arise when using\u0026hellip; DirectQuery: Instead of taking a snapshot of the data, Power BI will read the data at source and store only the schema of the data within the model. At the time of writing this post, only a select number of mostly SQL based data sources are compatible with this feature. DirectQuery is your best choice when there is a need to keep reports continually up to date, and when your target data source is sufficiently beefed up to handle frequent requests. It\u0026rsquo;s also worth bearing in mind the following points when evaluating DirectQuery: DirectQuery only supports a single data source connection for the entire model, with no option of defining additional sources. While traditionally true, the release of composite models for DirectQuery removes this much-loathed limitation. There are limitations when it comes to data transformation options, especially for non-Microsoft data sources. Some query types will be unsupported. For data modelling using DAX, there are some crucial limitations. For example, Measures that use the SUMX \u0026amp; PATH functions (or their related counterparts) are not allowed. You should also be aware of a third option - Live Connection - which behaves similar to DirectQuery but is for SQL Server Analysis Services only. This option has the following limitations:\nNot possible to define relationships No possibility to transform data from within Power BI. Data modelling options, except for Measure creation, are almost non-existent. Importing Excel Workbooks There are some aspects of working with Excel documents in Power BI that are worth further consideration. You mostly have two options at your disposal to consume Excel workbooks:\nImport Data: Similar to working with any other flat file source, data within each of the following Excel objects is importable into Power BI: Tables Sheets Ranges You can see below how this looks for a file containing four worksheets:\nImport workbook contents: If you have built out a complex spreadsheet that utilises the full range of features available in the Excel Data Model, then it is also possible to import these into Power BI \u0026ldquo;as-is\u0026rdquo;. The following Excel Data Model features are exportable in this manner: Power Query queries Power Pivot Data Models Power View Worksheets (Most) Power View visuals; where a visual is unsupported in Power BI, an error appears on the appropriate visual. Example: Importing SQL Server Database Data What follows now is a typical data connection exercise in Power BI Desktop, which involves connecting to a SQL Server database. 
The experience described here is mostly similar for other data sources and, therefore, represents an optimal example to familiarise yourself with connecting to data sources in the application:\nLaunch Power Bi Desktop and, on the splash screen, select the Get data link on the left-hand pane: On the Get Data window, choose Database on the left-hand list, select SQL Server database and then press the Connect button: You will now be prompted to provide the following details:\nServer: This will be either the Fully Qualified Domain Name (FQDN) of the computer with a default SQL Server instance or the computer name and named instance name (e.g. MYSQLSERVER/MYSQLSERVERINSTANCE). In the example below, we are connecting to a default SQL Server instance on the computer JG-LAB-SQL Database: If you already know the name of the database you want to access, you can type this here; otherwise, leave blank. In this example, we are connecting to the WideWorldImporters sample database. Data Connectivity mode: See the section Import vs DirectQuery above for further details. For this example, select the Import setting: There are also several additional options that are definable in the Advanced options area:\nCommand timeout in minutes: Tells Power BI how long to wait before throwing an error due to connection issues. SQL statement: Specify here a pre-compiled SQL statement that will return the objects/datasets that you require. This option can be useful if you wish to reduce the complexity of your model within Power BI or if there is a requirement to return data from a stored procedure. Include relationship columns: Enabling this setting will return a single column for each defined relationship which, when expanded, gives you the ability to add related column fields onto your table object. Navigate using full hierarchy: Enabling this will allow you to navigate through the database hierarchy using schema object names. In most cases, this should remain disabled, unless there a specified schema names in your dataset (like Application, Sales, Purchasing etc. in the WideWorldImporters database). Enable SQL Server Failover support: If enabled, then Power BI will take advantage of any failover capability setup on your SQL Server instance, re-routing requests to the appropriate location where necessary. Illustrated below are some example settings for all of the above. For this walkthrough, leave all of these fields blank and then press OK to continue.\nThe Navigator window will appear, which will enable you to select the Tables or Views that you wish to work within the model. Selecting any of the objects listed will load a preview in the right-hand window, allowing you to see a \u0026ldquo;sneak peek\u0026rdquo; of the schema and the underlying data. Tick the object Sales.CustomerTransactions and then press the Select Related Tables button; all other Tables that have a relationship with the Sales.CustomerTransactions are then automatically included. Press Load when you are ready to import all selected table objects into Power BI. After a few moments, the Load window will appear and update accordingly as each table object gets processed by Power BI (exact times may vary, depending on the remote server/local machines capabilities). Eventually, when the window closes, you will see on the right-hand pane that all table objects have been loaded into Power BI and are ready to use for building out visualizations: At this stage, you would then look at loading up your imported objects into Power Query for fine-tuning. 
But that\u0026rsquo;s a topic for the next post 🙂\nKey Takeaways Power BI supports a broad range of database systems, flat file, folder, application and custom data sources. While it is impossible to memorise each data source, you should at least broadly familiarise yourself with the different types at our disposal. A crucial decision for many data sources relates to the choice of either Importing a data source in its entirety or in taking advantage of DirectQuery functionality instead (if available). Both routes have their own defined set of benefits and disadvantages. DirectQuery is worth consideration if there is a need to keep data regularly refreshed and you have no requirement to work with multiple data sources as part of your solution. Live Connection is a specific data connectivity option available for SQL Server Analysis Services. It behaves similarly to DirectQuery. It is possible to import an existing Excel BI solution into Power BI with minimal effort, alongside the ability to import standard worksheet data in the same manner as other flat file types. Look out for my next post in this series, where I will take a look at the range of transformation options available to us in Power BI, and work through some examples applied against the tables listed above.\n","date":"2018-11-18T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/exam-70-778-revision-notes-importing-from-data-sources/","title":"Exam 70-778 Revision Notes: Importing from Data Sources"},{"content":"I don\u0026rsquo;t typically stray too far from Microsoft technology areas as part of this blog, but having experienced this particular issue at the coalface and, being acutely aware of the popularity of the WordPress platform for many bloggers, I thought I\u0026rsquo;d do a specific post to help spread awareness. For those who are in a hurry\u0026hellip;\nTL;DR VERSION: IF YOU ARE USING THE WP GDPR COMPLIANCE PLUGIN ON YOUR WORDPRESS WEBSITE, UPDATE IT ASAP AND CHECK YOUR WORDPRESS INSTANCE FOR ANY NEW/SUSPICIOUS USER ACCOUNTS; IF EXISTING, THEN YOUR WEBSITE HAS BEEN HACKED. IMMEDIATELY TAKE YOUR SITE OFFLINE AND RESTORE FROM BACKUP, REMOVING AND THEN REINSTALLING THE ABOVE PLUGIN MANUALLY.\nWhen it comes to using WordPress as your blogging platform of choice, the journey from conception to fully working blog can be relatively smooth. The ease of this journey is due, in no small part, to the vast variety of custom extensions - Plugins - available to end-users. These can help to overcome common requirements, such as adding Header/Footer scripts to your website, integrating your website with tools such as Google reCAPTCHA and even to allow you to transform WordPress into a fully-featured e-commerce site. The high degree of personal autonomy this can place in your hands when building out your web presence is truly staggering, and there is no fault on the part of the WordPress project for its regular performance, feature and security release cycles. All of this has meant that the product has grown in popularity and adoption over time.\nRegrettably, the applications greatest strength is also its critical weakness point. WordPress is by far the most highly targeted application on the web today by hackers or malicious users. The latest CVE database result for the Content Management System (CMS) proves this point rather definitively but does not explain one of the most common reasons why WordPress is such a major target - namely, that most WordPress deployments are not subject to regular patching cycles. 
Plugins are by and large more susceptible to this, and any organisation which does not implement a monthly patching cycle for their WordPress site is significantly heightening their risk of being attacked. Even with all of this in place, you are not immune, as what follows demonstrates rather clearly:\nOn the 6th of November, a plugin designed to assist administrators in meeting their requirements under GDPR vanished from the WordPress Plugin store due to a \u0026ldquo;security flaw\u0026rdquo;. The developers deserve full credit and recognition here - within a small space of time, they had released a new version of the plugin with the flaw addressed - but hackers were quick on the ball with this particular vulnerability. On the afternoon of Thursday 8th November, I was alerted to the following actions which were carried out on numerous WordPress websites that I have responsibility for looking after:\nThe WordPress site setting Anyone can register setting was forcibly enabled, having been disabled previously. Administrator became the default role for all new user accounts, having been set to Subscriber previously. Next, a new user account - with a name matching or similar to \u0026ldquo;trollherten\u0026rdquo; - was created, containing full administrator privileges. Depending on your WordPress site configuration, an email was then sent to an email address, exposing the full details of your website URL and giving the attacker the ability to login into your site. From this point forward, the attacker has the keys to the kingdom and can do anything they want on your WordPress website - including, but not limited to, blocking access for other users, installing malicious codes/themes or accessing/downloading the entire contents of the site. The success of the attack lies in its rapid targeting, given the very brief window between the disclosure of the plugin flaw and the timing of the attack, and the relative straightforwardness of automating all of the required steps outlined above. For those who are interested in finding out more about the technical details of the attack, then WordFence has published a great article that goes into further detail on this subject.\nSo what should I do if my WordPress site is using this plugin or there is evidence of a hacking attempt? Here is my suggested action list, in priority order, for immediate action:\nTake your website offline, either by switching off your web server or, if you are using Azure App Service, you have some nifty options at your disposal to restrict access to your site to trusted hosts only. Restore your website from the last, good backup. Update the WP GDPR Compliance plugin to the latest version. As a precaution, change the credentials for all of the following on the website: User Accounts Web Server FTP Any linked/related service to your site that stores privileged information, such as a password, authorisation key etc. Review the following points and put in the appropriate controls, where necessary, to mitigate the risk of a future attack: Patching Cycle for WordPress, Plugins \u0026amp; Themes: You should ideally be carrying out regular patching of all of these components, at least once per month. There are also plugins available that can email you when a new update is available which, in this particular scenario, would have helped to more speedily identify the faulty plugin. 
Document your Plugins/Themes: You should have a full list of all plugins deployed on your WordPress website(s) stored somewhere, which then forms the basis of regular reviews. Any plugin that has a published vulnerability that has not been addressed by the developer should be removed from your website immediately. Restrict access to the WordPress Admin Centre: .htaccess rules for Apache or web.config changes for IIS can restrict specific URLs on a site to an approved list of hosts. This way, you can ensure that even if an exploit like the one described in this post takes place, the attacker will be restricted when trying to log into your WordPress backend. Review Backup Schedule: Typically, I\u0026rsquo;ve found that incidents like this can immediately demonstrate flaws in any backup procedure that is in place for a website - either in not being regular enough or, in the worst case, not taking place at all. You should ideally be performing daily backups of your WordPress website(s). Again, Azure makes this particularly easy to implement, but you can also take advantage of services such as VaultPress, which take all the hassle out of this for a small monthly price. Conclusions or Wot I Think Attacks of the nature described in this post are an unfortunate byproduct of the internet age and, regrettably, some of the evidence relating to this particular attack shows that individuals and small businesses are the casualties in today\u0026rsquo;s virtual conflicts on the world stage. Constant vigilance is the best defence that you can have, more so given the speedy exploitation of this particular flaw. And, there has to be a frank admission that attacks like this are not 100% preventable; all necessary attention, therefore, should be drawn towards risk reduction, with the ultimate aim being to put in place as many steps as possible to avoid having an obvious target painted on your back. I hope that this post has been useful in making you aware of this issue (if you weren\u0026rsquo;t already) and in offering some practical tips on how to resolve it.\n","date":"2018-11-11T00:00:00Z","permalink":"/wp-gdpr-compliance-plugin-vulnerability-check-your-wordpress-site-now/","title":"WP GDPR Compliance Plugin Vulnerability: Check your WordPress Site NOW!!"},{"content":"The life of a Dynamics CRM/Dynamics 365 for Customer Engagement (CRM/D365CE) professional is one of continual learning across many different technology areas within the core \u0026ldquo;stack\u0026rdquo; of the Business Applications platform. Microsoft has clarified this in no uncertain terms recently via the launch of the Power Platform offering, making it clear that cross-skilling across the various services associated with the Common Data Service is no longer an optional requirement, should you wish to build out a comprehensive business solution. I would not be surprised in the slightest if we find ourselves in a situation where the standard SSRS, Chart and Dashboarding options available within CRM/D365CE become deprecated soon, and Power BI becomes the preferred option for any reporting requirements involving the application.
With this in mind, knowledge of Power BI becomes a critical requirement when developing and managing these applications, even more so when you consider how it is undoubtedly a core part of Microsoft\u0026rsquo;s product lineup; epitomised most clearly by the release of the Microsoft Certified Solutions Associate (MCSA) certification in BI Reporting earlier this year.\nI have been doing a lot of hands-on and strategic work with Power BI this past year, a product for which I have a lot of affection and which has numerous business uses. As a consequence, I am in the process of going through the motions to attain the BI Reporting MCSA, having recently passed Exam 70-779: Analyzing and Visualizing Data with Microsoft Excel. As part of this week\u0026rsquo;s post, I wanted to share some general, non-NDA breaching advice for those who are contemplating going for the exam. I hope you find it useful 🙂\nPower BI Experience is Relevant For an exam focused purely on the Excel side of things, there are a lot of areas tested that have a significant amount of crossover with Power BI, such as:\nConnecting to data sources via Power Query in Excel, an experience which is an almost carbon copy of working with Power Query within Power BI. Although working with the Excel Data Model, for me at least, represented a significant learning curve when revising, it does have a lot of cross-functionality with Power BI, specifically when it comes to how DAX fits into the whole equation. Power BI is even a tested component for this exam. You should, therefore, expect to know how to upload Excel workbooks/Data Models into Power BI and be thoroughly familiar with the Power BI Publisher for Excel. Any previous knowledge around working with Power BI is going to give you a jet boost when it comes to tackling this exam, but do not take this for granted. There are some significant differences between both sets of products (epitomised by the fact that Excel and Power BI, in theory, address two distinctly different business requirements), and this is something that you will need to understand and potentially identify during the exam. But specific, detailed knowledge of some of the inner workings of Power BI is not going to be a disadvantage to you.\nLearn *a lot* of DAX DAX, or Data Analysis Expressions, is so important for this exam, and for 70-778 as well. While it will not necessarily be required for you to memorise every single DAX expression available to pass the exam (although you are welcome to try!), you should be in a position to recognise the structure of the more common DAX functions available. Your ideal DAX study areas before the exam may include, but are not limited to:\nAll aggregation/statistical functions - SUM, COUNT, MIN, MAX etc. Logical functions Date/Time functions Table object functions - CALENDAR, CALCULATE, FILTER, RELATED etc. A focus, in particular, should be driven towards the syntax of these functions, to the extent that you can memorise example usage scenarios involving them.\nGet the exam book As with all exams, Microsoft has released an accompanying book that is a handy revision guide and reference point for self-study.
On balance, I feel this is one of the better exam reference books that I have come across, but beware of the occasional errata and, given the frequency of changes these days thanks to the regular Office 365 release cycle, be sure to supplement your learning with some proper online cross-checking.\nSet up a dedicated lab environment This task can be accomplished alongside working through the exercises in the exam book referenced above but, as with any exam, hands-on experience using the product is the best way of getting a passing grade. Download a copy of SQL Server Developer edition, restore one of the sample databases made available by Microsoft, get a copy of Excel 2016 and - hey presto! - you now have a working lab environment \u0026amp; dataset that you can interact with to your heart\u0026rsquo;s content.\nPivot yourself toward greater Excel knowledge Almost a quarter of the exam tests candidates on the broad range of PivotTable/PivotChart visualisations made available within Excel. With this in mind, some very detailed, specific knowledge is required in each of the following areas to stand a good chance of a passing grade:\nPivotTables: How they are structured, how to modify the displaying of Totals/Subtotals, changing their layout configuration, filtering (Slicers, Timelines etc.), auto-refresh options, aggregations/summarising data and the difference between Implicit and Explicit Measures. PivotCharts: What chart types are available in previous and newer versions of Excel (and which aren\u0026rsquo;t), understanding the ideal usage scenario for each chart type, understanding the different variants available for each chart type, understanding the structure of a chart (Legend, Axis etc.), chart filtering and formatting options available for each chart. Check out the relevant edX course As a revision tool, I found the following edX course of great assistance and free of charge to work through:\nAnalyzing and Visualizing Data with Excel\nThe course syllabus maps closely to the skills measured for the exam and represents a useful visual tool for self-study or as a means of quickly filling any knowledge gaps.\nConclusions or Wot I Think It is always essential, within the IT space, to keep one eye over the garden fence to see what is happening in related technology areas. This simple action of \u0026ldquo;keeping up with the Joneses\u0026rdquo; helps to ensure no surprises down the road and to ensure that you can keep your skills relevant for the here and now. In case you didn\u0026rsquo;t realise already, Power BI is very much one of those things that traditional reporting analysts and CRM/D365CE professionals should be contemplating working with, now or in the immediate future. As well as being a dream to work with, it affords you the best opportunity to implement a reporting solution that will both excite and innovate end users. For me, it has allowed me to develop client relationships further once the solution is in place, as users increasingly ask us to bring other data sources into the solution. Whereas typically, this may have resulted in a protracted and costly development cycle to implement, Power BI takes all the hassle out of this and lets us instead focus on creating the most engaging range of visualisations possible for the data in question.
I would strongly urge any CRM/D365CE professional to start learning about Power BI when they can and, as the next logical step, look to go for the BI Reporting MCSA.\n","date":"2018-11-04T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/my-thoughts-on-exam-70-779-analyzing-and-visualizing-data-with-microsoft-excel/","title":"My thoughts on Exam 70-779: Analyzing and Visualizing Data with Microsoft Excel"},{"content":"The whole concept of audio conferencing - the ability for a diverse range of participants to dial into a central location for a meeting - is hardly a new concept for the 21st century. Its prevalence, however, has undoubtedly grown sharply in the last 15-20 years; to the point now where, to use an analogy, it very much feels like a DVD when compared to full video conferencing, à la Blu-Ray. When you also consider the widening proliferation of remote workers, globalisation and the meteoric rise of cloud computing, businesses suddenly find themselves having to find answers to the following questions:\nHow can I enable my colleagues to straightforwardly and dynamically collaborate across vast distances? How do I address the challenges of implementing a scalable solution that meets any regulatory or contractual requirements for my organisation? What is the most cost-effective route to ensuring I can have a genuinely international audio-conferencing experience? How do I identify a solution that users can easily adopt, without introducing complex training requirements? These questions are just a flavour of the sorts of things that organisations should be thinking about when identifying a suitable audio conferencing solution. And there are a lot of great products on the market that address these needs - GoToMeeting or join.me represent natural choices for specific scenarios. But to provide a genuinely unified experience for existing IT deployments that have a reliance on Skype for Business/Microsoft Teams, the audio conferencing add-on for Office 365 (also referred to as Skype for Business Online Audio Conferencing) may represent a more prudent choice. It ticks all of the boxes for the questions above, ensuring that users can continue utilising other tools they know and use every day - such as Microsoft Outlook and Office 365. Admittedly, though, the solution is undoubtedly more geared up for Enterprise deployments as opposed to utilisation by SMBs. It may, therefore, become too unwieldy a solution in the hands of smaller organisations.\nI was recently involved in implementing an audio conferencing solution for Skype for Business Online, to satisfy the requirement for international dialling alluded to earlier. Having attended many audio conferences that utilise the service previously, I was familiar with the following experience when dialling in and - perhaps naively - assumed this to be the default configuration:\nUser dials in and enters the meeting ID number. An audio cue is played for the conference leader, asking them to enter their leader code (this is optional). The user is asked to record their name and then press * The meeting then starts automatically. For most internal audio conferences (and even external ones), this process works well, mainly when, for example, the person organising the meeting is doing so on behalf of someone else, and is unlikely to be dialling in themselves. However, I was surprised to learn that the actual, default experience is a little different:\nUser dials in and enters the meeting ID number. 
An audio cue is played for the conference leader, asking them to enter their leader code (this is optional). The user is asked to record their name and then press * If the leader has not yet dialled in, all other attendees sit in the lobby. The call does not start until the leader joins. The issue does not relate to how the meeting organiser has configured their meeting in Outlook, regardless of which setting is chosen in Outlook in the These people don\u0026rsquo;t have to wait in the lobby drop-down box.\nAfter some fruitless searching online, I eventually came across the following article, which clarified things for me:\nStart an Audio Conference over the phone without a PIN in Skype for Business Online\nAs a tenant-level configuration, therefore, there is a two-stage process involved to get this reconfigured for existing Skype for Business Online Audio Conferencing deployments:\nSet the AllowPSTNOnlyMeetingsByDefault setting to true on the tenant via a PowerShell cmdlet. Configure the AllowPSTNONLYMeetingsByDefault setting to true for every user setup for Audio Conferencing, either within the Skype for Business Administration Centre or via PowerShell. The second process could be incredibly long-winded to achieve via the Administration Centre route, as you have to go into each user\u0026rsquo;s audio conferencing settings and toggle the appropriate control, as indicated below:\nFor a significantly larger deployment, this could easily result in carpal tunnel syndrome and loss of sanity 😧. Fortunately, PowerShell can take away some of the woes involved as part of this. By logging into the Skype for Business Online administration centre via this route, it is possible to both enable the AllowPSTNOnlyMeetingsByDefault setting on a tenant level and also for all users who currently have the setting disabled. The complete script to carry out these steps is below:\n#Standard login for S4B Online Import-Module SkypeOnlineConnector $userCredential = Get-Credential $sfbSession = New-CsOnlineSession -Credential $userCredential Import-PSSession $sfbSession #Login script for MFA enabled accounts Import-Module SkypeOnlineConnector $sfbSession = New-CsOnlineSession Import-PSSession $sfbSession #Enable dial in without leader at the tenant level - will only apply to new users moving forward Set-CsOnlineDialInConferencingTenantSettings -AllowPSTNOnlyMeetingsByDefault $true #Get all current PSTN users $userIDs = Get-CsOnlineDialInConferencingUserInfo | Where Provider -eq \u0026#34;Microsoft\u0026#34; | Select DisplayName, Identity $userIDs | ForEach-Object -Process { #Then, check whether the AllowPstnOnlyMeetings is false $identity = $_.Identity $user = Get-CsOnlineDialInConferencingUser -Identity $identity.RawIdentity Write-Host $_.DisplayName\u0026#34;AllowPstnOnlyMeetings value equals\u0026#34;$user.AllowPstnOnlyMeetings if ($user.AllowPstnOnlyMeetings -eq $false) { #If so, then enable Set-CsOnlineDialInConferencingUser -Identity $identity.RawIdentity -AllowPSTNOnlyMeetings $true Write-Host $_.DisplayName\u0026#34;AllowPstnOnlyMeetings value changed to true!\u0026#34; } else { Write-Host \u0026#34;No action required for user account\u0026#34;$_.DisplayName } } Some notes/comments before you execute it in your environment:\nYou should comment out the appropriate authentication snippet that is not appropriate for your situation, depending on whether you have enabled Multi-Factor Authentication for your user account. 
Somewhat annoyingly, there is no way (I found) to extract a unique enough identifier that can be used with the Set-CsOnlineDialInConferencingUser cmdlet when obtaining the details of the user via the Get-CsOnlineDialInConferencingUser cmdlet. This is why the script first retrieves the complete LDAP string using the Get-CsOnlineDialInConferencingUserInfo. Convoluted I know, but it ensures that the script can work correctly and avoids any issues that may arise from duplicate Display Names on the Office 365 tenant. All being well, with very little modification, the above code can be utilised to enforce a setting across the board or for a specific subset of users, if required. It does seem strange that the option is turned off by default, but there are understandable reasons why it may be desirable to curate the whole meeting experience for attendees. If you are considering rolling out Skype for Business Online Audio Conferencing in your Office 365 tenant in the near future, then this represents one of many considerations that you will have to take when it comes to the deployment. You should, therefore, think carefully and consult with your end-users to determine what their preferred setting is; you can then choose to enable/disable the AllowPSTNOnlyMeetingsByDefault setting accordingly.\n","date":"2018-10-28T00:00:00Z","image":"/images/MSTeams-FI.jpg","permalink":"/configuring-pin-less-entry-for-skype-for-business-online-audio-conferencing/","title":"Configuring PIN-less Entry for Skype for Business Online Audio Conferencing"},{"content":"Towards the back end of last year, I discovered the joys and simplicity of Visual Studio Team Services (VSTS)/Azure DevOps. Regardless of what type of development workload that you face, the service provides a whole range of features that can speed up development, automate important build/release tasks and also assist with any testing workloads that you may have. Microsoft has devoted a lot of attention to ensuring that their traditional development tools/languages and new ones, such as Azure Data Factory V2, are fully integrated with VSTS/Azure DevOps. And, even if you do find yourself fitting firmly outside the Microsoft ecosystem, there are a whole range of different connectors to enable you to, for example, leverage Jenkins automation tasks for an Amazon Web Services (AWS) deployment. As with a lot of things to do with Microsoft today, you could not have predicted such extensive third-party support for a core Microsoft application 10 years ago. 🙂\nAutomated release deployments are perhaps the most useful feature that is leverageable as part of VSTS/Azure DevOps. These address several business concerns that apply to organisations of any size:\nRemoves human intervention as part of repeatable business processes, reducing the risk of errors and streamlining internal processes. Allows for clearly defined, auditable approval cycles for release approvals. Provides developers with the flexibility for structuring deployments based on any predefined number of release environments. You may be questioning at this stage just how complicated implementing such processes are, versus the expected benefits they can deliver. Fortunately, when it comes to Azure App Service deployments at least, there are predefined templates provided that should be suitable for most basic deployment scenarios:\nThis template will implement a single task to deploy to an Azure App Service resource. 
All you need to do is populate the required details for your subscription, App Service name etc. and you are ready to go! Optionally, if you are working with a Standard App Service Plan or above, you can take advantage of the slot functionality to stage your deployments before impacting on any live instance. This option is possible to set up by specifying the name of the slot you wish to deploy to as part of Azure App Service Deploy task and then adding on an Azure App Service Manage task to carry out the swap slot - with no coding required at any stage:\nThere may also be some additional options that need configuring as part of the Azure App Service Deploy task:\nPublish using Web Deploy: I would recommend always leaving this enabled when deploying to Azure Web Apps, given the functionality afforded to us via the Kudu Web API. Remove additional files at destination: Potentially not a safe option should you have a common requirement to interact with your App Service folder manually, in most cases, leaving this enabled ensures that your target environment remains consistent with your source project. Exclude files from the App_Data folder: Depending on what your web application is doing, it\u0026rsquo;s possible that some required data will exist in this folder that assists your website. You can prevent these files from being removed in tandem with the previous setting by enabling this. Take App Offline: Slightly misleading in that, instead of stopping your App Service, it instead places a temporary file in the root directory that tells all website visitors that the application is offline. This temporary page will use one of the default templates that Azure provides for App Service error messages. This last setting, in particular, can cause some problems, particularly when it comes to working with .NET Core web applications:\nNote also that the problem does not occur when working with a standard .NET Framework MVC application. Bearing this fact in mind, therefore, suggests that the problem is a specific symptom relating to .NET Core. It also occurs when utilising slot deployments, as indicated above.\nTo get around this problem, we must address the issue that the Error Code points us toward - namely, the fact that web application files within the target location cannot be appropriately accessed/overwritten, due to being actively used by the application in question. The cleanest way of fixing this is to take your application entirely offline by stopping the App Service instance, with the apparent trade-off being downtime for your application. This scenario is where the slot deployment functionality goes from being an optional, yet useful, requirement to a wholly mandatory one. With this enabled (and the matching credit card limit to accommodate), it is possible to implement the following deployment process:\nCreate a staging slot on Azure for your App Service, with a name of your choosing Structure a deployment task that carries out the following steps, in order: Stop the staging slot on your chosen App Service instance. Deploy your web application to the staging slot. Start the staging slot on your chosen App Service instance. (Optionally) Execute any required steps/logic that is necessary for the application (e.g. compile MVC application views, execute a WebJob etc.) Perform a Swap Slot, promoting the staging slot to your live, production instance. 
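For reference, the release steps above can also be sketched out in Azure PowerShell. This is a rough, illustrative equivalent only - the pipeline tasks described in this post achieve the same outcome without any code - and it assumes the AzureRM module, with placeholder resource group, app and slot names:

# Rough Azure PowerShell equivalent of the release steps above (AzureRM module assumed; names are placeholders).
$rgName = "MyResourceGroup"
$appName = "MyWebApp"
$slotName = "staging"

# 1. Stop the staging slot before deploying to it.
Stop-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -Slot $slotName

# 2. Deploy the web application to the staging slot (handled by the Azure App Service Deploy task in the pipeline).

# 3. Start the staging slot again once the deployment has completed.
Start-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -Slot $slotName

# 4. Swap the staging slot into the live, production instance.
Switch-AzureRmWebAppSlot -ResourceGroupName $rgName -Name $appName -SourceSlotName $slotName -DestinationSlotName "production"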
The below screenshot, derived from VSTS/Azure DevOps, shows how this task list should be structured:\nEven with this in place, I would still recommend that you look at taking your web application \u0026ldquo;offline\u0026rdquo; in the minimal sense of the word. The Take App Offline option is by far the easiest way of achieving this; for more tailored options, you would need to look at a specific landing page that redirects users during any update/maintenance cycle, which provides the necessary notification to any users.\nSetting up your first deployment pipeline(s) can throw up all sorts of issues that lead you down the rabbit hole. While this can be discouraging and lead to wasted time, the end result after any perseverance is a highly scalable solution that can avoid many sleepless nights when it comes to managing application updates. And, as this example so neatly demonstrates, solutions to these problems often do not require a detailed coding route to implement - merely some drag \u0026amp; drop wizardry and some fiddling about with deployment task settings. I don\u0026rsquo;t know about you, but I am pretty happy with that state of affairs. 🙂\n","date":"2018-10-21T00:00:00Z","image":"/images/Azure-Pipelines-e1557238792964.png","permalink":"/error_file_in_use-when-deploying-net-core-app-to-azure-app-service-vsts-azure-devops/","title":"ERROR_FILE_IN_USE when Deploying .NET Core App to Azure App Service (VSTS/Azure DevOps)"},{"content":"The importance of segregated deployment environments for any IT application cannot be overstated. Even if this only comprises a single test/User Acceptance Testing (UAT) environment, there is a plethora of benefits involved, which should easily outweigh any administrative or time effort required:\nThey provide a safe \u0026ldquo;sandbox\u0026rdquo; for any functionality or developmental changes to be carried out in sequence and to verify the intended outcome. They enable administrators to carry out the above within regular business hours, without risk of disruption to live data or processes. For systems that experience frequent updates/upgrades, these types of environments allow for major releases to be tested in advance of a production environment upgrade. When it comes to Dynamics 365 Customer Engagement (D365CE), Microsoft has more recently realised the importance that dedicated test environments have for their online customers. That\u0026rsquo;s why, with the changes announced during the transition away from Dynamics CRM, a free Sandbox instance was granted with every subscription. The cost involved in provisioning these can start to add up significantly over time, so this change was and remains most welcome; mainly when it means that any excuse for not carrying out the bullet point highlighted above in bold immediately gets thrown off the table.\nAnyone who is presently working in the D365CE space will be intently aware of the impending forced upgrade to version 9 of the application that must take place within the next few months. Although this will doubtless cause problems for some organisations, it is understandable why Microsoft is enforcing this so stringently and - if managed correctly - allows for a more seamless update experience in the future, getting new features into the hands of customers much faster. This change can only be a good thing, as I have argued previously on the blog.
As a consequence, many businesses may currently have a version 9 Sandbox instance within their Office 365 tenant which they are using to carry out the types of tests I have referenced already, which typically involves testing custom developed or ISV Managed/Unmanaged Solutions. One immediate issue you may find if you are working with solutions containing the Marketing List (list) entity is that your Solution suddenly stops importing successfully, with the following error messages:\nThe problem relates to changes to the Marketing List entity in version 9 of the application, specifically the Entity property IsVisibleInMobileClient. This assertion is confirmed by comparing the Value and CanBeChanged properties of these settings in both an 8.2 and 9.0 instance, using the Metadata Browser tool included in the SDK:\nMicrosoft has published a support article which lists the steps involved to get this resolved for every new solution file that you find yourself having to ship between 8.2 and 9.0 environments. Be careful when following the suggested workaround steps, as modifications to the Solution file should always be considered a last resort means of fixing issues and can end up causing you no end of hassle if applied incorrectly. There is also no way of fixing this issue at source as well, all thanks to the CanBeChanged property being set to false (this may be a possible route of resolution if you are having this same problem with another Entity that has this value set to true, such as a custom entity). Although Microsoft promises a fix for this issue, I wouldn\u0026rsquo;t necessarily expect that 8.2 environments will be specially patched to resolve this particular problem. Instead, I would take the fix to mean the forced upgrade of all 8.2 environments to version 9.0/9.1 within the next few months. Indeed, this would seem to be the most business-sensible decision rather than prioritising a specific fix for an issue that is only going to affect a small number of deployments.\n","date":"2018-10-14T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/solution-import-errors-on-marketing-list-entity-dynamics-365-customer-engagement-version-9-0/","title":"Solution Import Errors on Marketing List Entity (Dynamics 365 Customer Engagement Version 9.0+)"},{"content":"Earlier this month, a colleague escalated an issue to me involving Dynamics CRM/365 Customer Engagement (CRM/D365CE), specifically relating to email tracking. This feature is by far one of the most useful and unwieldy within the application, if not configured correctly. In days of yore, the setup steps involved could be tedious to implement, mainly if you were operating within the confines of a hybrid environment (for example, CRM 2015 on-premises and Exchange Server Online). Or, you could have been one of the handful of unfortunate individuals on the planet today that had to rely on the abomination that is the Email Router. We can be thankful today that Server-Side Synchronization is the sole method for pulling in emails from any manner of SMTP or POP3 mail servers; although note that only Exchange mailboxes support Appointment, Contact \u0026amp; Task synchronisation. Lucky though we are to be living in more enlightened times, careful attention and management of Server-Side Synchronization deployments is still an ongoing requirement. 
This is primarily to ensure all mailboxes operate as intended and - most critically - to ensure that only the most relevant emails are tagged back into the application, and not instead a deluge of unrelated correspondence.\nGoing back to the issue mentioned at the start of this post - the user in question was having a problem with certain emails not synchronising automatically back into the application, even though the emails in question had a corresponding Contact record within CRM/D365CE. We were also able to observe that other emails sent from the user to the Contact record(s) in question were being tagged back without issue. When first diagnosing problems like this, you could be forgiven for not immediately making a beeline to the user\u0026rsquo;s Mailbox record within the application to verify that:\nThe Mailbox is enabled for Server-Side Synchronization for Incoming/Outgoing Email. No processing errors are occurring that could be preventing emails from being successfully handled by the application. Although these are not likely (more often than not) to be the cause of any mail flow issues, it is worthwhile checking the obvious first, rather than potentially overcomplicating a technical issue at the first juncture. 🙂\nAs we can see in this example, there are no problems with the over-arching Server-Side Synchronization configuration, nor are there any problems with the individual mailbox. It is at this point that we must refer to the following screen that all users in the application have access to via the gear icon at the top of the screen - the User Options screen:\nThe Track option allows users to specify how CRM/D365CE handles automatic email tracking, based on four options:\nAll Email Messages: Does exactly what it says on the tin, and is not recommended to leave on as default, for the reasons I alluded to earlier. Email messages in response to Dynamics 365 Email: Only emails sent from within Dynamics 365 (or tracked accordingly via Outlook) will be stored in the application, alongside any replies that are received. Email messages from Dynamics 365 Leads, Contacts and Accounts: Only emails which match back to the record types listed, based on email address, will be stored within the application. Email messages from Dynamics 365 records that are email enabled: The same as the previous option, but expanded out to include all record types that are configured with the Sending email\u0026hellip; option on the Entity configuration page. For the user who was having email tracking issues, the default setting specified was Email messages in response to Dynamics 365 Email. So, to resolve the issue, it is necessary for the user to update their settings to either the 3rd or 4th option.\nAny situation that involves detailed, technical configuration by end-users is generally one that I like to avoid - for a few simple, business-relevant reasons:\nIT/Technical teams should be the ones making configuration changes to applications, not end users who have not had training or experience on the steps they are being asked to follow. End-users are busy, and it is always essential that we are conscious of their time, making any interaction short and positive as opposed to long and arduous. If the above instructions are relayed over the telephone, as opposed to in-person, then the propensity for mistakes to occur rises significantly.
However, from what we have seen so far, it will be necessary to access the application as the user to make the change - either by taking control of their session or by (perish the thought) relaying user credentials to enable someone in IT support to make the configuration change. Don\u0026rsquo;t EVER choose this option, by the way! Fortunately, there is a better way of updating user profile settings, using a tool whose importance has been highlighted in no uncertain terms previously on the blog - the XrmToolbox. One of the handiest out of the box tools that this provides is the User Settings Utility which\u0026hellip;well\u0026hellip;see for yourself:\nAs a consequence, application administrators can \u0026ldquo;magically\u0026rdquo; modify any of the settings contained within the User Options page, including - as we can see below - the Track email messages setting:\nWith a few clicks, the appropriate changes can be applied not just to a single user, but to everyone within the application - avoiding any potential end-user confusion and making our jobs easier. This simple fact is another reason why you should immediately launch the XrmToolBox whenever you find yourself with a CRM/D365CE issue that stumps you and why the community tools available for the application are top-notch.\n","date":"2018-10-07T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/incoming-emails-not-synchronizing-in-dynamics-crm-dynamics-365-customer-engagement/","title":"Incoming Emails Not Synchronizing in Dynamics CRM/Dynamics 365 Customer Engagement"},{"content":"The introduction of Azure Data Factory V2 represents the most opportune moment for data integration specialists to start investing their time in the product. Version 1 (V1) of the tool, which I started taking a look at last year, missed a lot of critical functionality; things that - in most typical circumstances - I could do in a matter of minutes via a SQL Server Integration Services (SSIS) DTSX package. The product had, to quote some specific examples:\nNo support for control flow logic (foreach loops, if statements etc.) Support for only \u0026ldquo;basic\u0026rdquo; data movement activities, with a minimal capability to perform or call data transformation activities from within SQL Server. Some support for the deployment of DTSX packages, but with incredibly complex deployment options. Little or no support for Integrated Development Environments (IDEs), such as Visual Studio, or other typical DevOps scenarios. In what seems like a short space of time, the product has come on leaps and bounds to address these limitations:\nThe final one is a particularly nice touch, and means that you can straightforwardly incorporate Azure Data Factory V2 as part of your DevOps strategy with minimal effort - an ARM Resource Template deployment, containing all of your data factory components, will get your resources deployed out to new environments with ease. What\u0026rsquo;s even better is that this deployment template is intelligent enough not to recreate existing resources and only update Data Factory resources that have changed. Very nice.\nAlthough a lot is provided for by Azure Data Factory V2 to assist with a typical DevOps cycle, there is one thing that the tool does not account for satisfactorily. A critical aspect as part of any Azure Data Factory V2 deployment is the implementation of Triggers. These define the circumstances under which your pipelines will execute, typically either via an external event or based on a pre-defined schedule.
Once activated, they effectively enter a \u0026ldquo;read-only\u0026rdquo; state, meaning that any changes made to them via a Resource Group Deployment will be blocked and the deployment will fail - as we can see below when running the New-AzureRmResourceGroupDeployment cmdlet directly from PowerShell:\nThe solution is simple - stop the Trigger programmatically as part of your DevOps execution cycle via the handy Stop-AzureRmDataFactoryV2Trigger. This step involves just a single line PowerShell Cmdlet that is callable from an Azure PowerShell task. But what happens if you are deploying your Azure Data Factory V2 template for the first time?\nThe best (and only) resolution to get around this little niggle will be to construct a script that performs the appropriate checks on whether a Trigger exists to stop and skip over this step if it doesn\u0026rsquo;t yet exist. The following parameterised PowerShell script file will achieve these requirements by attempting to stop the Trigger called \u0026lsquo;MyDataFactoryTrigger\u0026rsquo;:\nparam($rgName, $dfName) Try { Write-Host \u0026#34;Attempting to stop MyDataFactoryTrigger Data Factory Trigger...\u0026#34; Get-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName \u0026#39;MyDataFactoryTrigger\u0026#39; -ErrorAction Stop Stop-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName \u0026#39;MyDataFactoryTrigger\u0026#39; -Force Write-Host -ForegroundColor Green \u0026#34;Trigger stopped successfully!\u0026#34; } Catch { $errorMessage = $_.Exception.Message if($errorMessage -like \u0026#39;*NotFound*\u0026#39;) { Write-Host -ForegroundColor Yellow \u0026#34;Data Factory Trigger does not exist, probably because the script is being executed for the first time. Skipping...\u0026#34; } else { throw \u0026#34;An error occured whilst retrieving the MyDataFactoryTrigger trigger.\u0026#34; } } Write-Host \u0026#34;Script has finished executing.\u0026#34; To use successfully within Azure DevOps, be sure to provide values for the parameters in the Script Arguments field:\nWith some nifty copy + paste action, you can accommodate the stopping of multiple Triggers as well - although if you have more than 3-4, then it may be more sensible to perform some iteration involving an array containing all of your Triggers, passed at runtime.\nFor completeness, you will also want to ensure that you restart the affected Triggers after any ARM Template deployment. The following PowerShell script will achieve this outcome:\nparam($rgName, $dfName) Try { Write-Host \u0026#34;Attempting to start MyDataFactoryTrigger Data Factory Trigger...\u0026#34; Start-AzureRmDataFactoryV2Trigger -ResourceGroupName $rgName -DataFactoryName $dfName -TriggerName \u0026#39;MyDataFactoryTrigger\u0026#39; -Force -ErrorAction Stop Write-Host -ForegroundColor Green \u0026#34;Trigger started successfully!\u0026#34; } Catch { throw \u0026#34;An error occured whilst starting the MyDataFactoryTrigger trigger.\u0026#34; } Write-Host \u0026#34;Script has finished executing.\u0026#34; The Azure Data Factory V2 offering has no doubt come leaps and bounds in a short space of time\u0026hellip; \u0026hellip;but you can\u0026rsquo;t shake the feeling that there is a lot that still needs to be done. The current release, granted, feels very stable and production-ready, but I think there is a whole range of enhancements that could be introduced to allow better feature parity when compared with SSIS DTSX packages. 
With this in place, and when taking into account the very significant cost differences between both offerings, I think it would make Azure Data Factory V2 a no-brainer option for almost every data integration scenario. The future looks very bright indeed 🙂\n","date":"2018-09-30T00:00:00Z","image":"/images/ADF-FI.png","permalink":"/managing-azure-data-factory-triggers-during-an-azure-devops-deployment/","title":"Managing Azure Data Factory Triggers During an Azure DevOps Deployment"},{"content":"The very nature of how businesses or organisations operate means that the sheer volume of sensitive or confidential data that can grow over time presents a genuine challenge from a management and security point of view. Tools and applications like cloud storage, email and other information storage services can do great things; but on occasions where these are abused, such as when an employee emails out a list of business contacts to a personal email address, the penalties cannot just be a loss of reputation. Even more so with the introduction of GDPR earlier this year, there is now a clear and present danger that such actions could result in unwelcome scrutiny and also a fine for larger organisations for simply not putting the appropriate technical safeguards in place. Being able to proactively - and straightforwardly - identify \u0026amp; classify information types and enforce some level of control over their dissemination, while not a silver bullet in any respect, does at least demonstrate an adherence to the \u0026ldquo;appropriate technical controls\u0026rdquo; principle that GDPR in particular likes to focus on.\nAzure Information Protection (AIP) seeks to address these challenges in the modern era by providing system administrators with a toolbox to enforce good internal governance and controls over documents, based on a clear classification system. The solution integrates nicely with Azure and also any on-premise environment, meaning that you don\u0026rsquo;t necessarily have to migrate your existing workloads into Office 365 to take full advantage of the service. It also offers:\nUsers the ability to track any sent document(s), find out where (and when) they have been read and revoke access at any time. Full integration with the current range of Office 365 desktop clients. The capability to protect non-Office documents, such as PDF\u0026rsquo;s, and requiring users to open them via a dedicated client, which checks their relevant permissions before granting access. Automation capabilities via PowerShell to bulk label existing document stores, based on parameters such as files names or contents of the data (for example, mark as highly sensitive any document which contains a National Insurance Number). Overall, I have found AIP to be a sensible and highly intuitive solution, but one that requires careful planning and configuration to realise its benefits fully. Even with this taken for granted, there is no reason why any business/organisation cannot utilise the service successfully.\nHowever, if you are a small to medium size organisation, you may find that the Azure Information Protection offering has traditionally lacked some critical functionality. You can see what I mean by performing a direct comparison between two flavours of Office 365 deployments - Office Business Premium (\u0026ldquo;BizPrem\u0026rdquo;) \u0026amp; Office Professional Plus (\u0026ldquo;ProPlus\u0026rdquo;). 
For those who are unaware of the differences:\nOffice Business Premium is the version of Office apps that you can download with a\u0026hellip;you guessed it\u0026hellip;Office Business Premium subscription. This product/SKU represents the optimal choice if you are dealing with a deployment that contains less than 250 users and you want to fully utilise the whole range of features included on Office 365. Office Professional Plus is the edition bundled with the following, generally more expensive subscriptions: Office 365 Education A1* Office 365 Enterprise E3 Office 365 Government G3 Office 365 Enterprise E4 Office 365 Education A4* Office 365 Government G4 Office 365 Enterprise E5 Office 365 Education A5*For the extra price, you get a range of additional features that may be useful for large-scale IT deployments. This includes, but is not limited to, Shared Computer Activation, support for virtualised environments, group policy support and - as has traditionally been the case - an enhanced experienced while using AIP. * In fact, these subscriptions will generally be the cheapest going on Office 365, but with the very notable caveat being that you have to be a qualifying education institute to purchase them. So no cheating I\u0026rsquo;m afraid 🙂\nThe salient point is that both of these Office versions support the AIP Client, the desktop application that provides the following additional button within your Office applications:\nThe above example, taken from an Office Business Premium deployment, differs when compared to Office Professional Plus:\nAs mentioned in the introduction, the ability for users to explicitly set permissions on a per-document basis can be incredibly useful but is one that has been missing entirely from non-Office Business Premium subscriptions. This limitation means that users have lacked the ability to:\nSpecify (and override) the access permissions for the document - Viewer, Reviewer, Co-Author etc. Assign permissions to individual users, domains or distribution/security groups. Define a specified date when all access permissions will expire. You will still be able to define organisation-level policies that determine how documents can be shared, based on a user-defined label, but you lose a high degree of personal autonomy that the solution can offer users, which - arguably - can be an essential factor in ensuring the success of the AIP deployment.\nWell, the good news is, that all of this is about to change, thanks to the September 2018 General Availability wave for Azure Information Protection This \u0026ldquo;by design\u0026rdquo; behaviour has, understandably, been a source of frustration for many, but, thanks to a UserVoice suggestion, is now no longer going to be a concern:\nIn the coming AIP September GA we will update the Office client requirement with the following:\n\u0026ldquo;Office 365 with Office 2016 apps (minimum version 1805, build 9330.2078) when the user is assigned a license for Azure Rights Management (also known as Azure Information Protection for Office 365)\u0026rdquo;\nThis will allow the support of the AIP client to use protection labels in other Office subscriptions which are not ProPlus. This will require the use of Office clients which are newer then the version mentioned above and the end user should be assigned with the proper licence.\nThe introduction of this change was confirmed by the release of version 1.37.19.0 of the AIP client on Monday this week and is a much welcome new addition to the client. 
Be aware, though, of the requirement to be using the May 2018 build of Office 2016 apps to take advantage of this new functionality. Once overcome, this change suddenly makes the AIP offering a lot more powerful for existing small business users and a much easier sell for those who are contemplating adopting the product but cannot tolerate the cost burden associated with an Enterprise subscription. Microsoft is continually endeavouring to ensure a consistent \u0026ldquo;feedback loop\u0026rdquo; is provided for all users and customers to offer product improvement suggestions, and it is great to see this working in practice with AIP as our example. Now\u0026rsquo;s as good a time as any to evaluate AIP if you haven\u0026rsquo;t before.\n","date":"2018-09-23T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/the-september-2018-general-availability-release-for-the-azure-information-protection-client/","title":"The September 2018 General Availability Release for the Azure Information Protection Client"},{"content":"Is it just me or is British Summer Time (BST) AKA Daylight Saving Time (DST) an utterly pointless endeavour these days? Admittedly, on its introduction in 1916, it fulfilled a sensible objective - to provide more daylight hours during the summer. For agricultural, construction or other services that are reliant on sufficient light to carry out their work, this was a godsend. In today\u0026rsquo;s modern world, with the proliferation of electricity and lighting services in almost every corner of the UK, the whole concept now appears to be a curious relic of the western world. No major Asian, African or South American country adopts the practice and, given the increased role that these continents now play on the global stage, it wouldn\u0026rsquo;t surprise me if the whole concept becomes consigned to the scrapheap within our lifetimes.\nMy fury concerning BST surfaces thanks to my experience working with IT systems and, in particular, Microsoft Azure. Typically, any service that has a Windows OS backend involved will do a pretty good job in determining the correct locale settings that need applying, including BST/DST. These settings will generally inherit into most applications installed on the machine, including SQL Server. You can verify this at any time by running the following query, kindly demonstrated by SQL Server legend Pinal Dave, on your SQL Server instance:\nAs you can see from the underlying query, it is explicitly checking a Registry Key Value on the Windows Server where SQL Server resides - which has been set up for British (UK) locale settings. The Registry Key folder will, additionally, include information to tell the machine when to factor in BST/DST.\nThis is all well and good if we are managing dedicated, on-premise instances of SQL Server, where we have full control over our server environments. But what happens on a Single Azure SQL database within the UK South region? The above code snippet is not compatible with Azure SQL, so we have no way of finding out ourselves. We must turn to the following blog post from Microsoft to clarify things for us:\nCurrently, the default time zone on Azure SQL DB is UTC.
As you can see from the underlying query, it is explicitly checking a Registry Key Value on the Windows Server where SQL Server resides - which has been set up for British (UK) locale settings. The Registry Key folder will, additionally, include information to tell the machine when to factor in BST/DST.\nThis is all well and good if we are managing dedicated, on-premise instances of SQL Server, where we have full control over our server environments. But what happens on a Single Azure SQL database within the UK South region? The above code snippet is not compatible with Azure SQL, so we have no way of finding out ourselves. We must turn to the following blog post from Microsoft to clarify things for us:\nCurrently, the default time zone on Azure SQL DB is UTC. Unfortunately, there is not possible to change by server configuration or database configuration.\nWe can verify this by running a quick GETDATE() query and comparing it against the current time during BST:\n(@@VERSION returns the current edition/version of the SQL instance which, in this case, we can confirm is Azure SQL)\nThe result of all of this is that all date/time values in Azure SQL will be stored in UTC format, meaning that you will have to manage any conversions yourself between interfacing applications or remote instances. Fortunately, there is a way that you can resolve this issue without ever leaving Azure SQL.\nOn all versions of Azure SQL Database (and on-premise from SQL Server 2016 onwards), Microsoft provides a system view that returns all time zones that are supported for the instance. Using this view, we can determine the correct time zone name to use for BST by listing all supported time zones:\nSELECT * FROM sys.time_zone_info ORDER BY [name] As highlighted above, for BST, GMT Standard Time is our timezone of choice, and we can see the correct offset. An additional flag field is included to indicate whether it is currently applicable or not.\nWith the name value in hand, we have everything we need to start working with a T-SQL clause that I was overjoyed to discover recently - AT TIME ZONE. When included as part of selecting a date field type (datetime, datetime2 etc.), it adds the time-offset value to the end of the date value. So, with some tinkering to our earlier GETDATE() query, we get the following output:\nSELECT GETDATE() AT TIME ZONE \u0026#39;GMT Standard Time\u0026#39;, @@VERSION While this is useful, in most cases, we would want any offset to be automatically applied against our Date/Time value. With some further refinement to the query via the DATEADD function, this requirement becomes achievable, and we can also view each value separately to verify everything is working as intended:\nSELECT GETDATE() AS CurrentUTCDateTime, GETDATE() AT TIME ZONE \u0026#39;GMT Standard Time\u0026#39; AS CurrentUTCDateTimeWithGMTOffset, DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE \u0026#39;GMT Standard Time\u0026#39;), GETDATE()) AS CurrentGMTDateTime, @@VERSION AS SQLVersion Even better, the above works regardless of whether the offset is an increase or decrease to UTC. We can verify this fact by adjusting the query to instead convert into Haiti Standard Time which, at the time of writing, is currently observing DST and has a -4 hour UTC offset (i.e. is 4 hours behind UTC):
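The adjusted query would look something like the below - a sketch on my part rather than the exact code from the original screenshot, assuming Haiti Standard Time is the name listed in sys.time_zone_info:
SELECT GETDATE() AS CurrentUTCDateTime,
    GETDATE() AT TIME ZONE \u0026#39;Haiti Standard Time\u0026#39; AS CurrentUTCDateTimeWithHaitiOffset,
    -- DATEPART(tz, ...) returns a signed minute value, so the DATEADD below subtracts 4 hours while DST is in force
    DATEADD(MINUTE, DATEPART(tz, GETDATE() AT TIME ZONE \u0026#39;Haiti Standard Time\u0026#39;), GETDATE()) AS CurrentHaitiDateTime,
    @@VERSION AS SQLVersion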
So as we can see, a few lines of code mean that we can straightforwardly work with data in our desired locale. 🙂\nIt does seem somewhat counter-intuitive that a service, such as Azure SQL, hosted within a specific location, does not work in the correct date/time formats for that region. When you consider the global scale of the Azure network and the challenges this brings to the fore, the decision to revert to a single time zone for all systems does make sense and provides a level \u0026amp; consistent playing field for developers. One comment I would have is that this particular quirk does not appear to be well signposted for those who are just getting started with the service, an omission that could cause severe business or technical problems in the long term if not correctly detected. What is ultimately most fortuitous is the simple fact that no overly complex coding or client application changes are required to fix this quirk. Which is how all IT matters should be - easy and straightforward to resolve.\n","date":"2018-09-16T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/handling-british-summer-time-daylight-savings-time-in-azure-sql/","title":"Handling British Summer Time/Daylight Savings Time in Azure SQL"},{"content":"When it comes to handling large datasets in a formal, structured and highly scalable manner, nothing beats SQL Server. Having worked with the product for almost six years, I always look forward to the opportunity of putting together some SQL queries or to build out a new database. I think of it as a nice little treat, a reward in the midst of other, sometimes tedious, tasks that I have to deal with on a weekly basis. I rank knowledge of SQL Server pretty highly if you are aiming to become a serious Dynamics CRM/365 for Customer Engagement professional and I credit my experience with SQL as one of the things that helped to streamline my journey into Dynamics.\nIt may be, however, that others are not as keen on the prospect of working with SQL databases, particularly when they are unable to accommodate some of the alternative data storage mechanisms that are commonplace with application developers. A good example of this is JSON (JavaScript Object Notation), a format that is used widely today as a portable and \u0026ldquo;easy to read\u0026rdquo; mechanism of transferring data. For someone who is more used to working with SQL, getting your head around JSON can be a bit of a challenge at first and - traditionally - was not something that Microsoft factored into the design of SQL Server. A lot has changed with Microsoft - to the extent that services such as Microsoft Flow and Azure now use JSON extensively - and with SQL Server, as a whole host of related functions were added to SQL Server 2016 to provide straightforward conversions into JSON. The FOR JSON PATH clause is the primary gateway into this world and is one which I have slowly, but surely, started to get my head around. What I wanted to do in this week\u0026rsquo;s blog post was provide a \u0026ldquo;crash course\u0026rdquo; on how to use this nifty piece of functionality, hopefully with the aim of giving you everything you need to start utilising it in your environment(s).\nBefore we begin\u0026hellip; To best illustrate how the clause works in practice, it is necessary to create an appropriate set of inter-related tables within SQL Server that will be used for all examples that follow.
Here\u0026rsquo;s one I (rather unimaginatively) made earlier:\nThe code snippets to create them can be found below:\nCREATE TABLE dbo.[ParentTable] ( ParentTableID INT IDENTITY(1,1) PRIMARY KEY NOT NULL, Field1\tBIT\tNOT NULL, Field2\tCHAR(10)\tNULL, Field3\tVARCHAR(MAX) NULL ) GO CREATE TABLE dbo.[ChildTable1] ( ChildTable1ID INT IDENTITY(1,1) PRIMARY KEY NOT NULL, ParentTableID INT FOREIGN KEY REFERENCES dbo.[ParentTable](ParentTableID) NULL, Field1\tBIT\tNOT NULL, Field2\tCHAR(10)\tNULL, Field3\tVARCHAR(MAX) NULL ) CREATE TABLE dbo.[ChildTable2] ( ChildTable2ID INT IDENTITY(1,1) PRIMARY KEY NOT NULL, ParentTableID INT FOREIGN KEY REFERENCES dbo.[ParentTable](ParentTableID) NULL, Field1\tBIT\tNOT NULL, Field2\tCHAR(10)\tNULL, Field3\tVARCHAR(MAX) NULL ) GO CREATE TABLE dbo.[GrandchildTable] ( GrandchildTableID INT IDENTITY(1,1) PRIMARY KEY NOT NULL, ChildTable2ID INT FOREIGN KEY REFERENCES dbo.[ChildTable2](ChildTable2ID) NULL, Field1\tBIT\tNOT NULL, Field2\tCHAR(10)\tNULL, Field3\tVARCHAR(MAX) NULL ) GO The table structures are incredibly basic, but note, in particular, the FOREIGN KEY relationships from the 2 Child Tables to the Parent and also the additional parent/child relationship between the GrandchildTable and ChildTable2. You will also need to look at inserting some test data into the tables to properly follow through the rest of this post.\nWith our environment prepped, let\u0026rsquo;s take a look at the different ways we can convert our dataset into JSON format, with minimal effort involved. Example 1: FOR JSON AUTO If we were to look at doing a straightforward SELECT * query on all our tables, our query and expected output might look something like this:\nSELECT * FROM dbo.ParentTable AS PT INNER JOIN dbo.ChildTable1 AS CT1 ON PT.ParentTableID = CT1.ParentTableID INNER JOIN dbo.ChildTable2 AS CT2 ON PT.ParentTableID = CT2.ParentTableID INNER JOIN dbo.GrandchildTable AS GT ON CT2.ChildTable2ID = GT.ChildTable2ID Our main issue with this query is that, because of how T-SQL works, the 2 ParentTable records are returned for every child and grandchild record. For a client application, this can be somewhat cumbersome to handle. 
FOR JSON AUTO can be straightforwardly added to the end of the above query to convert the query output accordingly:\nSELECT * FROM dbo.ParentTable AS PT INNER JOIN dbo.ChildTable1 AS CT1 ON PT.ParentTableID = CT1.ParentTableID INNER JOIN dbo.ChildTable2 AS CT2 ON PT.ParentTableID = CT2.ParentTableID INNER JOIN dbo.GrandchildTable AS GT ON CT2.ChildTable2ID = GT.ChildTable2ID FOR JSON AUTO //Example output of the first 25 lines below: [{ \u0026#34;ParentTableID\u0026#34;: 1, \u0026#34;Field1\u0026#34;: true, \u0026#34;Field2\u0026#34;: \u0026#34;Test \u0026#34;, \u0026#34;Field3\u0026#34;: \u0026#34;This is a test record\u0026#34;, \u0026#34;CT1\u0026#34;: [{ \u0026#34;ChildTable1ID\u0026#34;: 1, \u0026#34;ParentTableID\u0026#34;: 1, \u0026#34;Field1\u0026#34;: true, \u0026#34;Field2\u0026#34;: \u0026#34;Test \u0026#34;, \u0026#34;Field3\u0026#34;: \u0026#34;This is a test record\u0026#34;, \u0026#34;CT2\u0026#34;: [{ \u0026#34;ChildTable2ID\u0026#34;: 1, \u0026#34;ParentTableID\u0026#34;: 1, \u0026#34;Field1\u0026#34;: false, \u0026#34;Field2\u0026#34;: \u0026#34;Test \u0026#34;, \u0026#34;Field3\u0026#34;: \u0026#34;This is a test record\u0026#34;, \u0026#34;GT\u0026#34;: [{ \u0026#34;GrandchildTableID\u0026#34;: 1, \u0026#34;ChildTable2ID\u0026#34;: 1, \u0026#34;Field1\u0026#34;: false, \u0026#34;Field2\u0026#34;: \u0026#34;Test \u0026#34;, \u0026#34;Field3\u0026#34;: \u0026#34;This is a test record\u0026#34; }] ... This output provides a much more sensible structure, with no record duplication and proper nesting of child records.\nExample 2: FOR JSON PATH, ROOT With some modifications to the above query, it is also possible to specify names for each root element. This can be tailored depending on your specific requirements. For example, let\u0026rsquo;s say we had to provide the following root element names for each of the example tables:\ndbo.ParentTable -\u0026gt; Parent dbo.ChildTable1 -\u0026gt; FirstChildTable dbo.ChildTable2 -\u0026gt; SecondChildTable The following query would achieve these requirements, in addition to adding a master root element name of MyTestSQLJSONObject:\nSELECT PT.ParentTableID AS [Parent.ParentTableID], PT.Field1 AS [Parent.Field1], PT.Field2 AS [Parent.Field2], PT.Field3 AS [Parent.Field3], ChildTable1ID AS [FirstChildTable.ChildTable1ID], CT1.Field1 AS [FirstChildTable.Field1], CT1.Field2 AS [FirstChildTable.Field2], CT1.Field3 AS [FirstChildTable.Field3], CT2.ChildTable2ID AS [SecondChildTable.ChildTable1ID], CT2.Field1 AS [SecondChildTable.Field1], CT2.Field2 AS [SecondChildTable.Field2], CT2.Field3 AS [SecondChildTable.Field3], GT.GrandchildTableID AS [GrandchildTable.GrandchildTableID], GT.Field1 AS [GrandchildTable.Field1], CT2.Field2 AS [GrandchildTable.Field2], CT2.Field3 AS [GrandchildTable.Field3] FROM dbo.ParentTable AS PT INNER JOIN dbo.ChildTable1 AS CT1 ON PT.ParentTableID = CT1.ParentTableID INNER JOIN dbo.ChildTable2 AS CT2 ON PT.ParentTableID = CT2.ParentTableID INNER JOIN dbo.GrandchildTable AS GT ON CT2.ChildTable2ID = GT.ChildTable2ID FOR JSON PATH, ROOT(\u0026#39;MyTestSQLJSONObject\u0026#39;) //Example of first 25 lines below: { \u0026#34;MyTestSQLJSONObject\u0026#34;: [{ \u0026#34;Parent\u0026#34;: { \u0026#34;ParentTableID\u0026#34;: 1, \u0026#34;Field1\u0026#34;: true, \u0026#34;Field2\u0026#34;: \u0026#34;Test \u0026#34;, \u0026#34;Field3\u0026#34;: \u0026#34;This is a test record\u0026#34; }, \u0026#34;FirstChildTable\u0026#34;: { \u0026#34;ChildTable1ID\u0026#34;: 1, \u0026#34;Field1\u0026#34;: true, \u0026#34;Field2\u0026#34;: 
\u0026#34;Test \u0026#34;, \u0026#34;Field3\u0026#34;: \u0026#34;This is a test record\u0026#34; }, \u0026#34;SecondChildTable\u0026#34;: { \u0026#34;ChildTable1ID\u0026#34;: 1, \u0026#34;Field1\u0026#34;: false, \u0026#34;Field2\u0026#34;: \u0026#34;Test \u0026#34;, \u0026#34;Field3\u0026#34;: \u0026#34;This is a test record\u0026#34; }, \u0026#34;GrandchildTable\u0026#34;: { \u0026#34;GrandchildTableID\u0026#34;: 1, \u0026#34;Field1\u0026#34;: false, \u0026#34;Field2\u0026#34;: \u0026#34;Test \u0026#34;, \u0026#34;Field3\u0026#34;: \u0026#34;This is a test record\u0026#34; ... Example 3: NULL Field Values One thing worth bearing in mind when working with the FOR JSON clause is how NULL field values behave. Take a look at the following example query output from dbo.ParentTable:\nWhen attempting to query this single record using the FOR JSON AUTO clause, we get the following output:\n//Example output below. Notice that no field name/value is returned for Field2 now [{ \u0026#34;ParentTableID\u0026#34;: 1, \u0026#34;Field1\u0026#34;: true, \u0026#34;Field3\u0026#34;: \u0026#34;This is a test record\u0026#34; }] If you have a requirement always to return a value for every NULL field, then you can use the INCLUDE_NULL_VALUES option to get around this:\nSELECT * FROM dbo.ParentTable AS PT WHERE PT.ParentTableID = 1 FOR JSON AUTO, INCLUDE_NULL_VALUES //Notice now that Field2 returns as expected [{ \u0026#34;ParentTableID\u0026#34;: 1, \u0026#34;Field1\u0026#34;: true, \u0026#34;Field2\u0026#34;: null, \u0026#34;Field3\u0026#34;: \u0026#34;This is a test record\u0026#34; }] Even with this option specified, there may still be issues with outputting this field with a value of null. In these scenarios, on a T-SQL side, you would generally use the ISNULL function to replace NULL values with an empty string. Further, because the field type in this example is a CHAR(10) data type, there are ten characters of whitespace that need removing from the output string. The following query will fix both of these problems:\nSELECT ParentTableID, Field1, LTRIM(ISNULL(Field2, \u0026#39;\u0026#39;)) AS Field2, Field3 FROM dbo.ParentTable AS PT WHERE PT.ParentTableID = 1 FOR JSON AUTO Example 4: Using sub-queries to return child records as JSON objects In most cases involving parent and child records, returning all the data as JSON may not be required. Instead, it may be necessary to return the fields from the parent record only, and all child records as a single JSON object field on the parent record. Using Subqueries, we can accommodate this scenario as follows\nSELECT PT.ParentTableID, PT.Field1, PT.Field2, PT.Field3, ( SELECT * FROM dbo.ChildTable1 AS CT1 WHERE CT1.ParentTableID = PT.ParentTableID FOR JSON AUTO ) AS ChildTable1, ( SELECT * FROM dbo.ChildTable2 AS CT2 INNER JOIN dbo.GrandchildTable AS GT ON CT2.ChildTable2ID = GT.ChildTable2ID WHERE CT2.ParentTableID = PT.ParentTableID FOR JSON AUTO ) AS ChildTable2 FROM dbo.ParentTable AS PT Example 5: Storing FOR JSON Query Output in Parameters In most scenarios, you will generally provide functions or Stored Procedures for developers to interface with when connecting to the database. It is in this situation where the ability to store the output of any query - including those that use the FOR JSON clause - within a parameter becomes very useful. 
The following snippet will store the output of a FOR JSON query into a parameter called @JSON, which is then retrievable at any time via a SELECT query:\nDECLARE @JSON NVARCHAR(MAX) = (SELECT * FROM dbo.ChildTable2 AS CT2 INNER JOIN dbo.GrandchildTable AS GT ON CT2.ChildTable2ID = GT.ChildTable2ID FOR JSON AUTO) SELECT @JSON Wrapping Up: A Few Things to Bear in Mind The FOR JSON clause is not compatible with Common Table Expressions (CTEs). When storing the output of a FOR JSON query in a parameter, you have no native capability to query the inner contents of the JSON object via a SELECT\u0026hellip;WHERE query. Because there is no way of specifying the name of the single column that returns as part of a FOR JSON query, you cannot create a view that uses this clause. I hope that this post has given you all the information and guidance needed to start working with FOR JSON clauses in your queries. The ability to straightforwardly convert SQL tabular data into a format that most developers would readily scream for is a convenient feature to have at our disposal and is indicative of how far the SQL Server product has moved forward in recent years. Hopefully, this feature will save you some hassle in the future and will mean that you can make friends with your developer colleagues 🙂\n","date":"2018-09-09T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/a-beginners-guide-to-using-for-json-in-sql-server/","title":"A Beginner's Guide to using FOR JSON in SQL Server"},{"content":"I\u0026rsquo;ve gone on record previously saying how highly I rate the Dynamics CRM/Dynamics 365 Customer Engagement (CRM/D365CE) community. Out of all the groups I have been a part of in the past, you couldn\u0026rsquo;t ask for a more diverse, highly passionate and - most importantly of all - helpful community. There are a lot of talented individuals out there who put a metric tonne of effort into providing the necessary tools, know-how and support to make our daily journey with CRM/D365CE that much easier to traverse.\nAn excellent case in point comes from the CRM DevOps extraordinaire himself, Ben Walker, who reached out to me regarding my recent post on default SiteMap areas vanishing mysteriously. Now, when you are working with tools like XrmToolbox, day in, day out, the propensity towards generating facepalm moments for not noticing obvious things can increase exponentially over time. With this in mind, Ben has very kindly demonstrated a much simpler way of restoring missing SiteMap areas and, as he very rightly points out, just how much hassle and time the XrmToolbox can save when you fully understand its capabilities. So, let\u0026rsquo;s revisit the scenario discussed in the previous post and go through the insanely better approach to solving this issue:\nDownload and run XrmToolbox and select the SiteMap Editor app, logging into your CRM/D365CE instance when prompted: After logging in, you should see a screen similar to the below:\nClick on the Load SiteMap button to load the SiteMap definition for the instance you are connected to. It should bear some resemblance to the below when loaded: Expand the Area (Settings) node. It should resemble the below (i.e. no Group for Process Center): Right-click on the Area (Settings) node and select the Add Default SiteMap Area button. Clicking this will launch the SiteMap Component Picker window, which lists all of the sitemap components included by default in the application. Scroll down, select the ProcessCenter option.
Then, after ticking the Add child components too checkbox, press OK. The SiteMap Editor will then add on the entire group node for the ProcessCenter, including all child nodes: When you are ready, click on the Update SiteMap button and wait until the changes upload/publish into the application. You can then log onto CRM/D365CE to verify that the new area has appeared successfully. I love this alternative solution for a number of reasons. There are fewer steps involved, there is no requirement to resort to messing around with the SiteMap XML files (which has its own set of potential pitfalls, if done incorrectly) and the solution very much looks and feels like a \u0026ldquo;factory reset\u0026rdquo;, without any risk of removing other custom SiteMap areas that you may have added for alternate requirements. A huge thanks to Ben for reaching out and sharing this nifty solution and for rightly demonstrating how fantastic the CRM/D365CE community is 🙂\n","date":"2018-09-02T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/the-mystery-of-the-missing-workflow-sitemap-area-revisited-dynamics-365-customer-engagement/","title":"The Mystery of the Missing Workflow Sitemap Area Revisited (Dynamics 365 Customer Engagement)"},{"content":"A vital part of any DevOps automation activity is to facilitate automatic builds of code projects on regular cycles. In larger teams, this becomes particularly desirable for a multitude of reasons:\nProvides a means of ensuring that builds do not contain any glaring code errors that prevent a successful compile from taking place. Enables builds to be triggered in a central, \u0026ldquo;master\u0026rdquo; location, that all developers are regularly shipping code to. When incorporated as part of other steps during the build stage, other automation tasks can be bolted on straightforwardly - such as the running of Unit Tests and deployment of resources to development environment(s). The great news is that, when working with either Visual Studio Team Services or Team Foundation Server (VSTS/TFS), the process of setting up the first automated build definition of your project is straightforward. All steps can be completed directly within the GUI interface and - although some of the detailed configuration settings can be a little daunting when reviewing them for the first time - the drag and drop interface means that you can quickly build consistent definitions that are easy to understand at a glance.\nOne such detailed configuration setting relates to your choice of Build and Release Agent. To provide the ability to carry out automated builds (and releases), VSTS/TFS requires a dedicated agent machine designated that can be used to execute all required tasks on. There are two flavours of Build and Release Agents:\nMicrosoft Hosted: Fairly self-explanatory, this is the most logical choice if your requirements are fairly standard - for example, a simple build/release definition for an MVC ASP.NET application. Microsoft provides a range of different Build and Release Agents, covering different operating system vendors and versions of Visual Studio. Self-Hosted: In some cases, you may require access to highly bespoke modules or third-party applications to ensure that your build/release definitions complete successfully. A good example may be a non-Microsoft PowerShell cmdlet library. Or, it could be that you have strict business requirements around the storage of deployment resources. This is where Self-Hosted agents come into play. 
By installing the latest version of the Agent onto a computer of your choice - Windows, macOS or Linux - you can then use this machine as part of your builds/releases within both VSTS \u0026amp; TFS. You can also take this further by setting up as many different Agent machines as you like and then group these into a \u0026ldquo;pool\u0026rdquo;, thereby allowing concurrent jobs and enhanced scalability. The necessary trade-off when using Self-Hosted agents is that you must manage and maintain the machine yourself - for example, you will need to install a valid version of Visual Studio and SQL Server Data Tools if you wish to build SQL Server Database projects. What\u0026rsquo;s more, if issues start to occur, you are on your own (to a large extent) when diagnosing and resolving them. One such problem you may find is with permissions on the build agent, with variants of the following error that may crop up from time to time during your builds:\nThe error will most likely make an appearance if your Build and Release Agent goes offline or a build is interrupted due to an issue on the machine itself, and where specific files have been created mid-flight within the VSTS directory. When VSTS / TFS then re-attempts a new build and to write to/recreate the files that already exist, it fails, and the above error is displayed. I have observed that, even if the execution account on the Build Agent machine has sufficient privileges to overwrite files in the directory, you will still run into this issue. The best resolution I have found - in all cases to date - is to log in to the agent machine manually, navigate to the affected directory/file (in this example, C:\\VSTS\\_work\\SourceRootMapping\\5dc5253c-785c-4de1-b722-e936d359879c\\13\\SourceFolder.json) and delete the file/folder in question. Removing the offending items will effectively \u0026ldquo;clean slate\u0026rdquo; the next Build definition execution, which should then complete without issue.\nWe are regularly told these days of the numerous benefits that \u0026ldquo;going serverless\u0026rdquo; can bring to the table, including, but not limited to, reduced management overhead, reduced cost of ownership and faster adoption of newer technology. The great challenge with all of this is that, because no two businesses are typically the same, there is often a requirement to boot up a Virtual Machine and run a specified app within a full server environment, so that we can achieve the level of required functionality to suit our business scenario. Self-Hosted agents are an excellent example of this concept in practice, and one that is hard to prevent from being regularly utilised, irrespective of how vexatious this may make us. While the ability to use Microsoft Hosted Build and Release Agents is greatly welcome (especially given there is no cost involved), it would be nice to see if this could be \u0026ldquo;opened up\u0026rdquo; to allow additional Agent machine tailoring for specific situations. I\u0026rsquo;m not going to hold my breath in this regard though - if I were in Microsoft\u0026rsquo;s shoes, I would shudder at the thought of allowing complete strangers the ability to deploy and run custom libraries on my critical LOB application. 
It\u0026rsquo;s probably asking for more trouble than the convenience it would provide 🙂\n","date":"2018-08-26T00:00:00Z","image":"/images/VisualStudio-FI.jpg","permalink":"/access-to-the-path-is-denied-build-definition-error-visual-studio-team-foundation-server-team-services/","title":"'Access to the path...is denied' Build Definition Error (Visual Studio Team Foundation Server/Team Services)"},{"content":"Cybersecurity should be an ongoing concern for any organisation, regardless of its size and complexity. This is chiefly for two essential business reasons:\nA cybersecurity incident or breach could, depending on its severity, result in significant reputational or financial damage if not adequately safeguarded against or handled correctly. When judging whether to award a contract to a business for a critical function, the awarding organisation will typically need to assuage themselves of any risk associated with placing this activity \u0026ldquo;outside the garden fence\u0026rdquo;. Cybersecurity is one aspect of assessing this risk, usually focused towards understanding what controls, policies and procedures exist within a business to ensure that sensitive data is handled appropriately. Traditionally, to adequately demonstrate sufficient competence in this area, the ISO 27001 standard acts as a watermark to indicate that proper information security management systems are in place within a business. Many routes are currently available towards achieving this accreditation. Its adoption can involve many complicated and highly integrated business changes which, for smaller organisations, may prove to be a significant challenge to put in place - laying aside any cost implications.\nIn recognition of this fact and as a general acknowledgement towards the increased risk the \u0026ldquo;internet age\u0026rdquo; brings to supplier/customer relationships (particularly in the public sector), the UK Government launched the Cyber Essentials scheme back in June 2014. Aimed at organisations of any size, it promises to provide the necessary help and reassurance that your business/organisation has put the necessary steps in place to \u0026rsquo;\u0026hellip;protect\u0026hellip;against common online threats\u0026rsquo;, affording the opportunity to advertise this fact to all and sundry.\nI\u0026rsquo;ve been through the process of successfully attaining the standard within organisations over the past few years, so I wanted to share some of my thoughts relating to the scheme, alongside some tips to help you along the way if you are contemplating adopting the scheme in the near future.\nTo begin with, I wanted to provide a detailed overview of the scheme, with some reasons why it may be something your organisation should consider. Cyber Essentials is structured as a tiered scheme, with two certification levels available, which differ significantly in their level of rigorousness:\nCyber Essentials: Sometimes referred to as \u0026ldquo;Cyber Essentials Basic\u0026rdquo;, this level of the standard is designed to assess your current IT infrastructure and internal processes, via a self-assessment questionnaire. The answers are then reviewed and marked against the standard. Cyber Essentials +: Using the answers provided during the Basic accreditation process, a more thorough assessment is carried out on your network by an external organisation, taking the form of a mini-penetration test of your infrastructure. You can read in further detail on the scheme\u0026rsquo;s website regarding each level. 
It should be noted, even if it may go without saying, that you must be Cyber Essentials Basic accredited before you can apply for the + accreditation. Both tiers of the standard also require renewal annually.\nWhether your organisation needs the scheme or not depends on your industry focus and, in particular, your appetite for working within the public sector. As noted on the GOV.UK website:\nFrom 1 October 2014, Government requires all suppliers bidding for contracts involving the handling of certain sensitive and personal information to be certified against the Cyber Essentials scheme.\nIts requirement has also spread itself further from there into some areas of the private sector. For example, I have seen tenders/contracts in recent times explicitly asking for Cyber Essentials + as a minimum requirement for any suppliers. In short, you should be giving some thought towards the scheme if you do not have anything existing in place and if you have a desire to complete public sector work in the very near future.\nWhat You Can Expect The exact process will differ depending on which accreditation body you work with, but the outline process remains the same for both levels of the scheme:\nFor the Basic, you will be asked to complete and return answers to the self-assessment question list. Responses will then be scored based on a Red, Amber, Green (RAG) scoring system, with full justifications for each score provided. Depending on the number and severity of issues involved, an opportunity to implement any required changes and resubmit your answers may be given at no additional cost; otherwise, failure will mean that you will have to apply to complete the questionnaire again for an additional fee. Turnaround for completed responses has been relatively quick in my experience, with the upshot being that you could potentially get the accreditation in place within a few weeks or less. For those who may be worried about the contents of the questionnaire, the good news is that you can download a sample question list at any time to evaluate your organisation\u0026rsquo;s readiness. As hinted towards already, the + scheme is a lot more involved - and costly - to implement. You will be required to allow an information security consultant access to a representative sample of your IT network (including servers and PC/Mac endpoints), for both internal and external testing. The consultant will need to be given access to your premises to carry out this work, using a vulnerability assessment tool of their choosing. There will also be a requirement to evidence any system or process that you have attested towards as part of the Basic assessment (e.g. if you are using Microsoft Intune for Mobile Device Management, you may be required to export a report listing all supervised devices and demonstrate a currently supervised device). It is almost a certainty that there will be some remedial work that needs to take place resulting from any scan, most likely amounting to the installation of any missing security updates. Previously, you were granted a \u0026ldquo;reasonable\u0026rdquo; period to complete these actions; for 2018, the scheme now requires that all corrective actions are completed within 30 days of the on-site assessment taking place. Once this is done and evidenced accordingly, a final report will be sent, noting any additional observations, alongside confirmation of successfully attaining the + accreditation. 
Costs will vary, but if you are paying any more than £300 for the Basic or £1,500 + VAT for the + accreditation, then I would suggest you shop around. 🙂\nIs it worth it? As there is a cost associated with all of this, there will need to be a reasonable business justification to warrant this spend. The simple fact that you may now be required to contract with organisations who mandate this standard being in place is all the justification you may need, especially if the contract is of sufficiently high value. Or it could be that you wish to start working within the public sector. In both scenarios, the adoption of the standard seems like a no-brainer option if you can anticipate any work to be worth in excess of £2,000 each year.\nBeyond this, when judging the value of something, it is often best to consider the impact or positive change that it can bring to the table. Indeed, in my experience, I have been able to drive forward significant IT infrastructure investments off the back of adopting the scheme. Which is great\u0026hellip;but not so much from a cost standpoint. You, therefore, need to think carefully about any additional investment required to ensure compliance with what the standard is looking for. For example, if your organisation currently does not have Multi-Factor Authentication in place for all users, you will need to look at the license and time costs involved in rolling this out as part of your Cyber Essentials project. As mentioned already, ignorance is not an excuse, given that all questions are freely available for review, so you should ensure that this exercise is carried out before putting any money on the table.\nThe steps involved as part of the + assessment are, arguably, the best aspects of the scheme, given that you are getting an invaluable external perspective and vulnerability assessment at a rather healthy price point. Based on what I have witnessed, though, it would be good if this side of things was a little more in-depth, with additional auditing of answers from the Basic assessment, as I do feel that the scheme could be open to abuse as a consequence.\nA Few Additional Pointers The questions on the Basic self-assessment will generally be structured so that you can make a reasonable guess as to what the \u0026ldquo;right\u0026rdquo; answer should be. It is essential that the answers you give are reflective of current circumstances, especially if you wish to go for the + accreditation. If you find yourself lacking in specific areas, then go away and implement the necessary changes before submitting a completed self-assessment. Regular patching cycles are a key theme that crops up throughout Cyber Essentials, so as a minimum step, I would highly recommend that you implement the required processes to address this in advance of any + assessment. It will save you some running around as a consequence. Both assessments also test to ensure that you have a sufficiently robust Antivirus solution in place, particularly one that is automated to push out definition updates and - ideally - client updates when required. You should speak to your AV vendor before carrying out any Cyber Essentials assessment to verify that it supports this functionality, as it does help significantly in completing both the Basic and + assessment. An obligatory Microsoft plug here, but a lot of what is available on Office 365 can add significant value when looking at Cyber Essentials: Multi-Factor Authentication, as already discussed, will be needed for your user accounts.
Exchange Advanced Threat Protection is particularly useful during the + assessment in providing validation that your organisation protects against malicious file attachments. Last but not least, a Microsoft 365 subscription facilitates a range of benefits, including, but not limited, the latest available version of a Windows operating system, BitLocker drive encryption and policy management features. If you are currently looking for assistance adopting the scheme, then please feel free to contact me, and I would be happy to discuss how to assist you towards attaining the standard.\n","date":"2018-08-19T00:00:00Z","image":"/images/CyberEssentials-FI.png","permalink":"/some-thoughts-on-the-cyber-essentials-scheme/","title":"Some Thoughts on the Cyber Essentials Scheme"},{"content":"UPDATE 02/09/2018: It turns out that there is a far better way of fixing this problem. Please click here to find out more.\nI thought I was losing my mind the other day. This feeling can be a general occurrence in the world of IT, when something completely random and unexplainable happens - emphasised even more so when you have a vivid recollection of something behaving in a particular way. In this specific case, a colleague was asking why they could no longer access the list of Workflows setup within a version 8.2 Dynamics 365 Customer Engagement (D365CE) Online instance via the Settings area of the system. Longstanding CRM or D365CE professionals will recall that this has been a mainstay of the application since Dynamics CRM 2015, accessible via the Settings -\u0026gt; Processes group Sitemap area:\nSuffice to say, when I logged on to the affected instance, I was thoroughly stumped, as this area had indeed vanished entirely:\nI asked around the relatively small pool of colleagues who a) had access to this instance and b) had knowledge of modifying the sitemap area (more on this shortly). The short answer, I discovered, was that no one had any clue as to why this area had suddenly vanished. It was then that I came upon the following Dynamics 365 Community forum post, which seemed to confirm my initial suspicions; namely, that something must have happened behind the scenes with Microsoft or as part of an update that removed the Processes area from the SiteMap. Based on the timings of the posts, this would appear to be a relatively recent phenomenon and one that can be straightforwardly fixed\u0026hellip;if you know how to. 😉\nFor those who are unfamiliar with how SiteMaps work within the application, these are effectively XML files that sit behind the scenes, defining how the navigation components in CRM/ D365CE operate. They tell the application which of the various Entities, Settings, Dashboards and other custom solution elements that need to be displayed to end users. The great thing is that this XML can be readily exported from the application and modified to suit a wide range of business scenarios, such as:\nOnly make a specific SiteMap area available to users who are part of the Sales Manager Security Role. Override the default label for the Leads SiteMap area to read Sales Prospect instead. Link to external applications, websites or custom developed Web Resources. What this all means is that there is a way to fix the issue described earlier in the post and, even better, the steps involved are very straightforward. This is all helped by quite possibly the best application that all D365CE professionals should have within their arsenal - the XrmToolBox. 
With the help of a specific component that this solution provides, alongside a reliable text editor program, the potentially laborious process of fiddling around with XML files and the whole export/import process can become streamlined so that anybody can achieve wizard-like ability in tailoring the applications SiteMap. With all this in mind, let\u0026rsquo;s take a look on how to fix the above issue, step by step:\nDownload and run XrmToolbox and select the SiteMap Editor app, logging into your CRM/D365CE instance when prompted: After logging in, you should be greeted with a screen similar to the below:\nClick on the Load SiteMap button to load the SiteMap definition for the instance you are connected to. Once loaded, click on the Save SiteMap button, saving the file with an appropriate name on an accessible location on your local computer. Open the file using your chosen text editor, applying any relevant formatting settings to assist you in the steps that follow. Use the Find function (CTRL + F) to find the Group with the node value of Customizations. It should look similar to the image below, with the Group System_Setting specified as the next one after it: Copy and paste the following text just after the \u0026lt;/Group\u0026gt; node (i.e. Line 415): \u0026lt;Group Id=\u0026#34;ProcessCenter\u0026#34; IsProfile=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;Titles\u0026gt; \u0026lt;Title LCID=\u0026#34;1033\u0026#34; Title=\u0026#34;Processes\u0026#34; /\u0026gt; \u0026lt;/Titles\u0026gt; \u0026lt;SubArea Entity=\u0026#34;workflow\u0026#34; GetStartedPanePath=\u0026#34;Workflows_Web_User_Visor.html\u0026#34; GetStartedPanePathAdmin=\u0026#34;Workflows_Web_Admin_Visor.html\u0026#34; GetStartedPanePathAdminOutlook=\u0026#34;Workflows_Outlook_Admin_Visor.html\u0026#34; GetStartedPanePathOutlook=\u0026#34;Workflows_Outlook_User_Visor.html\u0026#34; Id=\u0026#34;nav_workflow\u0026#34; AvailableOffline=\u0026#34;false\u0026#34; PassParams=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;Titles\u0026gt; \u0026lt;Title LCID=\u0026#34;1033\u0026#34; Title=\u0026#34;Workflows\u0026#34; /\u0026gt; \u0026lt;/Titles\u0026gt; \u0026lt;/SubArea\u0026gt; \u0026lt;/Group\u0026gt; It should resemble the below if done correctly:\nSave a copy of your updated Sitemap XML file and go back to the XrmToolbox, selecting the Open SiteMap button. This will let you import the modified, copied XML file back into the Toolbox, ready for uploading back onto CRM/D365CE. At this stage, you can verify the SiteMap structure of the node by expanding the appropriate area within the main SiteMap window: When you are ready, click on the Update SiteMap button and wait until the changes are uploaded/published into the application. You can then log onto CRM/D365CE to verify that the new area has appeared successfully. Remember when I said to save a copy of the SiteMap XML? At this stage, if the application throws an error, then you can follow the steps above to reimport the original SiteMap to how it was before the change, thereby allowing you to diagnose any issues with the XML safely.\nIt is still a bit of mystery precisely what caused the original SiteMap area for Processes to go walkies. The evidence would suggest that some change by Microsoft forced its removal and that this occurred not necessarily as part of a major version update (the instance in our scenario has not been updated to a major release for 18 months at least, and this area was definitely there at some stage last year). 
One of the accepted truths with any cloud CRM system is that you are at the mercy of the solution vendor, ultimately, if they decide to modify things in the background with little or no notice. The great benefit in respect to this situation is that, when you consider the vast array of customisation and development options afforded to us, CRM/D365CE can be very quickly tweaked to resolve cases like this, and you do not find yourself at the mercy of operating a business system where your bespoke development options are severely curtailed.\n","date":"2018-08-12T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/the-mystery-of-the-missing-workflow-sitemap-area-dynamics-365-customer-engagement/","title":"The Mystery of the Missing Workflow Sitemap Area (Dynamics 365 Customer Engagement)"},{"content":"SQL Server Integration Services (SSIS) package execution can always throw up a few spanners, particularly when it comes to the task of deploying packages out to a SQL Server SSISDB catalog - principally, a specialised database for the storage of .dtsx packages, execution settings and other handy profile info to assist with automation. Problems can generally start creeping in if you decide to utilise non-standard connectors for your package data sources. For example, instead of employing the more oft utilised Flat File Connection Manager for .csv file interaction, there may be a requirement to use the Excel Connection Manager instead. While I would generally favour the former Connection Manager where possible, the need to handle .xlsx file inputs (and to output into this file format) comes up more often than you might think. Bearing this in mind, it is, therefore, always necessary to consider the impact that deploying out what I would term a \u0026ldquo;non-standard Connection Manager\u0026rdquo; (i.e. a non-Flat File Connection Manager) can have on your package release. Further, you should give some serious thought towards any appropriate steps that may need to be taken within your Production environment to ensure a successful deployment.\nWith all of this in mind, you may encounter the following error message when deploying out a package that utilises the ADO.NET Connector for MySQL - a convenient driver released by Oracle that lets you connect straightforwardly with MySQL Server instances, à la the default SQL Server ADO.NET connector:\nError: Microsoft.SqlServer.Dts.Runtime.DtsCouldNotCreateManagedConnectionException: Could not create a managed connection manager. at Microsoft.SqlServer.Dts.Runtime.ManagedHelper.GetManagedConnection\nSpecifically, this error will appear when first attempting to execute your package within your Production environment.
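As an aside, if you need to dig out the full error text after a failed execution, the SSISDB catalog views can be queried directly. The below is only a rough sketch - it assumes the standard catalog.event_messages view, where a message_type of 120 denotes an error:
-- Pull back the most recent error messages logged against package executions in the SSISDB catalog
SELECT TOP (20)
    em.operation_id,
    em.message_time,
    em.package_name,
    em.message
FROM SSISDB.catalog.event_messages AS em
WHERE em.message_type = 120
ORDER BY em.message_time DESC;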
The good news is that the reason for this error - and its resolution - can be easily explained and, with minimal effort, resolved.\nThe reason why this error may be happening is that the appropriate ADO.NET MySQL driver is missing from your target SSISDB server. There is no mechanism for the proper dependent components to be transported as part of deploying a package to a catalog, meaning that we have to resort to downloading and installing the appropriate driver on the server that is executing the packages to resolve the error. Sometimes, as part of long development cycles, this critical step can be overlooked by the package developer. Or, it could be that a different individual/team that is responsible for managing deployments is not necessarily well-briefed ahead of time on any additional requirements or dependencies needed as part of a release.\nFor this particular example, getting things resolved is as simple as downloading and installing onto the SSISDB Server the latest version of the MySQL Connector Net drivers that can be found at the link below:\nMySQL Connector/NET 8.0\nIf you find yourself in a similar situation that does not involve the above Data Connector, then your best bet is to interrogate the package in question further and identify the appropriate drivers that are needed.\nNow, the key thing to remember about all of this is that the driver version on the client development machine and the SSISDB server needs to be precisely the same. Otherwise, you will more than likely get another error message generated on package execution, resembling this:\nCould not load file or assembly \u0026lsquo;MySql.Data, Version=6.10.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d\u0026rsquo; or one of its dependencies. The located assembly\u0026rsquo;s manifest definition does not match the assembly reference.\nIn which case, you will need to resolve the version conflict, ideally by ensuring that both machines are running the latest version of the corresponding driver. An uninstall and server reboot could be necessary at this juncture, so be sure to tread cautiously.\nSSIS development can often feel like a protracted, but ultimately worthwhile, process. With this in mind, it is natural to expect some bumps in the road and for potentially essential steps to be overlooked, particularly in larger organisations or for more complex deployments. Putting appropriate thought towards release management notes and even dedicated testing environments for deployments can help to mitigate the problem that this post references, ensuring a smooth pathway towards a prosperous, error-free release 🙂\n","date":"2018-08-05T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/could-not-create-a-managed-connection-manager-error-message-on-ssisdb-dtsx-package-execution/","title":"'Could not create a managed connection manager' Error Message on SSISDB .dtsx Package Execution"},{"content":"With two major Microsoft events recently taking place back to back over the last fortnight - Microsoft Inspire \u0026amp; the Business Applications Summit - there is, understandably, a plethora of major new announcements that concern those of us who are working in the Business Applications space today. The critical announcement from my perspective is the October 2018 Business Application Release Notes, which gives us all a nice and early look at what is going to be released soon for Dynamics 365, Microsoft Flow, PowerApps, Power BI and other related services. Unlike previous Spring or Fall releases, the sheer breadth of different features that now sit within the Business Applications space makes it all the more important to consider any new announcement carefully and to ensure that it is adequately factored into any architectural decisions in the months ahead.
If you are having trouble wading through all 239 pages of the document, then I have been through the notes and picked out what I feel are the most relevant highlights from a Dynamics CRM/Dynamics 365 Customer Engagement (D365CE) perspective and their potential impact or applicability to business scenarios.\nSharePoint Integration with Portals This is a biggie and a feature that no doubt many portal administrators have been clamouring for, with the only other option being a complicated SDK solution or a third-party vendor approach. Document management directly within CRM/D365CE has always been a sketchy idea at best when you consider the database size limitations of the application and the cost for additional database storage. That\u0026rsquo;s why SharePoint has always represented the optimal choice for storing any documents related to a record, facilitating a much more inexpensive route and affording opportunities to take advantage of the vast array of SharePoint features. When you start adding portals into the mix - for example, to enable customers to upload documents relating to a loan application - the whole thing currently falls flat on its face, as documents (to the best of my knowledge) can only be uploaded and stored directly within CRM/D365CE. With the removal of this limitation, a significant adoption barrier for CRM Portals will be eliminated, and I am pleased to also see an obligatory Power BI reference included as part of this announcement 🙂\nIn addition, we are providing the ability to embed Power BI charts within a portal, allowing users to benefit from the interactive visualizations of Power BI.\nPortal Configuration Migration Another process that can regularly feel disjointed and laborious is the deployment of Portal changes from Dev -\u0026gt; UAT/Test -\u0026gt; Production environments, with no straightforward means of packaging up changes via a Solution or similar for easy transportation. This torment promises to change as part of the release in October, thanks to the following:\nTo reduce the time and effort required to manage portal configuration across environments, we are publishing schema for configuration migration that works with the Configuration Migration SDK tool.\nIf you are not aware of the Configuration Migration tool, then you owe it to yourself to find out more about what it can accomplish, as I am sure it will take a lot of the headache out of everyday business settings, product catalogue or other non-solution customisation activity that you may be carrying out in multiple environments. The neat thing about this particular announcement is that an existing, well-established tool can be used to achieve these new requirements, as opposed to an entirely new, unfamiliar mechanism. Integration with the current Configuration Migration tool will surely help in adopting this solution more quickly and enable deployment profiles to be put together that contain nearly all required configuration data for migration.\nPortal Access Restrictions In Portal terms, this is a relatively minor one, but a welcome addition nonetheless. When testing and developing any software application, it is always prudent to restrict access to only the users or organisations who require access to it. This option has not been available to Portals to date, but no longer thanks to the following announcement:\nThis feature would allow administrators to define a list of IP addresses that are allowed to access your portal.
The allow list can include individual IP addresses or a range of IP addresses defined by a subnet mask. When a request to the portal is generated from any user, their IP address is evaluated against the allow list. If the IP address is not in the list, the portal replies with an HTTP 403 status code\nThe capabilities exposed here demonstrate a lot of parity with Azure Web Apps, which is, I understand, what is used to host portals. I would hope that we can see the exposure of more Azure Web App configuration features for portal administrators in the years ahead.\nMulti-resource Scheduling There has been a real drive in getting the Resource Scheduling experience within D365CE looking as visually optimal and feature-rich as possible in recent years. There is a specific reason to explain this - the introduction of Project Service Automation and Field Service capability requires this as an almost mandatory pre-requisite. There is a wide array of new features relating to resource scheduling as part of this update, but the announcement that caught my eye, in particular, was the ability to group related resources on the Resource Scheduler, as predefined \u0026ldquo;crews\u0026rdquo;. This new feature is hugely welcome for many reasons:\nDifferent types of jobs/work may require resources with a specific set of skills in combination to complete. It may be prudent to group specific resources if, for example, previous experience tells you that they work well together. Location may be a factor as part of all this, meaning that by scheduling a \u0026ldquo;crew\u0026rdquo; of resources together within the same locale, you can reduce the unnecessary effort involved in travelling and ensure your resources are utilising their time more effectively. The release notes give us a teaser of how this will look, and I am eager to see how this works in practice:\nLeave and absence management in Dynamics 365 Talent I have been watching with casual, distant interest how the Dynamics 365 Talent product has been developing since its release, billed as one of the first applications built on top of the new Unified Interface/Common Data Service experience. I have noted its primary utility to date has been more towards the Human Resources hiring and onboarding process, with a significant feature gap that other HR systems on the market today would more than happily fill, by providing central hubs for policy documents, managing personal information and leave requests. I think there may be a recognition of this fact within Microsoft, which explains the range of new features contained within Dynamics 365 Talent as part of the October 2018 release. The new feature that best epitomises the applications maturity is the ability to manage leaves and absences, noted as follows:\nOrganizations can configure rules and policies related to their leave and absence plans. They can choose how employees accrue their time off, whether it\u0026rsquo;s by years of service or by hours worked. They also can configure when this time off can be taken and if certain types of time off must be taken before others. If they allow employees to get a pay-out of their time off, this can be configured as well.\nManagers can see an all-up calendar view of their team members\u0026rsquo; time off as well as company holidays and closures. 
This view shows them where they may have overlap as well as time-off trends for their team and enables them to drill down to gain a better understanding of an individual\u0026rsquo;s time off.\nThis immediately places the system as a possible challenger to other HR systems and represents a natural, and much needed, coming-of-age development for the system. I would undoubtedly say that Dynamics 365 Talent is starting to become something that warrants much closer attention in future.\nDevelop Microsoft Flows Using Visio Microsoft Flow is great. This fact should be self-evident to regular followers of the blog. As a regularly developing, relatively young product, though, it is understandable that some aspects of it require further work. An excellent example of this is the ability to manage the deployment of Flows between different environments or stages. While Flows big brother, Microsoft Logic Apps, has this pretty well covered, the ability to deploy development or concepts Flows repeatedly often ends up being a case of manually creating each Flow again from scratch, which isn\u0026rsquo;t exactly fun.\nThe October release promises to change this with the introduction of a specific piece of integration with Microsoft Visio:\nMicrosoft Visio enables enterprises to capture their business processes using its rich modeling capabilities. Anyone who creates flowcharts or SharePoint workflows can now use Visio to design Microsoft Flow workflows. You can use Visio\u0026rsquo;s sharing and commenting capabilities to collaborate with multiple stakeholders and arrive at a complete workflow in little time. As requested here, you can publish the workflow to Microsoft Flow, then supply parameters to activate it.\nThis feature will be available to Visio Online Plan 2 subscription users. Office Insiders can expect early access in July 2018. In the future, you\u0026rsquo;ll also be able to export existing Flows and modify them in Visio.\nNow, it\u0026rsquo;s worth noting, in particular, the requirement for Visio Online Plan 2 to accommodate this neat piece of functionality. But, assuming this is not an issue for your organisation, the potential here to define Flows locally, share them quickly for approval, and deploy them en masse is enormous, bringing a much-needed degree of automation to a product that currently does not support this. I\u0026rsquo;m looking forward to getting my hands on this in due course.\nCustom Fonts in Power BI Continuing the theme of obligatory Power BI references, my final pick has to be the introduction of Custom Fonts into Power BI, which will be in Public Preview as part of October\u0026rsquo;s release:\nCorporate themes often include specific fonts that are distributed and used throughout the company. You can use those fonts in your Power BI reports.\nFor any font property, Power BI Desktop will show a complete list of all the fonts installed on your computer. You can choose from these to use in your report. When distributing the report, anyone with the font installed will see it reflected in the report. If the end user doesn\u0026rsquo;t have it installed, it falls back to the default font.\nFor those who have particular branding requirements that require accommodation within their Power BI Reports, this new feature completes the puzzle and takes you an additional step further in transforming your reports so that they are almost unrecognisable from a default Power BI Report. 
Hopefully, the preview period for this new feature will be relatively short, with the feature then rolled out as part of general availability.\nConclusions or Wot I Think The list above is just a flavour of my \u0026ldquo;choice cuts\u0026rdquo; of the most exciting features that will be in our hands within the next few months, and I really would urge you to read through the entire document if you have even just a little passing interest in any of the technologies included in these release notes. As you can tell, my list is ever so skewed towards Portals over everything else. This is for a good reason - ever since Microsoft\u0026rsquo;s acquisition of ADXStudio a few years back, we have seen some progress in the development of CRM Portals from Microsoft, mainly in the context of integrating the product more tightly for Online users. In my view, this has been the only significant effort we have seen in taking the product forward, with a relatively extensive list of backlog feature requests that looked to have been consigned to the recycling bin. The October Release very much seems to flip this on its head and I am pleased to discover a whole range of new, most clamoured for, features being made available on Portals, which take the product forward in strides and enable organisations to more easily contemplate their introduction.\nAs you will probably expect based on where things are going in the D365CE space at the moment, the announcements for Flow, PowerApps and the Common Data Service are all very much framed towards the end goal of integrating these and the \u0026ldquo;old\u0026rdquo; CRM/D365CE experience together as tightly as possible, a change that should be welcomed. The release notes are also crucial in highlighting how important it is for anyone working in this space to be as multi-skilled as possible from a technology standpoint. Microsoft is (quite rightly) encouraging all technology professionals to be fast and reactive to change, and expecting us to have a diverse range of skills to help the organisations/businesses we work with every day. There is no point in fighting this, and the best way for you to succeed in this climate is to identify the relevant opportunities that you can drive forward from these product announcements and proactively implement as part of the work you are doing each day. In a nutshell, you should know how to deploy a Power BI Dashboard, have familiarity with the type of services that Flow connects to, understand the difference between a Canvas and a Model-driven PowerApp and - amongst all of this - understand how D365CE solutions operate. Be a Swiss Army Knife as much as possible and deliver as much value and benefit in your role as you possibly can.\n","date":"2018-07-29T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/my-highlights-from-the-october-2018-business-application-release-notes/","title":"My Highlights from the October 2018 Business Application Release Notes"},{"content":"Earlier this year, the Business Applications team at Microsoft published a blog post titled Modernizing the way we update Dynamics 365, a significant article that anyone involved with Dynamics 365 Customer Engagement (D365CE) should take time to read through carefully. Indeed, as a direct consequence of the announcements contained in this post, you may now be receiving emails similar to the below if you are an administrator of a D365CE instance:\nChanges to well-established processes can always produce a mixture of questions, confusion and, in some cases, frustration for IT teams.
Once you have fully understood the broader context of where D365CE is going and also the general sea change that has been occurring since Satya Nadella came to the helm of Microsoft, the modifications to the Update Policy are welcome and, arguably, necessary to ensure that D365CE users and administrators can take advantage of the different features available within a D365CE subscription today. For those who are still scratching their heads at all of this, what follows is a summary of the most significant changes announced, along with some additional observations from me on why it is important to embrace all these changes wholeheartedly.\nVersion 9 or Bust Longstanding D365CE online customers will be used to the regular update cycles and the ability to defer significant application updates for a period. While this can be prudent for more complex deployments, it does potentially lead to additional overhead in the long term, especially if Microsoft were ever to force this decision upon you. The well-established advice has always been to proactively manage your updates at your own pace, ideally targeting at least one major update a year. If you haven\u0026rsquo;t been doing this, then you may now be in for a particularly nasty shock. As mentioned in the article:\nSince every customer will be updated on the continuous delivery schedule, your organization needs to update to the latest version if you are running an older version of Dynamics 365\u0026hellip;For customers who are currently running older versions of Dynamics 365, we will continue to provide you with the ability to schedule an update to the latest version and want to make sure this effort is as seamless as possible through continuous improvements in our update engine\u0026hellip;For Dynamics 365 (online) customer engagement applications, we sent update communications in May to all customers running v8.1 and have scheduled updates. Customers running v8.2 should plan to update to the latest version by January 31, 2019.\nThis point is reinforced in a much more explicit manner in the email above:\nACTION NEEDED: Schedule an update for your organization by August 16, 2018. The date for the update should be on or before January 31, 2019. You can find instructions on how to schedule and approve updates here.\nIf you do not schedule an update in the timeframe mentioned above, Microsoft will schedule an automatic update for your organization on August 17, 2018 and communicate the dates. The automatic update would take place during your normal maintenance window.\nThe implications should be clear, and it certainly seems that, in this scenario, Microsoft has decided to eliminate any degree of upgrade flexibility for its customers.\nNo Changes to Minor/Major Updates? Again, if you are familiar with how D365CE Online operates, there are two flavours of updates:\nMinor updates, to address bugs, performance and stability issues, are continually pushed out \u0026ldquo;behind the scenes\u0026rdquo;. You have no control over when and how these are applied, but they will always be carried out outside your region\u0026rsquo;s regular business hours. The Office 365 Administrator Portal is your go-to place to view any past or upcoming minor updates. Major updates, generally referred to as Spring Wave or Fall Update releases. There have always been two of these each year, and administrators can choose when to apply these to a D365CE instance. These updates can generally take much longer to complete but will introduce significant new features.
Microsoft\u0026rsquo;s new Update Policy seems to leave this convention intact, with a noteworthy change highlighted below in bold:\nWe are transforming how we do service updates for Dynamics 365 (online). We will deliver two major releases per year – April and October – offering new capabilities and functionality. These updates will be backward compatible so your apps and customizations will continue to work post update. New features with major, disruptive changes to the user experience are off by default. This means administrators will be able to first test before enabling these features for their organization.\nIn addition to the two major updates, we will continue to deploy regular performance and reliability improvement updates throughout the year. We are phasing deployments over several weeks following safe deployment practices and monitoring updates closely for any issues.\nSome additional detail around this would be welcome to determine its effectiveness, but I can imagine some parity with the Experimental Features area in PowerApps, which - contrary to the above - will often introduce new features that are left on by default. A derived version of this feature would, I think, work in practice and hopefully streamline the process of testing new functionality without it being introduced unintentionally into Production environments.\nOn-Premise Implications One question that all of this may raise is around the on-premise version of the application, in particular for those who consume online subscriptions, but use their dual-usage rights to create an on-premise instance instead. This situation becomes more pressing when you consider the following excerpt from the refreshed Update Policy:\nDynamics 365 (Online) version 8.2 will be fully supported until January 31, 2019. Customers running version 8.2 should plan to update to the latest version prior to this date.\nNow, the important thing to stress is the fact that the above quotation makes explicit reference to Online as opposed to on-premise. Also, when we check Microsoft\u0026rsquo;s product lifecycle page, you can see that Mainstream support for this product ends in January 2021. On-premise administrators can, I would suggest, breathe a sigh of relief for now, but I would urge you to contact Microsoft to clarify your support arrangements. As an organisation, I think you should also start seriously asking yourself the following questions:\nIs an online, Software as a Service (SaaS) version of the application going to be easier to maintain compared with dedicated server environment(s)? Is it possible to achieve all of your required functionality and business requirements using the Online version of the application? Do you want to ensure you have the latest features exposed to you and can take advantage of Online-only functionality, such as Export to Excel Online? If the answer to all of the above questions is \u0026ldquo;Yes\u0026rdquo;, then a migration to the Online version of the application would be my recommended course of action, as it wouldn\u0026rsquo;t surprise me if Microsoft were to stop releasing new versions/service packs for the on-premise version of the product or eliminate it by providing inexpensive sandbox instance options.
The announcement earlier this year of version 2 of the Common Data Service - which is utilising the existing D365CE SQL database for all customisations - is the key driver behind a lot of the changes that are happening in the CRM/D365CE space today. The focus for the product team at Microsoft currently appears to be towards knitting together both experiences into the PowerApps interface. What this means in practice is that the traditional customisation experience is going to slowly fade away, to be replaced by Model-Driven App development instead. This refresh is excellent for several reasons - it provides a much-needed interface update, while also exposing additional functionality to us when creating business applications - but it is evident that such a massive change will require a consistent playing field for all of Microsoft\u0026rsquo;s existing version 8.2 and below D365CE customers. Getting everyone onto version 9 of the application is an obvious first step towards rolling out version 2 of the Common Data Service for all existing customers while ensuring that D365CE can fit into the mould of other application release cycles across Microsoft today. Embracing the change should not be a difficult thing to do and, when you understand the broader context, there is no other option available on the table.\nSo what are the key takeaways from this that you should be thinking about in the weeks and months ahead? My suggested list would include the following:\nSchedule your update to version 9 of the application manually well in advance of August 16th 2018. DO NOT put yourself in a position where you are having an update forced upon you and give yourself the amount of time needed to successfully plan and test your upgrade in good time before January 31st 2019. I would also anticipate that upgrade slots may start to fill up fast if you want to wait until as late as possible too 🙂 Start considering your future strategy with regard to the on-premise version of the application, if you are still supporting these environments. I speak with literally zero authority here, but I would not be surprised if the on-premise version of the application receives no further updates at all in future, or if dual-usage rights get revoked entirely. Get familiar with the Common Data Service and Power Apps, as this is increasingly going to be the go-to area for D365CE development and administration in the future. If you get the opportunity to attend one of Microsoft\u0026rsquo;s PowerApps in a Day courses, then be sure to go along without any hesitation. I would also be happy to speak to and help anyone with training in this area. As with anything in life, embrace change, be proactive and identify areas of opportunity from this. A good one from my perspective is the potential to more easily introduce the staggering array of differing Business Application functionality, with the outcome being the ability to quickly deploy bespoke business applications that achieve any possible requirement and integrate with a wide variety of different services or datasets.
The flaw, relating to how Group Policy handles data, potentially allows:\n\u0026hellip;remote code execution if an attacker convinces a user with a domain-configured system to connect to an attacker-controlled network. An attacker who successfully exploited this vulnerability could take complete control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.\nMicrosoft was quick to release a corresponding security patch via the Windows Update service and each corresponding Operating System update can be downloaded and installed via the links below\nMS15-011: Windows Vista Service Pack 2 MS15-011: Windows Vista x64 Edition Service Pack 2 MS15-011: Windows Server 2008 for 32-bit Systems Service Pack 2 MS15-011: Windows Server 2008 for x64-based Systems Service Pack 2 MS15-011: Windows Server 2008 for Itanium-based Systems Service Pack 2 MS15-011: Windows 7 for 32-bit Systems Service Pack 1 MS15-011: Windows 7 for x64-based Systems Service Pack 1 MS15-011: Windows Server 2008 R2 for x64-based Systems Service Pack 1 MS15-011: Windows Server 2008 R2 for Itanium-based Systems Service Pack 1 MS15-011: Windows 8 for 32-bit Systems MS15-011: Windows 8 for x64-based Systems MS15-011: Windows 8.1 for 32-bit Systems MS15-011: Windows 8.1 for x64-based Systems MS15-011: Windows Server 2012 MS15-011: Windows Server 2012 R2 MS15-011: Windows Server 2008 for 32-bit Systems Service Pack 2 MS15-011: Windows Server 2008 for x64-based Systems Service Pack 2 MS15-011: Windows Server 2008 R2 for x64-based Systems Service Pack 1 MS15-011: Windows Server 2012 MS15-011: Windows Server 2012 R2 Now, you may be asking at this point, why are you posting about this now in mid-2018? Surely the vulnerability has been addressed on all affected systems that are patched regularly? Well, as I found out very recently, this security patch is one of many released by Microsoft that requires additional administrator intervention after installation to ensure that the exploit hole is properly filled. The modifications required can be completed either via the Local Group Policy Editor for single machines not joined to a domain or Group Policy Management Console from a domain controller. The second option is naturally preferred if you are managing a large estate of Windows machines. Below are summarised steps that should provide the appropriate guidance on applying the fix for both environments\nNavigate to the Management Editor and expand and open up the Computer Configuration/Policies/Administrative Templates/Network/Network Provider folder path. On the right-hand pane, you should see an item called Hardened UNC Paths, marked in a state of Not configured. Click on it to open its properties There are then a couple of steps that need to be completed on the pop-up window that appears: Ensure that the Enabled box is selected. In the Options tab, scroll down to the Show\u0026hellip; button and press it. The options at this stage depend upon your specific environment. For example, let\u0026rsquo;s assume that you have a domain with a file server called MyServer, which is configured for shared access. The most appropriate option, in this case, would be a Value name of \\\\MyServer\\* with a Value of RequireMutualAuthentication=1, RequireIntegrity=1. Another example scenario could be that multiple Servers are used for sharing out access out to a share called Company. 
In this case, you could use the Value name option of \\\\*\\Company with a Value of RequireMutualAuthentication=1, RequireIntegrity=1. Both of these examples are reproduced in the screenshot below, for your reference. Press the OK button to confirm the UNC path fields and Apply to make the policy change. The final step will be to enforce a group policy refresh on the target machine and any others on the domain. This can be done by running the gpupdate /force command from a Command Prompt and confirming that no errors are generated in the output. And that\u0026rsquo;s it! Your Windows domain/computer should now be properly hardened against the vulnerability 🙂\nThis whole example represents an excellent case study on the importance of regularly reviewing security bulletins or announcements from Microsoft. The process of carrying out Windows updates can often become one of those thankless tasks that can grind the gears of even the most ardent server administrators. With this in mind, it can be expected that a degree of apathy or lack of awareness regarding the context for certain updates can creep in, leading to situations where issues like this only get flagged up during a security audit or similar. I would strongly urge anyone who is still running one or all of the above Operating Systems to check their group policy configuration as soon as possible to verify that the required changes indicated in this post have been applied.\n","date":"2018-07-15T00:00:00Z","image":"/images/WindowsServer-FI.png","permalink":"/microsoft-security-bulletin-ms15-011-and-the-group-policy-fix-you-may-have-missed/","title":"Microsoft Security Bulletin MS15-011 and the Group Policy Fix You May Have Missed"},{"content":"Once upon a time, there was a new cloud service known as Windows Azure. Over time, this cloud service developed with new features, became known more generally as just Azure, embraced the unthinkable from a technology standpoint and also went through a complete platform overhaul. Longstanding Azure users will remember the \u0026ldquo;classic\u0026rdquo; portal, with its very\u0026hellip;distinctive\u0026hellip;user interface. As the range of different services offered on Azure increased and the call for more efficient management tools became almost deafening, Microsoft announced the introduction of a new portal experience and Resource Group Management for Azure resources, both of which are now the de facto means of interacting with Azure today. The old style portal indicated above was officially discontinued earlier this year. In line with these changes, Microsoft introduced new, Resource Manager compatible versions of pretty much every major service available on the \u0026ldquo;classic\u0026rdquo; portal\u0026hellip;with some notable exceptions. The following \u0026ldquo;classic\u0026rdquo; resources can still be created and worked with today using the new Azure portal:\nThis provides accommodation for those who are still operating compute resources dating back to the days of yore, allowing you to create and manage resources that may be needed to ensure the continued success of your existing application deployment. In most cases, you will not want to create these \u0026ldquo;classic\u0026rdquo; resources as part of new project work, as the equivalent Resource Manager options should be more than sufficient for your needs. The only question mark around this concerns Cloud Services.
There is no equivalent Resource Manager resource available currently, with the recommended option for new deployments being Azure Service Fabric instead. Based on my research online, there appears to be quite a difference in feature breadth between the two offerings, with Azure Service Fabric arguably being overkill for more simplistic requirements. There also appears to be some uncertainty over whether Cloud Services are technically considered deprecated or not. I would highly recommend reading Andreas Helland\u0026rsquo;s blog post on the subject and forming your own opinion from there.\nFor both experiences, Microsoft provided a full set of automation tools in PowerShell to help developers carry out common tasks on the Azure Portal. These are split out into the standard Azure cmdlets for the \u0026ldquo;classic\u0026rdquo; experience and a set of AzureRM cmdlets for the new Resource Management approach. Although the \u0026ldquo;classic\u0026rdquo; Azure resource cmdlets are still available and supported, they very much operate in isolation - that is, if you have a requirement to interchangeably create \u0026ldquo;classic\u0026rdquo; and Resource Manager resources as part of the same script file, then you are going to encounter some major difficulties and errors. One example of this is that the ability to switch subscriptions that you have access, but not ownership, to becomes nigh on impossible to achieve. For this reason, I would recommend utilising AzureRM cmdlets solely if you ever have a requirement to create classic resources to maintain an existing deployment. To help accommodate this scenario, the New-AzureRmResource cmdlet really becomes your best friend. In a nutshell, it lets you create any Azure Resource of your choosing when executed. The catch around using it is that the exact syntax to utilise as part of the -ResourceType parameter can take some time to discover, particularly in the case of working with \u0026ldquo;classic\u0026rdquo; resources. What follows are some code snippets that, hopefully, provide you with a working set of cmdlets to create the \u0026ldquo;classic\u0026rdquo; resources highlighted in the screenshot above.\nBefore you begin\u0026hellip; To use any of the cmdlets that follow, make sure you have connected to Azure, selected your target subscription and have a Resource Group created to store your resources using the cmdlets below. You can obtain your Subscription ID by navigating to its properties within the Azure portal:\n#Replace the parameter values below to suit your requirements $subscriptionID = \u0026#39;36ef0d35-2775-40f7-b3a1-970a4c23eca2\u0026#39; $rgName = \u0026#39;MyResourceGroup\u0026#39; $location = \u0026#39;UK South\u0026#39; Set-ExecutionPolicy Unrestricted Login-AzureRmAccount Set-AzureRmContext -SubscriptionId $subscriptionID #Create Resource Group New-AzureRMResourceGroup -Name $rgName -Location $location With this done, you should hopefully encounter no problems executing the cmdlets that follow.\nCloud Services (classic) #Create an empty Cloud Service (classic) resource in MyResourceGroup in the UK South region New-AzureRmResource -ResourceName \u0026#39;MyClassicCloudService\u0026#39; -ResourceGroupName $rgName ` -ResourceType \u0026#39;Microsoft.ClassicCompute/domainNames\u0026#39; -Location $location -Force Disks (classic) #Create a Disk (classic) resource using a Linux operating system in MyResourceGroup in the UK South region.
#Needs a valid VHD in a compatible storage account to work correctly New-AzureRmResource -ResourceName \u0026#39;MyClassicDisk\u0026#39; -ResourceGroupName $rgName -ResourceType \u0026#39;Microsoft.ClassicStorage/storageaccounts/disks\u0026#39; ` -Location $location ` -PropertyObject @{\u0026#39;DiskName\u0026#39;=\u0026#39;MyClassicDisk\u0026#39; \u0026#39;Label\u0026#39;=\u0026#39;My Classic Disk\u0026#39; \u0026#39;VhdUri\u0026#39;=\u0026#39;https://mystorageaccount.blob.core.windows.net/mycontainer/myvhd.vhd\u0026#39; \u0026#39;OperatingSystem\u0026#39; = \u0026#39;Linux\u0026#39; } -Force Network Security groups (classic) #Create a Network Security Group (classic) resource in MyResourceGroup in the UK South region. New-AzureRmResource -ResourceName \u0026#39;MyNSG\u0026#39; -ResourceGroupName $rgName -ResourceType \u0026#39;Microsoft.ClassicNetwork/networkSecurityGroups\u0026#39; ` -Location $location -Force Reserved IP Addresses (classic) #Create a Reserved IP (classic) resource in MyResourceGroup in the UK South region. New-AzureRmResource -ResourceName \u0026#39;MyReservedIP\u0026#39; -ResourceGroupName $rgName -ResourceType \u0026#39;Microsoft.ClassicNetwork/reservedIps\u0026#39; ` -Location $location -Force Storage Accounts (classic) #Create a Storage Account (classic) resource in MyResourceGroup in the UK South region. #Storage account will use Standard Locally Redundant Storage New-AzureRmResource -ResourceName \u0026#39;MyStorageAccount\u0026#39; -ResourceGroupName $rgName -ResourceType \u0026#39;Microsoft.ClassicStorage/StorageAccounts\u0026#39; ` -Location $location -PropertyObject @{\u0026#39;AccountType\u0026#39; = \u0026#39;Standard-LRS\u0026#39;} -Force Virtual Networks (classic) #Create a Virtual Network (classic) resource in MyResourceGroup in the UK South Region New-AzureRmResource -ResourceName \u0026#39;MyVNET\u0026#39; -ResourceGroupName $rgName -ResourceType \u0026#39;Microsoft.ClassicNetwork/virtualNetworks\u0026#39; ` -Location $location -PropertyObject @{\u0026#39;AddressSpace\u0026#39; = @{\u0026#39;AddressPrefixes\u0026#39; = \u0026#39;10.0.0.0/16\u0026#39;} \u0026#39;Subnets\u0026#39; = @{\u0026#39;name\u0026#39; = \u0026#39;MySubnet\u0026#39; \u0026#39;AddressPrefix\u0026#39; = \u0026#39;10.0.0.0/24\u0026#39; } } VM Images (classic) #Create a VM image (classic) resource in MyResourceGroup in the UK South region. #Needs a valid VHD in a compatible storage account to work correctly New-AzureRmResource -ResourceName \u0026#39;MyVMImage\u0026#39; -ResourceGroupName $rgName -ResourceType \u0026#39;Microsoft.ClassicStorage/storageAccounts/vmImages\u0026#39; ` -Location $location ` -PropertyObject @{\u0026#39;Label\u0026#39; = \u0026#39;MyVMImage Label\u0026#39; \u0026#39;Description\u0026#39; = \u0026#39;MyVMImage Description\u0026#39; \u0026#39;OperatingSystemDisk\u0026#39; = @{\u0026#39;OsState\u0026#39; = \u0026#39;Specialized\u0026#39; \u0026#39;Caching\u0026#39; = \u0026#39;ReadOnly\u0026#39; \u0026#39;OperatingSystem\u0026#39; = \u0026#39;Windows\u0026#39; \u0026#39;VhdUri\u0026#39; = \u0026#39;https://mystorageaccount.blob.core.windows.net/mycontainer/myvhd.vhd\u0026#39;} } Conclusions or Wot I Think The requirement to work with the cmdlets shown in this post should only really be a concern for those who are maintaining \u0026ldquo;classic\u0026rdquo; resources as part of an ongoing deployment. It is therefore important to emphasise not to use these cmdlets to create resources for new projects.
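To illustrate the difference, here is a minimal sketch - assuming the AzureRM.Storage module is installed, that you are still logged in and that the $rgName and $location variables from earlier remain in scope - of how the classic Storage Account example above would be created as a Resource Manager resource using its dedicated cmdlet (older versions of the module use a -Type parameter in place of -SkuName):\n#Create a Resource Manager storage account using Standard Locally Redundant Storage New-AzureRmStorageAccount -Name \u0026#39;myrmstorageaccount\u0026#39; -ResourceGroupName $rgName ` -Location $location -SkuName \u0026#39;Standard_LRS\u0026#39;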
Alongside the additional complexity involved in constructing the New-AzureRmResource cmdlet, there is an abundance of new, updated AzureRM cmdlets at your disposal that enable you to more intuitively create the correct types of resources. The key benefit that these examples provide is the ability to use a single Azure PowerShell module for the management of your entire Azure estate, as opposed to having to switch back and forth between different modules. It is perhaps a testament to how flexible Azure is that cmdlets like New-AzureRmResource exist in the first place, ultimately enabling anybody to fine-tune deployment and maintenance scripts to suit any conceivable situation.\n","date":"2018-07-08T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/creating-classic-services-using-azure-resource-manager-powershell-cmdlets/","title":"Creating \"Classic\" Services Using Azure Resource Manager PowerShell Cmdlets"},{"content":"The Voice of the Customer (VoC) add-on solution for Dynamics 365 Customer Engagement (D365CE) presents a really nice way of incorporating survey capabilities within your existing Dynamics application estate, without any additional cost or significant administrative overhead. I\u0026rsquo;ve talked about the tool previously, within the context of specific application errors, and I can attest to its capabilities - both as a standalone solution and as one that can be leveraged alongside other D365CE functionality to generate additional value.\nOne feature that is particularly useful is the ability to include diverse Survey Response controls. This can cover the range of anticipated user inputs that most web developers would be used to - text inputs, ratings, date pickers etc. - along with more marketing specific choices such as Net Promoter Score and even a Smilies rating control. The final one of these really does have to be seen to be wholly appreciated:\nI hope you agree that this is definitely one of those features that becomes so fun that it soaks up WAY more time than necessary 🙂\nOne of the final options that VoC provides you is the ability to upload files to a Survey Response, which are stored within the application and made retrievable at any time by locating the appropriate Survey Response record. You can customise the guidance text presented to the user for this control, such as in the example below:\nUploaded files are then saved onto an Azure Blob Storage location (which you don\u0026rsquo;t have direct access to), with the access URL stored within D365CE. The inclusion of this feature does provide the capability to accommodate several potential business scenarios, such as:\nAllowing a service desk to create an automated survey that allows error logs or screenshots to be uploaded for further diagnosis. The gathering of useful photographic information as part of a pre-qualification process for a product installation. Enabling customers to upload a photo that provides additional context relating to their experience - either positive or negative. Putting all of this aside, however, there are a few things that you should bear in mind when first evaluating this feature for your particular requirements. What follows is my list of major things to be aware of, along with some tips to sidestep any issues.\nPrivacy concerns\u0026hellip; To better understand why this is relevant, it helps to be aware of exactly how files can be stored on Azure. Azure file storage works on the principle of \u0026ldquo;blobs\u0026rdquo; (i.e.
files), which can only be created within a corresponding Storage Container. These can be configured using a couple of different options, depending on how you would like to access your data, which is elaborated upon in this really helpful article:\nYou can configure a container with the following permissions:\nNo public read access: The container and its blobs can be accessed only by the storage account owner. This is the default for all new containers. Public read access for blobs only: Blobs within the container can be read by anonymous request, but container data is not available. Anonymous clients cannot enumerate the blobs within the container. Full public read access: All container and blob data can be read by anonymous request. Clients can enumerate blobs within the container by anonymous request, but cannot enumerate containers within the storage account. To presumably mitigate the need for complex deployments of the VoC solution, all uploaded Survey Response files are saved in Full public read access storage containers, meaning that anyone with the URL can access these files. And, as mentioned already, administrators have no direct access to the Azure Storage Account to modify these permissions, potentially compounding this access problem. Now, before you panic too much, the VoC solution deliberately structures the uploaded file in the following format:\nhttps://.blob.core.windows.net/-files/-\nThis degree of complexity added during this goes a long way towards satisfying any privacy concerns - it would be literally impossible for a human being or computer to guess what a particular uploaded file path is, even if they did have the Survey Response record GUID - but this still does not address the fact that the URL can be freely accessed and shared by anyone with sufficient permissions over the Survey Response entity in D365CE. You should, therefore, take appropriate care when scoping your security privileges within D365CE and look towards carrying out a Privacy Impact Assessment (PIA) over the type of data you are collecting via the upload file control.\n\u0026hellip;even after you delete a Survey Response. As mentioned above, the Blob Storage URL is tagged to the Survey Response record within D365CE. So what happens when you delete this record? The answer, courtesy of Microsoft via a support request:\nDeleting Survey Response should delete the file uploaded as part of the Survey Response\nBased on my testing, however, this does not look to be the case. My understanding of the VoC solution is that it needs to regularly synchronise with components located on Azure, which can lead to a delay in certain actions completing (publish a Survey, create Survey Response record etc.). However, a file from a Survey Response test record that I deleted still remains accessible via its URL up to 8 hours after completing this action. This, evidently, raises a concern over what level of control you have over potentially critical and sensitive data types that may be included in uploaded files. I would urge you to carry out your own analysis as part of a PIA to sufficiently gauge what impact, if any, this may have on your data collection (and, more critically, storage) activities.\nRestrictions For the most part, file upload controls are not a heavily constrained feature, but it is worthwhile to keep the following restrictions in mind:\nExecutable file types are not permitted for upload (.exe, .ps1, .bat etc.) 
Larger file types may not upload successfully, generating 404 server errors within the control. There is not a documented size limitation, but my testing would indicate that files as big as 60MB will not upload correctly. Only one file upload control is allowed per survey. The last of these limitations is perhaps the most significant constraint. If you do have a requirement for separate files to be uploaded, then the best option is to provide instructions on the survey, advising users to compress their files into a single .zip archive before upload.\nConclusions or Wot I Think Despite what this post may be leaning towards, I very much believe the VoC solution and, in particular, the ability to upload Survey Response files, is all in a perfect, working condition. Going a step further on this, when viewed from a technical standpoint, I would even say that its method of execution is wholly justified. With the advent of the General Data Protection Regulations (GDPR) earlier this year, current attention is all around ensuring that appropriate access controls over data have been properly implemented, that ensures the privacy of individuals is fully upheld. Here is where the solution begins to fall over to a degree and evidence of the journey that VoC has made in becoming part of the Dynamics 365 \u0026ldquo;family\u0026rdquo; becomes most apparent. As can be expected, any product which is derived from an external acquisition will always present challenges when being \u0026ldquo;smushed\u0026rdquo; with a new application system. I have been informed that there is an update coming to the VoC solution in August this year, with a range of new features that may address some of the data privacy concerns highlighted earlier. For example, the option will be provided for administrators to delete any uploaded file within a Survey Response on-demand. Changes like this will surely go a long way towards providing the appropriate opportunities for VoC to be fully utilised by businesses looking to integrate an effective, GDPR-proof, customer survey tool.\n","date":"2018-07-01T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/voice-of-the-customer-survey-response-file-uploads-a-few-thoughts/","title":"Voice of the Customer Survey Response File Uploads: A Few Thoughts"},{"content":"I was very honoured and excited to be involved with the very first D365UG/CRMUG North West Chapter Meeting earlier this week, hosted at the Grindsmith just off Deansgate in Manchester. This is the first time that a D365UG/CRMUG event has taken place in the North West, and we were absolutely stunned by the level of interest this event generated - all in all, 37 people attended, representing a broad spectrum of Microsoft partners and organisations of varying sizes.\nSetting up for the inaugural @CRMUG_UK_NW good turn out so far and looks to be an exciting night. #D365 pic.twitter.com/jZO9nbUgNs\n\u0026mdash; Matt Collins-Jones | MCJ (@D365Geek) June 20, 2018 Wooop!!! About to get going with the the first #d365uguk meeting in Manchester. What a turnout! @CRMUGUK @CRMUG_UK_NW #msdyn365 pic.twitter.com/KCjiqH8x5g\n\u0026mdash; Chris Huntingford 🚀 (@CNHuntingford) June 20, 2018 I very much got the impression that the amount of Dynamics 365 Customer Engagement (D365CE) users in the North West far exceed any number you could assume, and I am really looking forward to seeing how future events develop as we (hopefully!) get more people involved. 
Despite a few technical glitches with the AV facilities, the feedback we have received to both presentations has been overwhelmingly positive, so a huge thanks to everyone who turned up and to our presenters for the evening\nIn this post, I wanted to share my thoughts on both sets of presentations, provide an answer to some of the questions that we didn\u0026rsquo;t get around to due to time constraints and, finally, provide a link to the slide deck from the evening.\nTransform Group - The Patient Journey The first talk of the evening was provided courtesy of Bill Egan at Edgewater Fullscope, who took us through Transform Group\u0026rsquo;s adoption of D365CE. Bill provided some really useful insights - from both an organisation and a Microsoft partner\u0026rsquo;s perspective - of the challenges that any business can face when moving across to a system like D365CE. As with any IT project, there were some major hurdles along the way, but Bill very much demonstrated how the business was able to roll with the punches and the very optimistic 16 weeks planned deployment presents an, arguably, essential blueprint in how IT projects need to be executed; namely, targeted towards delivering as much business benefit in a near immediate timeframe.\nFully underway now with the first EVER North West #MSDYN365 @CRMUG_UK_NW meeting. Bill Egan is talking us through the patient journey at Transform Group. pic.twitter.com/CbZWtMULRw\n\u0026mdash; Joe Griffin | #ProCodeNoCodeUnite (@joejgriffin) June 20, 2018 This is packed out! Who hoo. #crmuguknw launch welcomes Bill Egan from @fullscopecrm to take us through ‘The Patient Journey’ #msdyn365 pic.twitter.com/myX34uIa52\n\u0026mdash; D365PPUG Manchester UK (@D365MCRUK) June 20, 2018 The key takeaways from me out of all this was in emphasising the importance of adapting projects quickly to changing business priorities and to recognise the continued effort required to ensure that business systems are regularly reviewed and updated to suit the requirements of not just the users, but the wider business.\nPower Up Your Power Apps The second presentation was literally a \u0026ldquo;head to head\u0026rdquo; challenge with Craig Bird from Microsoft and Chris \u0026ldquo;The Tattooed CRM Guy\u0026rdquo; Huntingford from Hitachi Solutions, seeing who could build the best PowerApps. In the end, the voting was pretty unanimous and Craig was the proud recipient of a prize worthy of a champion. I hope Craig will be wearing his belt proudly at future events 🙂\nCraig Bird our #PowerApps winner. Here\u0026#39;s to the next session on 03 October! #MSDyn365 #crmuguknw pic.twitter.com/GFS1L4525w\n\u0026mdash; D365PPUG Manchester UK (@D365MCRUK) June 20, 2018 I found the presentation particularly useful in clearing up a number of worries I had around the Common Data Service and the future of D365CE. The changes that I saw are very much emphasised towards providing a needed facelift to the current customisation and configuration experience within D365CE, with little requirement to factor in migration and extensive learning of new tools to ensure that your D365CE entities are available within the Common Data Service. Everything \u0026ldquo;just works\u0026rdquo; and syncs across flawlessly.\nUseful to know difference between canvas vs model driven #PowerApps, learn something new every day! pic.twitter.com/4hbADbzctL\n\u0026mdash; Joe Griffin | #ProCodeNoCodeUnite (@joejgriffin) June 20, 2018 #PowerBI in a #PowerApps canvas app - YES!!! 
🤩🤩 pic.twitter.com/6IACDfzGXb\n\u0026mdash; Joe Griffin | #ProCodeNoCodeUnite (@joejgriffin) June 20, 2018 In terms of who had the best app, I think Craig literally knocked the socks off everyone with his translator application. Although I include myself in this category, I was still surprised to see that PowerApps supports Power BI embedded content, courtesy of Chris - a really nice thing to take away for any aspirational PowerApp developer.\nQuestions \u0026amp; Answers We managed to get around to most questions for the first presentation but not for the second one. Here\u0026rsquo;s a list of all the questions that I am able to provide an answer to. I\u0026rsquo;m still in the process of collating together responses to the other questions received, so please keep checking back if you\u0026rsquo;re burning question is not answered below:\nCan you create publisher prefixes in PowerApps? Yes, similar to how things currently work within D365CE, this is definable across both PowerApps and D365CE. Please refer to the following article to find out how to do this in PowerApps. How do you embed an app into a dashboard? Gustaf Westerlund has done a great post showing how to embed a PowerApps as part of a D365CE Web Resource. Using this method, it should then be a simple case of adding the newly created Web Resource to your Dashboard of choice. How much is a PowerApps license? Pricing depends on whether you are wanting to develop canvas or model-driven apps. For the former, Plan 1 will give you everything you need at £5.30 user/month. To take things further with model-driven apps, then Plan 2 will be your best choice at £30.20 user/month. Plan 2 is also included as part of select Dynamics 365 plans. Additional discounts may apply for charity/education organisations or if you are transacting your licenses through the Cloud Solutions Provider (CSP) programme. You can find out more about licensing on the PowerApps website. Presentation For those who missed the event or are wanting to view the slides without a purple tinge, they will be downloadable for the next 30 days from the following location:\nhttps://jamesgriffin-my.sharepoint.com/:p:/g/personal/joe_griffin_gb_net/EbRAws0urypMkrGyqCzoTdMB4ggjUQI4_npQlEZAYhea4w?e=U3lvf5\nLooking Ahead The next chapter meeting is scheduled to take place on the 2nd of October (venue TBC). If you are interested in getting involved, either through giving a presentation or in helping to organise the event, then please let us know by dropping us a message:\nEmail: crmuguknw@gmail.com Twitter: @CRMUG_UK_NW ","date":"2018-06-24T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/d365ug-crmug-manchester-nw-uk-chapter-meeting-20th-june-2018/","title":"D365UG/CRMUG Manchester (NW UK) Chapter Meeting - 20th June 2018"},{"content":"If you are heavily involved with the management and deployment of Office 365 Click-to-Run installation packages on a daily basis, the chances are that you have come across all sorts of weird and wonderful error messages throughout your travels. 
Although I am admittedly a novice in this area, I know there is oft the requirement to resort to registry modifications or similar to achieve certain kinds of management tasks, along with a whole list of other quirks that can test the patience of even the most ardent of IT professionals and frustrate what may appear to be simplistic Office 365 deployments.\nCase in point - the installation of the downloadable Click-to-Run Office 365 installation package (available from the Office 365 portal) can be complicated when installing it via the default Administrator account on a Windows machine. When attempting to do this, either by double-clicking the executable file or executing it using Run as administrator privileges, you may get the following error message displayed below:\nThe error can occur in any version of Windows, up to and including Windows 10 1803 Build. The issue appears to pertain to the default Administrator account that is created on the operating system and can be observed occurring when creating a new Windows 10 Virtual Machine on Azure. It\u0026rsquo;s possible that the error may also occur in desktop versions of the operating system, depending on how the initial local administrator user account is deployed. There are a couple of ways to resolve this error, depending on your preferred method, familiarity with working with the inner engines of Windows and your inclination towards either a quick or \u0026ldquo;proper\u0026rdquo; solution.\nThe quick and easy solution This route involves the creation of a new user account on the machine, which can then be logged into and used to install the application via User Account Control elevated privileges. The steps to achieve this are as follows:\nType Windows + R on the start menu to open the Run box. Enter lusrmgr.msc in the Run box and hit enter. This will open the Local Users and Groups Microsoft Management Console (MMC) snap-in. Right-click on the Users folder and select New User\u0026hellip; Create a new user account with the required details and password. Ensure that the account is not disabled via the Account is disabled button and click Create to\u0026hellip;well\u0026hellip;create the account. 🙂 Log out of Windows and log back in as the new user. When attempting to run the installation package again, you should be (correctly) prompted to enter Administrator credentials via the UAC control dialog and the installation should start successfully. The proper solution Although the above steps are perfectly acceptable and straightforward to follow if you are in a rush, they do not address the underlying problem with the default Administrator account - namely, that it will have issues installing any application that requires elevated privileges. In most cases, where an application requires installation onto a machine, it is generally better to login as the Administrator user account as opposed to relying solely on the UAC elevation prompt. As a consequence, the most ideal solution to save you from any future hassle is to fix the issue with the default Administrator account permanently.\nFollowing some research online and testing on my side, I found this article which goes through the steps that will successfully address the problem and enable you to install Office 365 - and any other program - without issue. In my specific example, I had to follow the instructions listed in Method 2 and 3, followed by a restart of the machine in question, before I was able to install Office 365 successfully. 
Although the steps involved are a little more complex and error-prone, the article does provide clear instructions, along with screenshots, to help you along the way.\nConclusions or Wot I Think I recently attended a Microsoft Partner training day covering Microsoft 365 Business and Enterprise and the topic of Office 365 came up regularly, as you can imagine. The deployment settings afforded to you via Microsoft 365 let you perform automated actions in a pinch, with perhaps the most common one being the removal of any local administrator privileges when a new machine is deployed using your organisation\u0026rsquo;s chosen template. As our instructor pointed out, this is incompatible with how Office 365 installations operate; because, as we have seen, full administrative privileges are required to install the software. We, therefore, find ourselves in this strange state of affairs where the Microsoft 365 solution as a whole is in a glass half full (or half empty, if you are feeling pessimistic) situation and, more generally, Office 365 deployments are hampered due to requiring local or domain level Administrator privileges. I would hope that, eventually, Microsoft would look to providing an installation package of Office that does not require such extensive permissions to install. Doing this would kill two birds with one stone - it would help to make the deployment of all the components of Microsoft 365 a breeze whilst also avoiding the error message discussed in this post. Here\u0026rsquo;s hoping that we see this change in the future to avoid interesting little side-journeys like this when deploying Office 365.\n","date":"2018-06-17T00:00:00Z","image":"/images/Microsoft365-FI.png","permalink":"/administrative-privileges-required-when-installing-office-365-click-to-run-as-administrator/","title":"'Administrative Privileges Required' When Installing Office 365 Click-To-Run as Administrator"},{"content":"The Voice of the Customer (VoC) solution, available as part of Dynamics 365 Customer Engagement (D365CE), works most effectively when you are tightly integrating your survey\u0026rsquo;s around other features or datasets stored within the application. That\u0026rsquo;s not to say that it must only ever be utilised in tandem as opposed to isolation. If you have the requirement to quickly send out a survey to a small list of individuals (regardless of whether they are D365CE Contact records), VoC presents a natural choice if you are already paying for D365CE Online, as it is included as part of your subscription cost. As services such as SurveyMonkey tend to charge money to let you develop more complex, bespoke branded surveys, VoC, by comparison, offers all of this to you at no additional cost. Just don\u0026rsquo;t be buying licenses to use only this specific piece of functionality. 🙂 Ultimately, the upside of all this is that VoC represents a solid solution in and of itself, but working with the solution in this manner is just the icing on top of the cake. When you start to take into account the myriad of different integration touchpoints that VoC can instantly support, thanks to its resident status within D365CE, this is where things start to get really exciting. With this in mind, you can look to implement solutions that:\nSend out surveys automatically via e-mail. Route survey responses to specific users, based on answers to certain questions or the Net Promoter Score (NPS) of the response. Tailor a specific email to a customer that is sent out after a survey is completed. 
Include survey data as part of an Azure Data Export Profile and leverage the full power of T-SQL querying to get the most out of your survey data. It is in the first of these scenarios - sending out surveys via a WorkFlow - that you may find yourself encountering the error referenced in the title of this post. The error can occur when you look to take advantage of the survey snippet feature as part of an Email Template - in layman\u0026rsquo;s terms, the ability to send out a survey automatically and tag the response back to the record that it is triggered from. To implement this, you would look towards creating a WorkFlow that looks similar to the below:\nThen, either the desired email message is typed out or a template is specified that contains the survey snippet code, available from a published survey\u0026rsquo;s form:\nAll of this should work fine and dandy up until the user in question attempts to trigger the workflow; at which point, we see the appropriate error message returned when viewing the workflow execution in the System Jobs area of the application:\nThose who are familiar with errors like these in D365CE will instantly realise that this is a security role permission issue. In the above example, the user in question had not been granted the Survey User role, which is included as part of the solution and gives you the basic \u0026ldquo;set menu\u0026rdquo; of permissions required to work with VoC. A short while following on from rectifying this, we tried executing the workflow again and the error still occurred, much to our frustration. Our next step was to start combing through the list of custom entity privileges on the Security Role tab to see if there was a permission called Azure Deployment or similar. Much to our delight, we came across the following permission which looked like a good candidate for further scrutiny:\nWhen viewing the Survey User role, the Read permission for this organization-owned custom entity was not set. By rectifying this and attempting to run the workflow again, we saw that it executed successfully 🙂\nIt seems a little odd that the standard user security role for the VoC module is missing this privilege for an arguably key piece of functionality. In our situation, we simply updated the Survey User security role to include this permission and cascaded this change accordingly across the various dev/test/prod environments for this deployment. You may also choose to add this privilege to a custom security role instead, thereby ensuring that it is properly transported during any solution updates. Regardless, with this issue resolved, a really nice piece of VoC functionality can be utilised to streamline the process of distributing surveys out to D365CE customer records.\n","date":"2018-06-10T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/principal-user-is-missing-prvreadmsdyn_azuredeployment-privilege-error-in-voice-of-the-customer/","title":"'Principal user is missing prvReadmsdyn_azuredeployment privilege' Error in Voice of the Customer"},{"content":"Repeatable and time-consuming tasks are typically excellent candidates for automation. The range of business benefits that can be realised is perhaps too broad to list, but I think that the simple ability to free up an individual\u0026rsquo;s time to accomplish something better represents the ideal end goal of such activity.
I have generally found that the best kind of automation is when there is a degree of human involvement on a very minimal basis - what I would term \u0026ldquo;keeping the brain involved\u0026rdquo; and not blindly assuming that the computer will always make the correct choice. A lot of the tools afforded to us when working with Microsoft cloud technologies appear to be very firmly rooted within this mindset, with frameworks such as PowerShell providing the means of carrying out a sequence of tasks far quicker than a human could achieve, whilst also providing the mechanism to facilitate human involvement at key steps during any code execution cycle.\nWhen creating a Web App via the Azure portal, you have the option of specifying the creation of an Application Insights resource, which will be automatically associated with your newly created Web App during the deployment. In most cases, you are going to want to take advantage of what this service can deliver to your application in terms of monitoring, usage patterns and error detection; the fact that I am such a major proponent of Application Insights should come as no surprise to regular readers of the blog. Should you find yourself having to deploy both of these resources in tandem via PowerShell, your first destination will likely be the New-AzureRmAppServicePlan \u0026amp; New-AzureRmWebApp cmdlets. For example, the following script, when executed, will create a Basic App Service Plan and Web App in the UK South region called MyWebsite, contained within a resource group with the same name:\nNew-AzureRMResourceGroup -Name \u0026#39;MyWebsite\u0026#39; -Location \u0026#39;UK South\u0026#39; New-AzureRmAppServicePlan -Name \u0026#39;MyWebsite\u0026#39; -ResourceGroupName \u0026#39;MyWebsite\u0026#39; -Location \u0026#39;UK South\u0026#39; -Tier \u0026#39;Basic\u0026#39; New-AzureRmWebApp -Name \u0026#39;MyWebsite\u0026#39; -ResourceGroupName \u0026#39;MyWebsite\u0026#39; -Location \u0026#39;UK South\u0026#39; -AppServicePlan \u0026#39;MyWebsite\u0026#39; Next comes the creation of the Application Insights resource, which you would be forgiven for thinking could be created as part of one of the cmdlets above (à la the portal). Instead, we must resort to a generic cmdlet that can be tinkered with to create any resource on the Azure platform, per the instructions outlined in this article. Therefore, the following cmdlet needs to be executed next to create an Application Insights resource using identical parameters defined for the App Service Plan/Web App:\n$appInsights = New-AzureRmResource -ResourceName \u0026#39;MyWebsite\u0026#39; -ResourceGroupName \u0026#39;MyWebsite\u0026#39; ` -Tag @{ applicationType = \u0026#39;web\u0026#39;; applicationName = \u0026#39;MyWebsite\u0026#39;} ` -ResourceType \u0026#39;Microsoft.Insights/components\u0026#39; -Location \u0026#39;UK South\u0026#39; ` -PropertyObject @{\u0026#39;Application_Type\u0026#39;=\u0026#39;web\u0026#39;} -Force It\u0026rsquo;s worth pointing out at this stage that you may get an error returned along the lines of No registered resource provider found for location\u0026hellip; when executing the New-AzureRmResource cmdlet. This is because not all resource providers are automatically registered for use via PowerShell on the Azure platform. This can be resolved by executing the below cmdlets to create the appropriate registration on your subscription.
This can take a few minutes to update on the platform:\n#Check to see if the Microsoft.Insights provider has a RegistrationState value of Registered. #If not, execute Register-AzureRmResourceProvider to get it added. #Then, keep running the first cmdlet until the registration is confirmed. Get-AzureRmResourceProvider | Where ProviderNamespace -eq \u0026#39;microsoft.insights\u0026#39; Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Insights We now have a Web App and Application Insights resource deployed onto Azure. But, at this juncture, the Web App and Application Insights resources exist in isolation, with no link between them. To fix this, the final step involves updating the newly created Web App resource with the Application Insights Instrumentation Key, which is generated once the resource is created. Because the above snippet is storing all details of the newly created resource within the $appInsights variable, we can very straightforwardly access this property and add a new application setting via the following cmdlets:\n$appSetting = @{\u0026#39;APPINSIGHTS_INSTRUMENTATIONKEY\u0026#39;= $appInsights.Properties.InstrumentationKey} Set-AzureRmWebApp -Name \u0026#39;MyWebsite\u0026#39; -ResourceGroupName \u0026#39;MyWebsite\u0026#39; -AppSettings $appSetting With this final step accomplished, the resources are now associated together and this should be reflected accordingly when viewed in the portal. For completeness, the entire script to achieve the above (also including the necessary login steps) can be seen below:\nSet-ExecutionPolicy Unrestricted Login-AzureRmAccount New-AzureRMResourceGroup -Name \u0026#39;MyWebsite\u0026#39; -Location \u0026#39;UK South\u0026#39; New-AzureRmAppServicePlan -Name \u0026#39;MyWebsite\u0026#39; -ResourceGroupName \u0026#39;MyWebsite\u0026#39; -Location \u0026#39;UK South\u0026#39; -Tier \u0026#39;Basic\u0026#39; New-AzureRmWebApp -Name \u0026#39;MyWebsite\u0026#39; -ResourceGroupName \u0026#39;MyWebsite\u0026#39; -Location \u0026#39;UK South\u0026#39; -AppServicePlan \u0026#39;MyWebsite\u0026#39; #Check to see if the Microsoft.Insights provider has a RegistrationState value of Registered. #If not, execute Register-AzureRmResourceProvider to get it added. #Then, keep running the first cmdlet until the registration is confirmed. Get-AzureRmResourceProvider | Where ProviderNamespace -eq \u0026#39;microsoft.insights\u0026#39; Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Insights $appInsights = New-AzureRmResource -ResourceName \u0026#39;MyWebsite\u0026#39; -ResourceGroupName \u0026#39;MyWebsite\u0026#39; ` -Tag @{ applicationType = \u0026#39;web\u0026#39;; applicationName = \u0026#39;MyWebsite\u0026#39;} ` -ResourceType \u0026#39;Microsoft.Insights/components\u0026#39; -Location \u0026#39;UK South\u0026#39; ` -PropertyObject @{\u0026#39;Application_Type\u0026#39;=\u0026#39;web\u0026#39;} -Force $appSetting = @{\u0026#39;APPINSIGHTS_INSTRUMENTATIONKEY\u0026#39;= $appInsights.Properties.InstrumentationKey} Set-AzureRmWebApp -Name \u0026#39;MyWebsite\u0026#39; -ResourceGroupName \u0026#39;MyWebsite\u0026#39; -AppSettings $appSetting The above example is interesting in the sense that Application Insights does not have a set of dedicated cmdlets for creating, retrieving and updating the resource. Instead, we must rely on fairly generic cmdlets - and their associated complexity - to work with this resource type.
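For instance, reading the Instrumentation Key back out of the platform at a later date involves the same generic approach. The below snippet is purely an illustrative sketch - it reuses the MyWebsite names from this post and assumes the Get-AzureRmResource and Get-AzureRmWebApp cmdlets are available in your AzureRm module version, so do verify it against your own environment:
#Read the Application Insights resource back, including its properties, via the generic resource cmdlet
$ai = Get-AzureRmResource -ResourceGroupName \u0026#39;MyWebsite\u0026#39; -ResourceName \u0026#39;MyWebsite\u0026#39; -ResourceType \u0026#39;Microsoft.Insights/components\u0026#39; -ExpandProperties
$ai.Properties.InstrumentationKey
#Confirm that the key has been applied to the Web App as an application setting
$webApp = Get-AzureRmWebApp -Name \u0026#39;MyWebsite\u0026#39; -ResourceGroupName \u0026#39;MyWebsite\u0026#39;
$webApp.SiteConfig.AppSettings | Where-Object { $_.Name -eq \u0026#39;APPINSIGHTS_INSTRUMENTATIONKEY\u0026#39; }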
It also seems somewhat counter-intuitive that there is no option as part of the New-AzureRmWebApp cmdlet to create an Application Insights resource alongside the Web App, as we have established the ability to carry this out via the Azure portal. Being able to specify this as an additional parameter (that would also perform the required steps involving the Instrumentation Key) would help to greatly simplify what must be a fairly common deployment scenario. As a service that receives regular updates, we can hope that Microsoft eventually supports one or both of these scenarios to ensure that any complexity towards deploying Application Insights resources in an automated release is greatly reduced.\n","date":"2018-06-03T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/automating-the-deployment-of-an-azure-web-app-and-application-insights-resource-powershell/","title":"Automating the Deployment of an Azure Web App and Application Insights Resource (PowerShell)"},{"content":"PowerApps is very much the in vogue topic at the moment, particularly if you are a Dynamics 365 Customer Engagement professional reconciling yourself with the new state of affairs. The previous sentence may sound negative but is very much contrary to my opinion on PowerApps, which I am finding increased use cases for each day when working to address certain business requirements. Whether you are looking to implement a barcode scanning application or something much more expansive, the set of tools that PowerApps provides from the outset means that traditional developers can very easily achieve solutions that would previously take Visual Studio and a whole breadth of programming knowledge to realise.\nOn the topic of developers, one thing that they may have to come to terms with when working with PowerApps is the inability to display dialog messages within the app when some kind of alert needs to be provided to the user. A typical scenario for this could be to request that the user completes all fields on a form before moving to the next screen, ensuring that an appropriate message is displayed reflecting this fact. Whilst there is no current way of achieving this via a dedicated pop-up control or similar, there are ways that this behaviour can be imitated using existing Label controls and a bit of function wizardry.\nThe best way to illustrate how to accomplish this is to view an example PowerApps app. Below are screenshots of a very simple two-screen app, with several Text Input controls, a Button and a Label control:\nThe required functionality of the app is to allow navigation to the \u0026lsquo;Thanks for your submission!\u0026rsquo; screen only if the user has entered data into all of the Text Input controls on the first screen; if this condition is not met, then the user should be prevented from moving to the next screen and an error message should be displayed to advise the user accordingly.\nThe first step is to create the error message and required text on the first screen. This can be straightforwardly achieved via an additional Label control, with some text formatting and colour changes to make it noticeable to the user.\nAs with many other controls within PowerApps, you have the ability to toggle the visibility of Labels either on a consistent or variable basis. The Visible property is your go-to destination for this and, as the next step, the value of this field should be updated to read ErrorVisible - the name of a variable storing the state of the control\u0026rsquo;s visibility (either true or false).
If done correctly, as indicated in the screenshot below, you will notice that the Label control will immediately disappear from the screen on the right. This is because the default value of the newly specified variable is false.\nThe next step involves the invocation of a somewhat complex PowerApps function to implement the required logic on the Button control. The entire function to use is reproduced below in its entirety:\nIf( Or( IsBlank(TextInput1.Text), IsBlank(TextInput2.Text), IsBlank(TextInput3.Text), IsBlank(TextInput4.Text) ), Set(ErrorVisible, true), Navigate(Screen2,ScreenTransition.Fade) ) To break the above down for those who are unfamiliar with working with functions:\nIsBlank does exactly what it says on the tin, but it\u0026rsquo;s important to emphasise that to check whether a field contains a value or not, you have to specify the Text property of the control. The Set function enables us to specify the values of variables on the fly, whether they have been declared already or not somewhere else on the app. No additional syntax is required, making it very straightforward to create runtime variables that store values throughout an entire app session. Navigate specifies any other screen on the app to open. We can also select a transition effect to use which, in this case, is the Fade transition. Finally, all of the above is wrapped around an If logic statement that prevents the user from moving to the next screen if any of the Text Input controls do not contain a value (courtesy of the Or statement). The function needs to be entered within the OnSelect property of the button control, and your form should resemble the below if done correctly:\nWith everything configured, its time to give the app a test drive. 🙂 The sequence below provides a demonstration of how the app should work if all of the above steps are followed:\nA final, potentially optional step (depending on what your app is doing), is to ensure that the error message is hidden as soon as the user navigates away from the 1st screen. This can be achieved by updating the ErrorVisible variable back to false as soon as the user navigates onto the second screen, as indicated in the screenshot below:\nConclusions or Wot I Think\nPowerApps is still very much a product in its infancy, very neatly fitting into the new wave of Microsoft products with regular release cycles, feature updates and ongoing development. It can be, therefore, unrealistic to expect a full range of features which satisfy all business scenarios to be available at this juncture. Having said that, one feature that could be added to greatly benefit data entry into forms is the ability to display pop-out dialog messages, depending on field requirement levels or other conditional logic. The key benefit of this would be the need not to resort to complex functions to achieve this functionality and to, instead, allow error messages or alerts to be configured via the PowerApps GUI. A similar comparison can perhaps be made with Business Rules in Dynamics 365 Customer Engagement. Before their introduction, developers would have to resort to JScript functions to display form-level alerts based on conditional logic. Not everyone is familiar with JScript, meaning a significant barrier was in place for those looking to implement arguably straightforward business logic. 
Now, with Business Rules, we have the ability to replicate a lot of functionality that JScript allows for, speeding up the time it takes to implement solutions and providing a much clearer mechanism of implementing straightforward business logic. Hopefully, in the months ahead, we can start to see a similar type of feature introduced within PowerApps to aid in developing a similar solution demonstrated in this post.\n","date":"2018-05-27T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/display-data-input-error-messages-within-powerapps/","title":"Display Data Input Error Messages within PowerApps"},{"content":"When considering whether or not to shift your existing SQL workloads to a single database offering on Azure SQL, one of the major pros is the breadth of capabilities the service can offer when compared with other vendors or in comparison to SQL Server on an Azure Virtual Machine. A list of these may include:\nHigh feature parity with the latest on-premise SQL Server offering. Built-in support for Enterprise product features, such as Transparent Database Encryption. Security management features, such as firewalls and (optional) integration with Azure SQL Database Threat Detection for proactive monitoring. Ability to quickly scale a database from a 2GB database with low CPU consumption to a mammoth 4TB database, with a significant pool of CPU/memory resources to match. It is the last one of these that makes Azure SQL database a particularly good fit for web application deployments that have unpredictable user loads at the time of deployment or, as we have seen previously on the blog, when you are wanting to deploy out a LOB reporting database that houses Dynamics 365 Customer Engagement instance data. Administrators can very straightforwardly scale or downscale a database at any time within the portal or, if you are feeling particularly clever, you can look to implement automatic scaling based on Database Throughput Unit (DTU) consumption. This can aid towards making your query execution times as speedy as possible.\nDatabase scaling, I have found, is very straightforward to get your head around and works like a charm for the most part\u0026hellip;except, of course, when you get rather cryptic error messages like the one demonstrated below:\nI got this error recently when attempting to scale an S0 5GB database down to Basic 2GB tier. To cut a long story short, I had temporarily scaled up the database to give me increased DTU capacity for a particularly intensive query, and wanted to scale it back to its original pricing tier. You can perhaps understand my confusion about why this error was occurring. After further research and escalation to Microsoft, it turns out that the database was still consuming unused disk space on the platform, thereby violating any size limits imposed by moving to a lower price tier. To resolve the issue, there are some tasks that need to be performed on the database to get it into a \u0026ldquo;downscale-ready state\u0026rdquo;. These consist of a series of T-SQL scripts, which I would caution against using if the database is currently in use, due to potential performance impacts. If you have found yourself in the same boat as me and are happy to proceed, the steps involved are as follows:\nTo begin with, the script below will execute the DBCC SHRINKDATABASE command against the database, setting the database file max size to the value specified on the @DesiredFileSize parameter. 
The script is compiled so as to perform the shrinking in \u0026ldquo;chunks\u0026rdquo; based on the value of the @ShrinkChunkSize parameter, which may be useful in managing DTU consumption: SET NOCOUNT ON DECLARE @CurrentFileSize INT, @DesiredFileSize INT, @ShrinkChunkSize INT, @ActualSizeMB INT, @ErrorIndication INT, @dbFileID INT = 1, @LastSize INT, @SqlCMD NVARCHAR(MAX), @msg NVARCHAR(100) /*Set these values for the current operation, size is in MB*/ SET @DesiredFileSize = 2000 /* filesize is in MB */ SET @ShrinkChunkSize = 50 /* chunk size is in MB */ SELECT @CurrentFileSize = size/128 FROM sysfiles WHERE fileid = @dbFileID SELECT @ActualSizeMB = (SUM(total_pages) / 128) FROM sys.allocation_units SET @msg = \u0026#39;Current File Size: \u0026#39; + CAST(@CurrentFileSize AS VARCHAR(10)) + \u0026#39;MB\u0026#39; RAISERROR(@msg,0,0) WITH NOWAIT SET @msg = \u0026#39;Actual used Size: \u0026#39; + CAST(@ActualSizeMB AS VARCHAR(10)) + \u0026#39;MB\u0026#39; RAISERROR(@msg,0,0) WITH NOWAIT SET @msg = \u0026#39;Desired File Size: \u0026#39; + CAST(@DesiredFileSize AS VARCHAR(10)) + \u0026#39;MB\u0026#39; RAISERROR(@msg,0,0) WITH NOWAIT SET @msg = \u0026#39;Interation shrink size: \u0026#39; + CAST(@ShrinkChunkSize AS VARCHAR(10)) + \u0026#39;MB\u0026#39; RAISERROR(@msg,0,0) WITH NOWAIT SET @ErrorIndication = CASE WHEN @DesiredFileSize \u0026gt; @CurrentFileSize THEN 1 WHEN @ActualSizeMB \u0026gt; @DesiredFileSize THEN 2 ELSE 0 END IF @ErrorIndication = 1 RAISERROR(\u0026#39;[Error] Desired size bigger than current size\u0026#39;,0,0) WITH NOWAIT IF @ErrorIndication = 2 RAISERROR(\u0026#39;[Error] Actual size is bigger then desired size\u0026#39;,0,0) WITH NOWAIT IF @ErrorIndication = 0 RAISERROR(\u0026#39;Desired Size check - OK\u0026#39;,0,0) WITH NOWAIT SET @LastSize = @CurrentFileSize + 1 WHILE @CurrentFileSize \u0026gt; @DesiredFileSize /*check if we got the desired size*/ AND @LastSize\u0026gt;@CurrentFileSize /* check if there is progress*/ AND @ErrorIndication=0 BEGIN SET @msg = CAST(GETDATE() AS VARCHAR(100)) + \u0026#39; - Iteration starting\u0026#39; RAISERROR(@msg,0,0) WITH NOWAIT SELECT @LastSize = size/128 FROM sysfiles WHERE fileid = @dbFileID SET @sqlCMD = \u0026#39;DBCC SHRINKFILE(\u0026#39;+ CAST(@dbFileID AS VARCHAR(7)) + \u0026#39;,\u0026#39; + CAST(@CurrentFileSize-@ShrinkChunkSize AS VARCHAR(7)) + \u0026#39;) WITH NO_INFOMSGS;\u0026#39; EXEC (@sqlCMD) SELECT @CurrentFileSize = size/128 FROM sysfiles WHERE fileid =@dbFileID SET @msg = CAST(getdate() AS VARCHAR(100)) + \u0026#39; - Iteration completed. current size is: \u0026#39; + CAST(@CurrentFileSize AS VARCHAR(10)) RAISERROR(@msg,0,0) WITH NOWAIT END PRINT \u0026#39;Done\u0026#39; With the database successfully shrunk, verify that the size of the database does not exceed your target @DesiredFileSize value by running the following query: SELECT * FROM sys.database_files SELECT (SUM(reserved_page_count) * 8192) / 1024 / 1024 AS DbSizeInMB FROM sys.dm_db_partition_stats Although by this stage, the database file sizes should be underneath 2GB, the maximum size of the database is still set to match the pricing tier level. 
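The same check can also be performed from outside of the database. As a rough sketch only - the resource group, server and database names below are placeholders, and the Get-AzureRmSqlDatabase cmdlet from the AzureRm module is assumed to be available - the current edition, service objective and tier-level maximum size can be read via PowerShell:
Login-AzureRmAccount
$db = Get-AzureRmSqlDatabase -ResourceGroupName \u0026#39;MyResourceGroup\u0026#39; -ServerName \u0026#39;mysqlserver\u0026#39; -DatabaseName \u0026#39;MyDatabase\u0026#39;
#MaxSizeBytes reflects the tier-level maximum size, not the space actually consumed by the data files
$db | Select-Object Edition, CurrentServiceObjectiveName, MaxSizeBytes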
To fix this, execute the following script, substituting the name of your database where appropriate: ALTER DATABASE MyDatabase MODIFY (MAXSIZE=2GB) You can confirm that this command has been executed successfully by then running the following query and reviewing the output:\nSELECT CAST(DATABASEPROPERTYEX (\u0026#39;MyDatabase\u0026#39;, \u0026#39;MaxSizeInBytes\u0026#39;) AS FLOAT)/1024.00/1024.00/1024.00 AS \u0026#39;DB Size in GB\u0026#39; With the above commands executed, you are now in a position to scale down your database without issue. There are a few ways this can be done but, as you likely already have SQL Server Management Studio or similar open to run the above queries, you can modify the tier of your database via this handy script: --Scaling down to Basic is easy, as there is only one Max Size/Service Level Objective --Therefore, just specify Edition ALTER DATABASE MyDatabase MODIFY (EDITION = \u0026#39;Basic\u0026#39;); --For other tiers, specify the size of the DB. --In this example, we are scaling down from Premium P1 1TB to Standard S2 250GB tier ALTER DATABASE MyDatabase MODIFY (EDITION = \u0026#39;Standard\u0026#39;, MAXSIZE = 250 GB, SERVICE_OBJECTIVE = \u0026#39;S2\u0026#39;); Although the script will likely execute immediately and indicate as such in any output, the actual scaling operation on the backend Azure platform can take some time to complete - usually about 5-10 minutes for lower sized databases.\nWhilst I was relieved that a workaround was available to get the database scaled down correctly, it would have been useful if the above error message was signposted better or if there was some kind of online support article that detailed that this could be a potential issue when moving a database between various pricing/sizing tiers. Hopefully, by sharing the above steps, others who in the same boat can very quickly diagnose and resolve the issue without hammering your credit card with increased database usage charges in the process. 🙂\n","date":"2018-05-20T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/an-unexpected-error-occured-while-processing-the-request-error-when-downscaling-an-azure-sql-database/","title":"'An unexpected error occured while processing the request' Error when Downscaling an Azure SQL Database"},{"content":"Back only a few years ago, when events such as a reality TV star becoming President of the USA were the stuff of fantasy fiction, Microsoft had a somewhat niche Customer Relationship Management system called Dynamics CRM. Indeed, the very name of this blog still attests to this fact. The \u0026ldquo;CRM\u0026rdquo; acronym provides a succinct mechanism for describing the purposes of the system without necessarily confusing people straight out of the gate. For the majority of its lifespan, \u0026ldquo;CRM\u0026rdquo; very accurately summarised what the core system was about - namely, a sales and case management system to help manage customer relations.\nAlong the way, Microsoft acquired a number of organisations and products that offered functionality that, when bolted onto Dynamics CRM, greatly enhanced the application as a whole. I have discussed a number of these previously on the blog, so I don\u0026rsquo;t propose to retread old ground. 
However, some notable exceptions from this original list include:\nVoice of the Customer: Slightly different from other acquisitions in the sense that it was a product, not a company, acquisition, the core functionality behind VoC was purchased from a company called Fusion Software Limited, as announced by Microsoft in late 2015. As mentioned on Fusion Software\u0026rsquo;s announcement post (courtesy of the WayBack Machine), the acquisition included the core Mojo Surveys property, which I believe was an existing ISV product for CRM at the time. Fantasy Sales Team AKA Dynamics 365 - Gamification: Later on in 2015, Microsoft announced the acquisition of Incent Games and their Fantasy Sales Team app for Dynamics CRM. The app, which is modelled off fantasy sports teams, adds a flavour of fun to the application with features such as incentives and leaderboards to drive sales team performance. Suffice to say, by the start of 2016, the range of functionality available within Dynamics CRM was growing each month and - perhaps more crucially - there was no clear mechanism in place from a billing perspective for each new solution offered. Instead, if you had a Dynamics CRM Professional license, guess what? All of the above and more was available to you at no additional cost.\nTaking this and other factors into account, the announcement in mid-2016 of the transition towards Microsoft Dynamics 365 can be seen as a welcome recognition of the new state of play and the ushering in of Dynamics CRM out of the cold to stand proud amongst the other, major Microsoft product offerings. Here\u0026rsquo;s a link to the original announcement:\nhttps://cloudblogs.microsoft.com/dynamics365/2016/07/06/insights-from-the-engineering-leaders-behind-microsoft-dynamics-365-and-microsoft-appsource/\nThe thinking behind the move was completely understandable. Dynamics CRM could no longer be accurately termed as such, as the core application was almost unrecognisable from its 2011 version. Since then, there has been a plethora of additional announcements and changes to how Dynamics 365 in the context of CRM is referred to in the wider offering. The road has been\u0026hellip;rocky, to the say the least. Whilst this can be reasonably expected with such a seismic shift, it nevertheless does present some challenges when talking about the application to end users and customers. To emphasise this fact, let\u0026rsquo;s take a look at some of the \u0026ldquo;bumps\u0026rdquo; in the road and my thoughts on why this is still an ongoing concern.\nDynamics 365 for Enterprise and Business The above announcement did not go into greater detail about how the specific Dynamics 365 offerings would be tailored. One of the advantages of the other offerings within the Office 365 range of products is the separation of business and enterprise plans. These typically allow for a reduced total cost of ownership for organisations under a particular size within an Office 365 plan, typically with an Enterprise version of the same plan available with (almost) complete feature parity, but with no seat limits. With this in mind, it makes sense that the initial detail in late 2016 confirmed the introduction of business and enterprise Dynamics 365 plans. As part of this, the CRM aspect of the offering would have sat firmly within the Enterprise plan, with - you guessed it - Enterprise pricing to match. The following article from encore at the time provides a nice breakdown of each offering and the envisioned target audience for each. 
Thus, we saw the introduction of Dynamics 365 for Enterprise as a replacement term for Dynamics CRM.\nPerhaps understandably, when many businesses - typically used to paying £40-£50 per seat for their equivalent Dynamics CRM licenses - discovered that they would have to move to Enterprise plans and pricing significantly in excess of what they were paying, there were some heads turned. Microsoft Partners also raised a number of concerns with the strategy, which is why it was announced that the Business edition and Enterprise edition labels were being dropped. Microsoft stated that they would:\n\u0026hellip;focus on enabling any organization to choose from different price points for each line of business application, based on the level of capabilities and capacity they need to meet their specific needs. For example, in Spring 2018, Dynamics 365 for Sales will offer additional price point(s) with different level(s) of functionality.\nThe expressed desire to enable organisations to \u0026ldquo;choose\u0026rdquo; what they want goes back to what I mentioned at the start of this post - providing a billing/pricing mechanism that would support the modular nature of the late Dynamics CRM product. Its introduction as a concept at this stage comes a little late in the whole conversation regarding Dynamics 365 and represents an important turning point in defining the vision for the product. Whether this took feedback from partners/customers or an internal realisation to bring this about, I\u0026rsquo;m not sure. But its arrival represents the maturity in thinking concerning the wider Dynamics 365 offering.\nDynamics 365 for Customer Engagement Following the retirement of the Business/Enterprise monikers and in a clear attempt to simplify and highlight the CRM aspect of Dynamics 365, the term Customer Engagement started to pop up across various online support and informational posts. I cannot seem to locate a specific announcement concerning the introduction of this wording, but its genesis appears to be early or mid-2017. The problem is that I don\u0026rsquo;t think there is 100% certainty yet on how the exact phrasing of this terminology should be used.\nThe best way to emphasise the inconsistency is to look to some examples. The first is derived from the name of several courses currently available on the Dynamics Learning Portal, published this year:\nNow, take a look at the title of the following Microsoft Docs article on impending changes to the application:\nNotice it yet? There seems to be some confusion about whether for should be used when referring to the Customer Engagement product. Whilst the majority of articles I have found online seem to suggest that Dynamics 365 Customer Engagement is the correct option, again, I cannot point to a definitive source that states without question the correct phraseology that should be used. Regardless, we can see here the birth of the Customer Engagement naming convention, which I would argue is a positive step forward.\nThe Present Day: Customer Engagement and its 1:N Relationships Rather handily, Customer Engagement takes us straight through to the present.
Today, when you go to the pricing website for Dynamics 365, the following handy chart is presented that helps to simplify all of the various options available across the Dynamics 365 range:\nThis also indirectly confirms a few things for us:\nMicrosoft Dynamics 365 Customer Engagement and not Microsoft Dynamics 365 for Customer Engagement looks to be the approved terminology when referring to what was previously Dynamics CRM. Microsoft Dynamics 365 is the overarching name for all modules that form a part of the overall offering. It has, by the looks of things, replaced the original Dynamics 365 for Enterprise designation. The business offering - now known as Dynamics 365 Business Central - is in effect a completely separate offering from Microsoft Dynamics 365. When rolled together, all of this goes a long way towards providing the guidance needed to correctly refer to the whole, or constituent parts, of the entire Dynamics 365 offering.\nWith that being said, can it be reliably said that the naming \u0026ldquo;crisis\u0026rdquo; has ended? My key concern through all of this is a confusing and conflicting message being communicated to customers interested in adopting the system, to the potential end result of driving them away to competitor systems. This appears to have been the case for about 1-2 years since the original Dynamics 365 announcement, and a large part of this can perhaps be explained by the insane acquisition drive in 2015/6. Now, with everything appearing to slot together nicely and the pricing platform in place to support each Dynamics 365 \u0026ldquo;module\u0026rdquo; and the overall offering, I would hope that any further change in this area is minimal. As highlighted above though, there is still some confusion about the correct replacement terminology for Dynamics CRM - is it Dynamics 365 Customer Engagement or Dynamics 365 for Customer Engagement? Answers on a postcode, please!\nAnother factor to consider amongst all of this is that naming will constantly be an issue should Microsoft go through another cycle of acquisitions focused towards enhancing the Dynamics 365 offering. Microsoft\u0026rsquo;s recent acquisition plays appear to be more focused towards providing cost optimization services for Azure and other cloud-based tools, so it can be argued that a period of calm can be expected when it comes to incorporating acquired ISV solutions into the Dynamics 365 product range.\nI\u0026rsquo;d be interested to hear from others on this subject, so please leave a comment below if you have your own thoughts on the interesting journey from Dynamics CRM to Dynamics 365 Customer Engagement\n","date":"2018-05-13T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dynamically-changing-names-understanding-dynamics-365-terminology/","title":"Dynamically Changing Names: Understanding Dynamics 365 Terminology"},{"content":"After going through a few separate development cycles involving Dynamics 365 Customer Engagement (D365CE), you begin to get a good grasp of the type of tasks that need to be followed through each time. Most of these are what you may expect - such as importing an unmanaged/managed solution into a production environment - but others can differ depending on the type of deployment. 
What ultimately emerges as part of this is the understanding that there are certain configuration settings and records that are not included as part of a Solution file and which must be migrated across to different environments in an alternate manner.\nThe application has many record types that fit under this category, such as Product or Product Price List. When it comes to migrating these record types into a Production environment, those out there who are strictly familiar with working inside the application only may choose to utilise the Advanced Find facility in the following manner:\nGenerate a query to return all of the records that require migration, ensuring all required fields are returned. Export out the records into an Excel Spreadsheet. Import the above spreadsheet into your target environment via the Data Import wizard. And there would be nothing wrong with doing things this way, particularly if your skillset sits more within a functional, as opposed to technical, standpoint. Where you may come unstuck with this approach is if you have a requirement to migrate Subject record types across environments. Whilst a sensible (albeit time-consuming) approach to this requirement could be to simply create them from scratch in your target environment, you may fall foul of this method if you are utilising Workflows or Business Rules that reference Subject values. When this occurs, the application looks for the underlying Globally Unique Identifier (GUID) of the Subject record, as opposed to the Display Name. If a record with this exact GUID value does not exist within your target environment, then your processes will error and fail to activate. Taking this into account, should you then choose to follow the sequence of tasks above involving Advanced Find, your immediate stumbling block will become apparent, as highlighted below:\nAs you can see, there is no option to select the Subject entity for querying, compounding any attempts to get them exported out of the application. Fortunately, there is a way to overcome this via the Configuration Migration tool. This has traditionally been bundled together as part of the application\u0026rsquo;s Software Development Kit (SDK). The latest version of the SDK for 8.2 of the application can be downloaded from Microsoft directly, but newer versions - to your delight or chagrin - are only available via NuGet. For those who are unfamiliar with using this, you can download version 9.0.2.3 of the Configuration Migration tool alone using the link below:\nMicrosoft.CrmSdk.XrmTooling.ConfigurationMigration.Wpf\nWith everything downloaded and ready to go, the steps involved in migrating Subject records between different D365CE environments are as follows:\nThe first step before any export can take place is to define a Schema - basically, a description of the record types and fields you wish to export. Once defined, schemas can be re-used for future export/import jobs, so it is definitely worth spending some time defining all of the record types that will require migration between environments. Select Create schema on the CRM Configuration Migration screen and press Continue. Log in to D365CE using the credentials and details for your specific environment. After logging in and reading your environment metadata, you then have the option of selecting the Solution and Entities to export. A useful aspect to all of this is that you have the ability to define which entity fields you want to utilise with the schema and you can accommodate multiple Entities within the profile.
For this example, we only want to export out the Subject entity, so select the Default Solution, the entity in question and hit the Add Entity \u0026gt; button. Your window should resemble the below if done correctly: With the schema fully defined, you can now save the configuration onto your local PC. After successfully exporting the profile, you will be asked whether you wish to export the data from the instance you are connected to. Hit Yes to proceed. At this point, all you need to do is define the Save to data file location, which is where a .zip file containing all exported record data will be saved. Once decided, press the Export Data button. This can take some time depending on the number of records being processed. The window should update to resemble the below once the export has successfully completed. Select the Exit button when you are finished to return to the home screen. You have two options at this stage - either you can either exit the application entirely or, if you have your target import environment ready, select the Import data and Continue buttons, signing in as required. All that remains is to select the .zip file created in step 5), press the Import Data button, sit back and confirm that all record data imports successfully. It\u0026rsquo;s worth noting that this import process works similarly to how the in-application Import Wizard operates with regards to record conflicts; namely, if a record with the same GUID value exists in the target instance, then the above import will overwrite the record data accordingly. This can be helpful, as it means that changes to records such as the Subject entity can be completed safely within a development context and promoted accordingly to new environments.\nThe Configuration Migration tool is incredibly handy to have available but is perhaps not one that it is shouted from the rooftops that often. It\u0026rsquo;s usefulness not just extends to the Subject entity, but also when working with the other entity types discussed at the start of this post. Granted, if you do not find yourself working much with Processes that reference these so-called \u0026ldquo;configuration\u0026rdquo; records, then introducing the above step as part of any release management process could prove to be an unnecessary administrative burden. Regardless, there is at least some merit to factor in the above tool as part of an initial release of a D365CE solution to ensure that all development-side configuration is quickly and easily moved across to your production environment.\n","date":"2018-05-06T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/importing-exporting-subject-records-between-dynamics-365-customer-engagement-environments/","title":"Importing/Exporting Subject Records between Dynamics 365 Customer Engagement Environments"},{"content":"When it comes to maintaining any kind of Infrastructure as a Service (IaaS) resource on a cloud provider, the steps involved are often more complex when compared with equivalent Platform as a Service (PaaS) offerings. This is compensated for by the level of control IaaS resources typically grant over the operating system environment and the applications that reside herein. This can be useful if, for example, your application needs to maintain a specific version of a framework/programming language and you do not want your chosen cloud provider to patch this behind the scenes, without your knowledge. 
One of the major trade-offs as part of all this, however, is the expectation that completing a comprehensive disaster recovery plan is no longer such a cakewalk, requiring instead significant effort to design, implement and test at regular intervals.\nMicrosoft Azure, like other cloud providers, offers Virtual Machines as its \u0026ldquo;freest\u0026rdquo; IaaS offering. This facilitates a whole breadth of customisation options for the underlying operating system, including the type (Windows or Linux), default software deployed and underlying network configuration. The same problems - with respect to disaster recovery - still exist and may even be compounded if your Virtual Machine is host to an application that is publicly available across the internet. Whilst you are able to make a copy of your VM somewhat quickly, there is no easy way to migrate across the public IP address of a Virtual Machine without considerable tinkering in the portal. This can lead to delays in initiating any failover or restore action, as well as the risk of introducing human error into the equation.\nFortunately, with a bit of PowerShell scripting, it is possible to fully automate this process. Say, for example, you need to restore a Virtual Machine using Managed Disks to a specific snapshot version, ensuring that the network configuration is mirrored and copied across to the new resource. The outline steps would look like this when getting things scripted out in PowerShell:\nLog in to your Azure account and subscription where the VM resides. Create a new Managed Disk from a Recovery Services Vault snapshot. Obtain the deployment properties of the existing Virtual Machine and utilise this for the baseline configuration of the new Virtual Machine. Associate the newly created Managed Disk with the configuration created in step 3. Create a placeholder Public IP Address and swap this out with the existing Virtual Machine. Define a new Network Interface for the configuration created in step 3 and associate the existing Public IP Address to this. Create a new Network Security Group for the Network Interface added in step 6, copying all rules from the existing Virtual Machine Network Security Group. Create the new Virtual Machine from the complete configuration properties. With all these steps completed, a consistent configuration is defined to create a Virtual Machine that is almost indistinguishable from the existing one and which, more than likely, has taken less than 30 minutes to create. 🙂 Let\u0026rsquo;s jump in and take a look at an outline script that will accomplish all of this.\nBefore we begin\u0026hellip; One of the pre-requisites for executing this script is that you have backed up your Virtual Machine using Recovery Services Vault and performed a recovery of a previous image snapshot to a Storage Account location. The below script also assumes the following regarding your target environment:\nYour Virtual Machine must be using Managed Disks and have only one Operating System disk attached to it. The affected Virtual Machine must be switched off and in a Stopped (deallocated) state on the platform. The newly created Virtual Machine will reside in the same Virtual Network as the existing one. The existing Network Security Group for the Virtual Machine utilises the default security rules added upon creation. That\u0026rsquo;s enough talking now!
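Well, almost - it is worth one final check that the second of these assumptions holds true before running anything. A minimal, illustrative sketch - using the same placeholder resource names as the script below and the standard AzureRm compute cmdlets - could look like the following:
#Review the PowerState status returned for the VM; it should report as deallocated
Get-AzureRmVM -ResourceGroupName \u0026#39;myresourcegroup\u0026#39; -Name \u0026#39;myexistingvm\u0026#39; -Status | Select-Object -ExpandProperty Statuses
#If the VM is still running, stop and deallocate it first
Stop-AzureRmVM -ResourceGroupName \u0026#39;myresourcegroup\u0026#39; -Name \u0026#39;myexistingvm\u0026#39; -Force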
Here\u0026rsquo;s the script: #Specify variables below: #subID: The subscription GUID, obtainable from the portal #rg: The Resource Group name of where the Virtual Machine is located #vmName: The name of the Virtual Machine #vhdURI: The URL for the restored Virtual Hard Disk (VHD) #diskName: The name for the newly created managed disk #location: Location for the newly created managed disk #storageAccountName: Name of the storage account where the restored VHD is stored #storageAccountID: The Resource ID for the storage account where the VHD is stored #containerName: Name of the container where the restored VHD is stored #blobName: Name of the restored VHD config file, in JSON format #oldNICName: Name of the existing VM Network Interface Card #newNICName: Name of the Network Interface Card to be created for the copied VM #newPIPName: Name of the new Public IP Address that will be swapped with the existing one #oldPIPName: Name of the existing Public IP Address that will be swapped out with the new one. #vnetName: Name of the Virtual Network used with the current Virtual Machine #vnetSubnet: Name of the subnet on the Virtual Network used with the current Virtual Machine. #$oldNSG: Name of the existing Network Security Group for the Virtual Machine #$newNSG: Name for the newly created Network Security Group for the new Virtual Machine #$desinationPath: Path for the VM config file to be downloaded to #$ipConfig: Name of the IP config used for the Virtual Network $subID = \u0026#39;8fb17d52-b6f7-43e4-a62d-60723ec6381d\u0026#39; $rg = \u0026#39;myresourcegroup\u0026#39; $vmName = \u0026#39;myexistingvm\u0026#39; $vhdURI = \u0026#39;https://mystorageaccount.blob.core.windows.net/vhde00f9ddadb864fbbabef2fd683fb350d/bbc9ed4353c5465782a16cae5d512b37.vhd\u0026#39; $diskName = \u0026#39;mymanagedisk\u0026#39; $location = \u0026#39;uksouth\u0026#39; $storageAccountName = \u0026#39;mystorageaccount\u0026#39; $storageAccountID = \u0026#39;/subscriptions/5dcf4664-4955-408d-9215-6325b9e28c7c/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount\u0026#39; $containerName = \u0026#39;vhde00f9ddadb864fbbabef2fd683fb350d\u0026#39; $blobName = \u0026#39;config9064da15-b889-4236-bb8a-38670d22c066.json\u0026#39; $newVMName = \u0026#39;mynewvm\u0026#39; $oldNICName = \u0026#39;myexistingnic\u0026#39; $newNICName = \u0026#39;mynewnic\u0026#39; $newPIPName = \u0026#39;myexistingpip\u0026#39; $oldPIPName = \u0026#39;mynewpip\u0026#39; $vnetName = \u0026#39;myexistingvnet\u0026#39; $vnetSubnet = \u0026#39;myexistingsubnet\u0026#39; $oldNSG = \u0026#39;myexistingnsg\u0026#39; $newNSG = \u0026#39;mynewnsg\u0026#39; $destinationPath = \u0026#39;C:\\vmconfig.json\u0026#39; $ipConfig = \u0026#39;myipconfig\u0026#39; #Login into Azure and select the correct subscription Login-AzureRmAccount Select-AzureRmSubscription -Subscription $subID #Get the VM properties that requires restoring - used later. $vm = Get-AzureRmVM -Name $vmName -ResourceGroupName $rg #Create managed disk from the storage account backup. 
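#(A couple of assumptions worth calling out at this point in the script: $vhdURI and $storageAccountID above must point at the VHD and Storage Account produced by the Recovery Services Vault restore, per the pre-requisites earlier in this post.)
#(StandardLRS is used for the new disk below; if the source VM was using premium storage, PremiumLRS may be the more appropriate account type.)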
$diskConfig = New-AzureRmDiskConfig -AccountType \u0026#39;StandardLRS\u0026#39; -Location $location -CreateOption Import -StorageAccountId $storageAccountID -SourceUri $vhdURI $osDisk = New-AzureRmDisk -Disk $diskConfig -ResourceGroupName $rg -DiskName $diskName #Download VM configuration file and define new VM configuration from this file Set-AzureRmCurrentStorageAccount -Name $storageAccountName -ResourceGroupName $rg Get-AzureStorageBlobContent -Container $containerName -Blob $blobName -Destination $destinationPath $obj = ((Get-Content -Path $destinationPath -Raw -Encoding Unicode)).TrimEnd([char]0x00) | ConvertFrom-Json $newVM = New-AzureRmVMConfig -VMSize $obj.\u0026#39;properties.hardwareProfile\u0026#39;.vmSize -VMName $newVMName #Add newly created managed disk to new VM config Set-AzureRmVMOSDisk -VM $newVM -ManagedDiskId $osDisk.Id -CreateOption \u0026#34;Attach\u0026#34; -Windows #Create new Public IP and swap this out with existing IP Address $pip = New-AzureRmPublicIpAddress -Name $newPIPName -ResourceGroupName $rg -Location $location -AllocationMethod Static $vnet = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rg $subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $vnetSubnet -VirtualNetwork $vnet $oldNIC = Get-AzureRmNetworkInterface -Name $oldNICName -ResourceGroupName $rg $oldNIC | Set-AzureRmNetworkInterfaceIpConfig -Name $ipConfig -PublicIpAddress $pip -Primary -Subnet $subnet $oldNIC | Set-AzureRmNetworkInterface #Define new VM network configuration settings, using existing public IP address, and add to configuration $existPIP = Get-AzureRmPublicIpAddress -Name $oldPIPName -ResourceGroupName $rg $nic = New-AzureRmNetworkInterface -Name $newNICName -ResourceGroupName $rg -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $existPIP.Id $newVM = Add-AzureRmVMNetworkInterface -VM $newVM -Id $nic.Id #Obtain existing Network Security Group and create a copy of it for the new Network Interface Card $nsg = Get-AzureRmNetworkSecurityGroup -Name $oldNSG -ResourceGroupName $rg $newNSG = New-AzureRmNetworkSecurityGroup -Name $newNSG -ResourceGroupName $rg -Location $location foreach($rule in $nsg.SecurityRules) { $newNSG | Add-AzureRmNetworkSecurityRuleConfig -Name $rule.Name -Access $rule.Access -Protocol $rule.Protocol -Direction $rule.Direction -Priority $rule.Priority -SourceAddressPrefix $rule.SourceAddressPrefix -SourcePortRange $rule.SourcePortRange -DestinationAddressPrefix $rule.DestinationAddressPrefix -DestinationPortRange $rule.DestinationPortRange | Set-AzureRmNetworkSecurityGroup } $nic.NetworkSecurityGroup = $newNSG $nic | Set-AzureRmNetworkInterface #Create the VM. This may take some time to complete. New-AzureRmVM -ResourceGroupName $rg -Location $location -VM $newVM Conclusions or Wot I Think Automation should be a key driver behind running an effective business and, in particular, any IT function that exists within. When architected prudently, repetitive and time wasting tasks can be eliminated and the ever pervasive risk of human error can be eliminated from business processes (unless, of course, the person defining the automation has made a mistake 🙂 ). The management of IaaS resources fits neatly into this category and, as I hope the example in this post has demonstrated, can take a particularly onerous task and reduce the complexity involved in carrying it out. This can help to save time and effort should the worst ever happen to your application. 
When compared with other cloud vendors, this is what ultimately makes Azure a good fit for organisations who are used to working with tools such as PowerShell; scenarios like this become almost a cakewalk to set up and require minimal additional study to get up and running.\n","date":"2018-04-29T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/re-creating-a-virtual-machine-with-managed-disks-using-the-same-ip-address-microsoft-azure/","title":"Re-creating a Virtual Machine with Managed Disks using the same IP Address (Microsoft Azure)"},{"content":"Typically, when working with Dynamics 365 for Customer Service entities, you expect a certain type of behaviour. A good example of this in practice is entity record activation and the differences between Active and Inactive record types. In simple terms, you are generally restricted in the actions that can be performed against an Inactive record, most commonly being the modification of a field value. You can, however, perform actions such as deleting and reassigning records to other users in the application. The latter of these can be particularly useful if, for example, an individual leaves a business and you need to ensure that another employee requires access to old, inactive records.\nWhen it comes to reporting on time intervals for when a record was last changed, I often - rightly or wrongly - see the Modified On field used for this purpose. This, essentially, stores the date and time of when the record was\u0026hellip;well\u0026hellip;last modified in the system! Since only more recent changes to the application have facilitated an alternative approach in reporting a record\u0026rsquo;s age and its current stage within a process, it is perhaps understandable why this field is often chosen when attempting to report, for example, the date on which a record was moved to Inactive status. Where you may encounter issues with this is if you are working in a similar situation highlighted above - namely, an individual leaving a business - and you need to reassign all of the inactive records owned by them. Doing either of these steps will immediately update the Modified On value to the current date and time, skewering any dependent reporting.\nFortunately, there is a way of getting around this, if you don\u0026rsquo;t have any qualms about opening up Visual Studio and putting together a plug-in in C#. Via this route, you can prevent the Modified On value of a record from being updated by capturing the original value and forcing the platform to commit this value to the database during the Pre-Operation stage of the transaction, as opposed to the date and time of when the record is reassigned. In fact, using this method, you can set the Modified On value to be whatever you want. Here\u0026rsquo;s the code that illustrates how to achieve both scenarios:\nusing System; using Microsoft.Xrm.Sdk; namespace Sample.OverrideCaseModifiedDate { public class PreOpCaseAssignOverrideModifiedDate : IPlugin { public void Execute(IServiceProvider serviceProvider) { //Obtain the execution context from the service provider. 
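//(Registration assumptions for this plug-in, covered in more detail later in this post: the step is registered against the Update message at the Pre-Operation stage, filtered on the ownerid attribute, with a pre-entity image named \u0026#34;preincident\u0026#34; so that the PreEntityImages lookup below resolves correctly.)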
IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); //Extract the tracing service for use in debugging sandboxed plug-ins ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); tracingService.Trace(\u0026#34;Tracing implemented successfully!\u0026#34;); if (context.InputParameters.Contains(\u0026#34;Target\u0026#34;) \u0026amp;\u0026amp; context.InputParameters[\u0026#34;Target\u0026#34;] is Entity) { Entity incident = (Entity)context.InputParameters[\u0026#34;Target\u0026#34;]; Entity preIncident = context.PreEntityImages[\u0026#34;preincident\u0026#34;]; //At this stage, you can either get the previous Modified On date value via the Pre Entity Image and set the value accordingly... incident[\u0026#34;modifiedon\u0026#34;] = preIncident.GetAttributeValue\u0026lt;DateTime\u0026gt;(\u0026#34;modifiedon\u0026#34;); //Or alternatively, set it to whatever you want - in this example, we get the Pre Entity Image createdon value, add on an hour and then set the value DateTime createdOn = preIncident.GetAttributeValue\u0026lt;DateTime\u0026gt;(\u0026#34;createdon\u0026#34;); TimeSpan time = new TimeSpan(1, 0, 0); DateTime newModifiedOn = createdOn.Add(time); incident[\u0026#34;modifiedon\u0026#34;] = newModifiedOn; } } } } When deploying out your plug-in code to the application (something that I hope regular readers of the blog will be familiar with), make sure that the settings you configure for your Step and the all-important Entity image resemble the images below:\nNow, the more eagle-eyed readers may notice that the step is configured on the Update as opposed to the Assign message, which is what you (and, indeed, I when I first started out with this) may expect. Unfortunately, because the Input Parameters of the Assign message only returns two Entity Reference objects - the incident (Case) and systemuser (User) entities, respectively - as opposed to an Entity object for the incident entity, we have no means of interfering with the underlying database transaction to override the required field values. The Update message does not suffer from this issue and, by scoping the plug-in\u0026rsquo;s execution to the ownerid field only, we can ensure that it will only ever trigger when a record is reassigned.\nWith the above plug-in configured, you have the flexibility of re-assigning your Case records without updating the Modified On field value in the process or expanding this further to suit whatever business requirement is needed. In theory, as well, the approach used in this example could also be applied to other what we may term \u0026ldquo;system defined fields\u0026rdquo;, such as the Created On field. Hopefully, this post may prove some assistance if you find yourself having to tinker around with inactive Case records in the future.\n","date":"2018-04-22T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/override-the-modified-on-field-value-when-reassigning-resolved-cases-dynamics-365-for-customer-service/","title":"Override the Modified On Field Value When Reassigning Resolved Cases (Dynamics 365 for Customer Service)"},{"content":"The ability to develop a custom barcode scanning application, compatible with either a specific mobile, tablet or other device type, would traditionally be in the realm for a skilled developer to implement fully. The key benefit this brings is the ability to develop a highly tailored business application, with almost limitless potential. 
The major downside is, of course, the requirement for having this specific expertise within a business and sufficient time to plan, develop and test the application.\nWhen it comes to using some of the newly introduced weapons in the Microsoft Online Services arsenal (such as Microsoft Flow, a topic which has been a recent focus on the blog), these traditional assumptions around application development can be discarded and empowering tools are provided to developers \u0026amp; non-developers to enable them to create highly custom business application without writing a single line of code. A perfect example of what can be achieved as a consequence has already been hinted at. With PowerApps, you have the ability to create a highly functional barcode scanning application that can perform a variety of functions - register event attendees details into a SQL database, lookup a Product stored on Dynamics 365 Customer Engagement and much, much more. The assumingly complex becomes bafflingly simplistic.\nI\u0026rsquo;ve been lucky to (finally!) have a chance to play around with PowerApps and the barcode scanning control recently. What I wanted to do as part of this week\u0026rsquo;s blog post was provide a few tips on using this control, with the aim of easing anyone else\u0026rsquo;s journey when using it for the first time.\nMake sure your barcode type is supported - and factor in multiple barcode types, if required The one thing you have to get your head around fairly early on is that the barcode control can only support 1 barcode type at any given time. At the time of writing this post, the following barcode types are supported:\nCodabar Code 128 Code 39 EAN Interleaved 2 of 5 UPC Note in particular the lack of current support for QR codes, which are essentially a barcode on steroids. There are, however, plans to introduce this as an option in the very near future, so we won\u0026rsquo;t have to wait too long for this.\nBe sure to RTFM It can be easy to forget this rather forceful piece of advice, but the importance of familiarising yourself with the barcode control, its properties and some of the acknowledged limitations can go a long way towards streamlining the deployment of a workable barcode scanner app. I\u0026rsquo;ve collated together a list of the most useful articles below, which I highly encourage you to read through in their entirety:\nCreate a barcode scanning app in minutes! Scan a barcode in Microsoft PowerApps Barcode scanner control (experimental) in PowerApps Whilst PowerApps is generally quite a good tool to jump into without prerequisite knowledge, working with specific controls - such as this one - warrants a much more careful study to ensure you fully understand what is and isn\u0026rsquo;t supported. What\u0026rsquo;s also quite helpful about the articles (jumping back to barcode types for a second) is that an example is provided on how to create a drop-down control to change the type of barcode that can be scanned at runtime. Typically, for an internal app where a defined barcode type is utilised consistently, the requirement for this may not be present; but it is nice to know the option is there.\nFor the best success rate when scanning, ensure the barcode control occupies as much space as possible On one of the example posts above, an example is provided of a barcode control situated atop a Gallery control with a search box. 
Whilst this may look appealing from a usability aspect, your main issue will be in ensuring that devices using the app can properly scan any potential barcode. For example, when testing iOS devices in this scenario, I found it literally impossible to ensure a consistent scan rate. I\u0026rsquo;ve not tested with other mobile devices to confirm the same issue occurs, but given that the issue on iOS appears to be down to a \u0026ldquo;memory issue\u0026rdquo;, I can imagine the same problem occurring for lower spec Android devices as well. Microsoft\u0026rsquo;s recommendation for optimal barcode scanning controls provides a workaround solution that can be adapted:\nTo delay running out of memory on devices that are running iOS, set the Height property of the Barcode control to 700 (or lower) and the Scanrate property to 30.\nWith this in mind, you can create a dedicated screen on your PowerApp for barcode scanning that occupies sufficient space, whilst also providing an easy to operate experience for your end users. An optimal layout example is provided below:\nThen, with a bit of expression/formula trickery, functionality for this screen can be nicely rounded out as follows:\nSet the Default property value of the Text Input control to the name of the scanner control. This will then place the value of any scanned barcode within the input control, allowing the user to modify if necessary. For the Scan button, define the Back() expression for the OnSelect action. Then, set the Default property value to the Text Input control on the screen above to whatever control needs to have the value of the scanned barcode control (e.g. if working with a gallery/search box, then you would set this on the search box). Finally, the Cancel button needs to do the same as the Scan button, in addition to safely discarding whatever has been scanned. This can be achieved by specifying the Clear command for the control that is being passed the scanned value when the Scan button is selected. Rather handily, we can specify multiple expression/formulas for a single event via the use of a semi-colon delimiter. Therefore, to go back to the previous screen and clear the value of the Text Input control called MySearchBox, use the following expression: Back();MySearchBox.Clear By having a barcode control that mimics the size indicated above, scan rates for iOS devices are improved by a significant margin and the control becomes a dream to use 🙂\nAnd the rest\u0026hellip; To round things off, there are a couple of other concepts surrounding barcode controls that I thought may be useful to point out:\nThe Camera property value is not documented at all for this control, but my understanding is that it provides a mechanism for specifying the device camera number/ID to use for the control. So, for example, if you have a Windows 10 tablet device and want to ensure that the front camera is used to scan barcodes for an entry registration system, you can use this option to force this. Utilising this setting, I would imagine, limits your app across multiple device types, so I would advise caution in its use. Let me know in the comments below if you have any joy using this property at all. By default, the Barcode detection setting is enabled. This displays a yellow box when a possible scan area is detected and also a red line to indicate when a barcode value is read successfully. This setting is designed to assist when scanning, but I have noticed it can be erratic from time to time. 
It can be disabled safely without reducing the scanning functionality too much; you just won\u0026rsquo;t know straight away when you have scanned something successfully. As the control is reliant on individual device camera privacy settings, you will need to ensure that any corporate device policies do not block the usage of the camera whilst using PowerApps and, in addition, you will need to give explicit camera access permission when launching the app for the first time. I did encounter a strange issue on iOS where the camera did not load after granting this permission. A device reboot seemed to resolve the issue. Conclusions or Wot I Think I am a strong believer in providing fast and effective resolution of any potential business requirement or concern. Typically, those working within an IT department will not be directly responsible for generating revenue or value for the business beyond IT support. This is what makes an attack-focused, proactive and value generating outlook an essential requirement, in my view, to anyone working in an IT-focused role. In tandem with this approach, having the tools at your disposal that will aid you in adhering to these tenents are ones that should be adopted readily and, after prudent assessment (i.e. balanced and not time costly), without fear. PowerApps, and the use cases hinted towards within this post, scores very highly on my list of essential tools in fostering this approach. It really ensures that typical \u0026ldquo;Power\u0026rdquo; users within an organisation can develop quick and easy solutions, with lack of previous experience not necessarily being a burden. In my own case, having had a firm background using Microsoft stack solutions in the past meant that my initial journey with PowerApps was that much easier. Ultimately, I think, PowerApps aims to save time and reduce the business concerns that arise from bloated software deployments, putting a much more business-focused onus on individuals to develop valuable solutions within a particular department or organisation.\n","date":"2018-04-15T00:00:00Z","image":"/images/PowerApps-FI.png","permalink":"/posts/working-with-barcodes-on-a-powerapp-a-few-tips/","title":"Working with Barcodes on a PowerApp: A Few Tips"},{"content":"This may be somewhat obvious from some of my more recent blog posts, but I am a huge fan of Application Insights at the moment. The best analogy I can use for it is that it\u0026rsquo;s a web developer Swiss Army Knife, that really puts you in the driving seat of understanding and maintaining your web application for the long term. The analytic tools available on the \u0026ldquo;knife\u0026rdquo; - from error reporting, user trends and server performance - provide a treasure trove of information that can help to shape successful software deployments and ensure that developers are properly understanding the impact of their work within a production environment. If you are developing web applications utilising the Azure stack, there really is no good reason not to use it; and, even if you are not, you can still utilise the tool wherever your web application is hosted.\nRecently, whilst fiddling with the tool, I was surprised to find another feature which I had overlooked - Availability testing. In simple terms, this enables you to configure proactive monitoring of your web application to make sure it is responding to requests. This can be particularly useful if you have availability targets or SLAs that your web application needs to adhere to. 
The feature is also tightly integrated alongside the other performance metrics gathered by the service, making it the ideal choice for satisfying any data analytic requirements. I wasn\u0026rsquo;t aware of the feature until I implemented it recently, so what I wanted to do in this week\u0026rsquo;s blog post is demonstrate how straightforward it is to set up Availability testing within Application Insights.\nSetting up your first Availability Test What ultimately aids the journey towards configuring your first test is the simplistic nature of the process; however, there are a couple of settings that may require further explanation. The remainder of this section will go through the steps involved from start to finish. Be sure to have provisioned an Application Insights resource before starting and confirm that your target website is readily accessible over the internet.\nOnce your Application Insights resource is provisioned, navigate to the Availability tab on the left-hand side of the resource blade:\nThis will then expand the Availability test summary window, which should be blank as indicated below. Click on the Add Test button to begin creating the test:\nThere are a couple of basic information fields that need populating at this juncture:\nTest name: This shouldn\u0026rsquo;t need explaining 🙂 Keep in mind the importance of having a unique name for the test, particularly if you are planning on utilising the reporting capabilities discussed later on in this post. Test type: You have two options at your disposal here - a simple URL ping test, which contacts the URL you specify and captures the response code from the server, or a Multi-step web test, a more powerful test that carries out a set of predefined steps when crawling through your website. Visual Studio Enterprise is required to put these tests together and there are pricing implications when using it. You can find out more about this option via the Microsoft Docs website. For this example, we are going to use the URL ping test. URL: This is the exact URL that will be contacted as part of the test. This will either be the root azurewebsites.net domain or your assigned custom domain. Your Create test window should resemble the below after configuring the above:\nThe next step is to define the location where your test will be triggered from. If you are hosting your application in an existing Azure region, then it is recommended to select the next nearest geographical region for testing. Multiple regions can also be selected. In the example below, UK West is selected due to the website in question existing in the UK South region:\nNext is to define your success/fail conditions for the test. This will vary depending on the type of test you are conducting, but for URL ping tests, you can define a timeout period for the test and also determine whether a specific HTTP response code is returned each time the test is executed. A 200 response code is the standard for indicating that a web page has loaded successfully, so this is the one that will be used for this example. You can also define this further by checking for specific keywords in the response text:\nFinally, you can designate the alert settings for when a failure is detected. This can range from an email to a specified list of contacts (more on this shortly) through to a webhook notification to an external HTTP/HTTPS endpoint.
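If you do opt for the webhook route, the receiving endpoint simply needs to accept an HTTP POST containing a JSON payload that describes the alert. The sketch below is purely illustrative (it assumes an ASP.NET Web API project; the controller name is my own invention and the payload is treated as an opaque string rather than a typed contract), but it shows how little is needed to capture the notification:

using System.Diagnostics;
using System.Threading.Tasks;
using System.Web.Http;

namespace MyWebsite.Controllers
{
    public class AvailabilityAlertController : ApiController
    {
        // POST api/availabilityalert - register this URL as the webhook on the alert settings
        [HttpPost]
        public async Task<IHttpActionResult> Post()
        {
            // Read the alert payload as a raw string; it can be deserialised into a typed
            // class later, once the exact schema being sent has been inspected
            string payload = await Request.Content.ReadAsStringAsync();

            // Placeholder - write to a log store, raise a ticket, post to a team channel etc.
            Trace.TraceWarning("Availability alert received: {0}", payload);

            return Ok();
        }
    }
}

From there, you could forward the alert on to whatever incident management or chat tool your team relies upon.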
Example settings are utilised in the below screenshot that should be suitable for most scenarios:\nWith everything configured, your Create test window should resemble the below if done correctly:\nPress OK to create your test, which will begin executing immediately. You will need to wait some time for results to start appearing. For this example, we are attempting to query a website that has a specific deny rule for any non-allowed host, including the Availability test server. This is reflected in the results below, which indicate a 100% failure rate for the selected period:\nFailed Test Notifications As part of the example above, the test is configured to send email notifications out to Azure subscription owners when a fault with the application is detected. An example of how this looks can be seen below:\nWhilst the above is useful for application developers/administrations to receive proactive notifications relating to their web applications, it doesn\u0026rsquo;t provide much from a reporting standpoint. Fortunately, this is where one of the features major benefits come into the equation.\nAnalysing Previous Availability Tests Similar to other statistical information gathered by Application Insights, the results of each individual test are logged within Query Analytics as part of the availabilityResults schema. The following query can be used within Query Analytics to return key information for failed tests within a 24 hour period:\navailabilityResults | project timestamp, id, name, location, success, message, duration, appId, appName | where success == 0 and timestamp \u0026gt; ago(24h) If you are also wanting to utilise the Continuous Export feature, as discussed previously on the blog, then the great news is that this information is also fully exportable to your destination of choice. A SQL database, Power BI Dashboard, Azure Event Hub\u0026hellip;your options are quite limitless 🙂 Below is the query that you can use to extract the key information from your Continuous Export stream:\nSELECT availabilityFlat.ArrayValue.testTimestamp AS TestTimestamp, availabilityFlat.ArrayValue.testRunId AS TestRunID, availabilityFlat.ArrayValue.testName AS TestName, availabilityFlat.ArrayValue.runLocation AS RunLocation, availabilityFlat.ArrayValue.result AS TestResult, availabilityFlat.ArrayValue.message AS Message, availabilityFlat.ArrayValue.durationMetric.value AS Duration, AR.EventProcessedUTCTime AS AvailabilityResultEventProcessedUTCTime INTO AvailabilityResultsOutput FROM AvailabilityResults AS AR CROSS APPLY GetElements(AR.[availability]) AS availabilityFlat Conclusions or Wot I Think The best things in life are those that just keep on giving. My journey with Application Insights to date very much mirrors this. The number of business challenges and scenarios that I have been able to chuck towards it and walk away with an effective and easy-to-implement solution is growing every month. For me, it is slowly becoming the de facto tool to have deployed alongside any new web application. The ability to extend the tool further so that is not just providing ongoing benefit but proactive capabilities, via the Availability feature, is the main area where I feel the tool thrives the most, both now and in the future. 
Anything that can take the headache out of diagnosing enterprise-grade web application systems, whilst simultaneously driving insight into how a website experience can be improved for end users, wins in my book, and Application Insights continually proves itself in this regard.\n","date":"2018-04-08T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/utilising-availability-testing-within-application-insights/","title":"Utilising Availability Testing within Application Insights"},{"content":"If you are looking for an easy-to-use and highly expandable mail relay service, SendGrid represents the most developer-friendly solution out in the market today. What\u0026rsquo;s even better is that it\u0026rsquo;s available on Azure, making it the ideal choice if you are developing an existing solution on the Azure stack. What I like best about the service is the extensive documentation covering every aspect of its Web API, structured to provide a clear explanation of endpoint methods, required properties, and example outputs - exactly the right way that all technical documentation should be laid out.\nI recently had a requirement to integrate with the SendGrid API to extract email statistic information into a SQL database. My initial thoughts were that I would need to resort to a bespoke C# solution to achieve these requirements. However, keenly remembering my commitment this year to find opportunities to utilise the service more, I decided to investigate whether Microsoft Flow could streamline this process. Suffice it to say, I was pleasantly surprised, and what I wanted to do as part of this week\u0026rsquo;s blog post was demonstrate how I was able to take advantage of Microsoft Flow to deliver my requirements. In the process, I hope to get you thinking about how you approach integration requirements in the future, challenging some of the pre-conceptions around this.\nBefore we get into creating the Flow itself\u0026hellip; \u0026hellip;you will need to create a table within your SQL database to store the requisite data. This script should do the trick:\nCREATE TABLE [dbo].[SendGridStatistic] ( [SendGridStatisticUID] [uniqueidentifier] NULL DEFAULT NEWID(), [Date] DATE NOT NULL, [CategoryName] VARCHAR(50) NULL, [Blocks] FLOAT NOT NULL, [BounceDrops] FLOAT NOT NULL, [Bounces] FLOAT NOT NULL, [Clicks] FLOAT NOT NULL, [Deferred] FLOAT NOT NULL, [Delivered] FLOAT NOT NULL, [InvalidEmail] FLOAT NOT NULL, [Opens] FLOAT NOT NULL, [Processed] FLOAT NOT NULL, [SpamReportDrops] FLOAT NOT NULL, [SpamReports] FLOAT NOT NULL, [UniqueClicks] FLOAT NOT NULL, [UniqueOpens] FLOAT NOT NULL, [UnsubscribeDrops] FLOAT NOT NULL, [Unsubscribes] FLOAT NOT NULL ) A few things to point out with the above:\nThe CategoryName field is only required if you are wishing to return statistic information grouped by category from the API. The example that follows primarily covers this scenario, but I will also demonstrate how to return consolidated statistic information as well if you wanted to exclude this column. Microsoft Flow will only be able to map the individual statistic count values to FLOAT fields. If you attempt to use an INT, BIGINT etc. data type, then the option to map these fields will not appear. Kind of annoying, given that FLOATs are effectively \u0026ldquo;dirty\u0026rdquo;, imprecise numbers, but given the fact we are not working with decimal numbers, this shouldn\u0026rsquo;t cause any real problems.
The SendGridStatisticUID is technically optional and could be replaced by an INT/IDENTITY seed instead or removed entirely. Remember though that it is always good practice to have a unique column value for each table, to aid in individual record operations. In addition, you will also need to ensure you have generated an API key for SendGrid that has sufficient privileges to access the Statistic information for your account.\nWith everything ready, we can now \u0026ldquo;Flow\u0026rdquo; quite nicely into building things out. The screenshot below demonstrates how the completed Flow should look from start to finish. The sections that follow will discuss what is required for each constituent element\nRecurrence The major boon when working with Flow is the diverse options you have for triggering them - either based on certain conditions within an application or just simply based off a recurring schedule. For this example, as we will be extracting statistic information for an entire 24 period, you should ensure that the Flow executes at least once daily. The precise timing of this is up to you, but for this example, I have suggested 2 AM local time each day. The configured recurrence settings should resemble the below if done correctly:\nYou should be aware that when your Flow is first activated, it will execute straightaway, regardless of what settings you have configured above.\nHTTP As the SendGrid Web API is an HTTP endpoint, we can utilise the built-in HTTP connector to retrieve the information we need. This is done via a GET operation, with authentication achieved via a Raw header value containing the API key generated earlier. The tricky bit comes when building the URI and how we want the Flow to retrieve our information - namely, all statistic information covering the previous day. There is also the (optional) requirement of ensuring that statistic information is grouped by category when retrieved. Fortunately, we can get around this problem by using a bit of Expression trickery to build a dynamic URI value each time the Flow is executed. The expression code to use will depend on whether or not you require category grouping. I have provided both examples below, so simply choose the one that meets your specific requirement:\nRetrieve Consolidated Statistics concat(\u0026#39;https://api.sendgrid.com/v3/stats?start_date=\u0026#39;, string(getPastTime(1, \u0026#39;day\u0026#39;, \u0026#39;yyyy-MM-dd\u0026#39;)), \u0026#39;\u0026amp;end_date=\u0026#39;, string(getPastTime(1, \u0026#39;day\u0026#39;, \u0026#39;yyyy-MM-dd\u0026#39;))) Retrieve Statistics Grouped By Category concat(\u0026#39;https://api.sendgrid.com/v3/categories/stats?start_date=\u0026#39;, string(getPastTime(1, \u0026#39;day\u0026#39;, \u0026#39;yyyy-MM-dd\u0026#39;)), \u0026#39;\u0026amp;end_date=\u0026#39;, string(getPastTime(1, \u0026#39;day\u0026#39;, \u0026#39;yyyy-MM-dd\u0026#39;)), \u0026#39;\u0026amp;categories=cat1\u0026amp;categories=cat2\u0026#39;) Note: For this example, statistic information would be returned only for the categories that equal cat1 \u0026amp; cat2. These should be updated to suit your requirements, and you can add on additional categories by extending the URI value like so: \u0026amp;categories=cat3\u0026amp;categories=cat4 etc.\nYour completed HTTP component should resemble the below if done correctly. 
Note in particular the requirement to have Bearer and a space before specifying your API key:\nParse JSON A successful 200 response to the Web API endpoint will return a JSON object, listing all statistic information grouped by date (and category, if used). I always struggle when it comes to working with JSON - a symptom of working too long with relational databases I think - and they are always challenging for me when attempting to serialize result sets. Once again, Flow comes to the rescue by providing a Parse JSON component. This was introduced with what appears to be little fanfare last year, but really proves its capabilities in this scenario. The only bit you will need to worry about is providing a sample schema so that the service can properly interpret your data. The Use sample payload to generate schema option is the surest way of achieving this, and you can use the example payloads provided on the SendGrid website to facilitate this:\nRetrieve Consolidated Statistics: https://sendgrid.com/docs/API_Reference/Web_API_v3/Stats/global.html\nRetrieve Statistics Grouped By Category: https://sendgrid.com/docs/API_Reference/Web_API_v3/Stats/categories.html\nAn example screenshot is provided below in case you get stuck with this:\nGetting the records into the database Here\u0026rsquo;s where things get confusing\u0026hellip;at least for me when I was building out this flow for the first time. When you attempt to add in an Insert row step to the flow and specify your input from the Parse JSON step, Microsoft Flow will automatically add two Apply to each step to properly handle the input. I can understand why this is the case, given that we are working with a nested JSON response, but it does provide an ample opportunity to revisit an internet meme of old\u0026hellip;\nWith the above ready and primed, you can begin to populate your Insert row step. Your first step here will more than likely be to configure your database connection settings using the + Add New Connection option:\nThe nicest thing about this is that you can utilise the on-premise gateway service to connect to a non-cloud database if required. Usual rules apply, regardless of where your database is located - use a minimum privileged account, configure any required IP whitelisting etc.\nWith your connection configured, all that\u0026rsquo;s left is to provide the name of your table and then perform a field mapping exercise from the JSON response. If you are utilising the SendGridStatisticUID field, then this should be left blank to ensure that the default constraint kicks in correctly on the database side:\nThe Proof is in the Pudding: Testing your Flow All that\u0026rsquo;s left now is to test your Flow. As highlighted earlier in the post, your Flow will automatically execute after being enabled, meaning that you will be in a position to determine very quickly if things are working or not. Assuming everything executes OK, you can verify that your database table resembles the below example:\nThis example output utilises the CategoryName value, which will result in multiple data rows for each date, depending on the number of categories you are working with. This is why the SendGridStatisticUID is so handy for this scenario 🙂\nConclusions or Wot I Think When it came to delivering the requirements as set out in this posts introduction, I cannot overemphasise how much Microsoft Flow saved my bacon. 
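To put that into some context, even a minimal bespoke equivalent of the Flow\u0026rsquo;s HTTP step would have needed code along the following lines just to pull the previous day\u0026rsquo;s statistics down. This is a rough sketch only - the class and method names are purely illustrative, and the endpoint and authentication details simply mirror those used within the Flow:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class SendGridStatsClient
{
    private static readonly HttpClient Client = new HttpClient();

    // Returns yesterday's consolidated statistics from the SendGrid Web API as raw JSON
    public static async Task<string> GetYesterdaysStatsAsync(string apiKey)
    {
        string date = DateTime.UtcNow.AddDays(-1).ToString("yyyy-MM-dd");
        string uri = $"https://api.sendgrid.com/v3/stats?start_date={date}&end_date={date}";

        using (var request = new HttpRequestMessage(HttpMethod.Get, uri))
        {
            // The API key is supplied as a Bearer token, exactly as in the HTTP action above
            request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);

            HttpResponseMessage response = await Client.SendAsync(request);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}

And that is before any JSON parsing, SQL persistence, error handling, logging or scheduling has been written.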
My initial scoping exercise around this strongly led me towards having to develop a fully bespoke solution in code, with additional work than required to deploy this out to a dedicated environment for continuous execution. This would have surely led to:\nIncreased deployment time Additional cost for the provisioning of a dedicated execution environment Wasted time/effort due to bug-fixing or unforeseen errors Long-term problems resulting from maintaining a custom code base and ensuring that other colleagues within the business could properly interpret the code correctly. Instead, I was able to design, build and fully test the above solution in less than 2 hours, utilising a platform that has a wide array of documentation and online support and which, for our particular scenario, did not result in any additional cost. And this, I feel, best summarises the value that Microsoft Flow can bring to the table. It overturns many of the assumptions that you generally have to make when implementing complex integration requirements, allowing you to instead focus on delivering an effective solution quickly and neatly. And, for those who need a bit more punch due to very highly specific business requirements, then Azure Logic Apps act as the perfect meeting ground for both sides of the spectrum. The next time you find yourself staring down the abyss of a seemingly impossible integration requirement, take a look at what Microsoft Flow can offer. You just might surprise yourself.\n","date":"2018-04-01T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/using-microsoft-flow-to-extract-sendgrid-statistics-to-a-sql-database/","title":"Using Microsoft Flow to Extract SendGrid Statistics to a SQL Database"},{"content":"Very much like a stopped clock telling the correct time twice a day, you can guarantee there will be two Dynamics 365 Customer Engagement releases each year. The first such occasion this year has come around quickly, with Microsoft setting out the stall for the Spring 2018 release earlier this week. The headline messages around this release are all around providing reassurance that the application is GDPR ready and in emphasising the maturity of Power Apps \u0026amp; Microsoft Flow as products within their own right and in conjunction with Dynamics 365. I\u0026rsquo;ve been studying the release notes in greater detail and, as part of this week\u0026rsquo;s blog post, I wanted to delve underneath the headlines and extrapolate some of the less touted, but potentially most impactful, new features that I am most looking forward to.\nAnswer Tags for Voice of the Customer Surveys I made a commitment earlier this year to utilise the Voice of the Customer solution more. When used correctly, and if you are already heavily invested in Dynamics 365, the solution can present a straightforward and cost-effective way of starting to understand what customers are thinking, both with respect to specific experiences they have with your business and towards the organisation overall. One new feature to be introduced with Voice of the Customer, which I am looking forward to getting my hands on, is the ability to use Answer Tags to dynamically structure any subsequent questions within the survey. 
A good example of how this works in practice can be seen below, as shown in the release notes:\nThe key driver behind the automation of customer feedback tools should be to ensure that customers receive tailored and relevant surveys relating to services they have received, whilst also taking away any administrative headache when distributing and collating feedback answers. The feature above helps to solidify the benefits that Voice of the Customer can deliver when utilised in tandem with Dynamics 365 Customer Engagement, as well as allowing for more powerful and broadly applicable surveys to be structured at design time.\nThe rise of the Unified Interface The rebrand of the entire Dynamics 365 Customer Engagement application has been much promised and touted over the past year. With this release, it becomes a reality. Pretty much every key application module - Customer Service, Sales, Field Service \u0026amp; Project Service Automation - has been updated to utilise the new Unified Interface. The following applications/solutions will also be Unified Interface ready as part of the release:\nDynamics 365 App for Outlook LinkedIn Sales Navigator Gamification The Unified Interface is very much an offshoot of the Interactive Service Hub, which it now replaces fully as part of this release (Interactive Service Hub users should read this article carefully, as there are some important points to consider if you plan to upgrade in the near future). I saw the new unified interface in action when attending the CRMUG Meeting in Reading last year, and its introduction represents one of the ways Microsoft is investing heavily within the Dynamics 365 product moving forward. Its key benefits in comparison to the current experience can be summarised as follows:\nConsistent end-user experience when accessing the application from desktop, mobile or tablet operating systems. Fully mobile responsive template, that adjusts to your specific device to provide the optimal experience Better utilisation of empty spacing across entity views, forms etc. With this release, administrators and developers need to start actively considering the impact the Unified Interface has on their systems and plan accordingly. Whilst I imagine there to be some pain involved as part of this, the end result - a much crisper and effective end-user interface - is worth the trade-off.\nPowerShell Management for PowerApps Up until now, your options for the automation of administrative tasks for PowerApps were limited. This issue was addressed to a certain extent for Dynamics 365 Customer Engagement Online very recently, via the introduction of PowerShell modules to facilitate organisation backups, instance resets and/or administrative mode toggling. 
These types of tools can go a long way if you have implemented automated release management tools for your various environments, taking human error out of the equation and streamlining deployments.\nPowerApps looks to be going in the right direction in this regard, as the Spring Wave release will introduce a set of cmdlets that allow for the following actions to be accomplished:\nEnvironments and environment permissions PowerApps and app permission Flows and flow permissions Export and import of resource packages across environments PowerApps and Flow licenses report (of active users) Whilst definitely more administrative as opposed to deployment focused, their introduction is no doubt a welcome step in the right direction.\nFuture of the Common Data Service\nMicrosoft released the Common Data Service (CDS) in late 2016, around the same time as Microsoft Flow and the Dynamics CRM rebrand. The premise was simple and admirable: a common framework for you to develop the data you need for your business, that is instantly re-usable across multiple applications. My chief concern when this was first announced is where this left the traditional customisation experience for Dynamics CRM/365 Customer Engagement, commonly referred to as xRM. Having to countenance potential redevelopments of \u0026ldquo;legacy\u0026rdquo; xRM systems, just to make them compatible with the CDS could prove to be a costly and unnecessary exercise; this can perhaps be summed up best by the old saying \u0026ldquo;If it ain\u0026rsquo;t broke, don\u0026rsquo;t fix it!\u0026rdquo;.\nThere seems to have been a recognition of this dilemma as part of this release, with the following announcement regarding the Common Data Service and PowerApps specifically:\nThis release also includes major advancements to the Common Data Service for Apps (the data platform that comes with PowerApps) and client UX creation tools. These new capabilities are backward-compatible with the Dynamics 365 platform (frequently called the xRM platform), which means that Dynamics 365 customizers and partners can use already-acquired skills to create apps with PowerApps.\nWhat I think this means, in simple terms, is that the customisation experience between Dynamics 365 Customer Engagement and Power Apps will, in time, become virtually indistinguishable. And this is great for a number of reasons - it negates any excuse that individuals/organisations may raise to explore PowerApps further, gives us the ability to quickly develop our own custom mobile applications for our particular Dynamics 365 solution and provides an easy framework to unify business data across multiple applications. This very much parallels the intended experience that Power BI has for traditional Excel users - namely, providing an identical toolbox that can be leveraged to quickly deploy solutions with reduced technical debt. As with a lot of these announcements, we\u0026rsquo;re not going to know exactly how things operate until they are in our hands, but the immediate upshot appears to be the nullification of any new learning requirements for CDS.\nIf you are looking for further detail regarding this change, then the ever so excellent Jukka Niiranen has published a blog post which really breaks down the detail behind this better than I ever could 🙂\nhttp://survivingcrm.com/2018/03/yes-xrm-is-the-new-common-data-service/\nEmail Notifications for Microsoft Flow Failures Similar to Voice of the Customer, I also promised myself to use Microsoft Flow more this year. 
After some uneventful early testing, the tool has become (for me) an indispensable means of achieving integration requirements that would traditionally require custom code and a dedicated server environment to execute. Microsoft Flows do get some much-deserved love and attention as part of this release, and the one new feature which I think is going to be the biggest help is email notifications for flow failures. The announced feature details are as follows:\nEnable email notifications to detect flow failures. To enable this feature, go to the Flow details page, and then, on the contextual menu (…), subscribe to receiving emails about flow failures. These useful email notifications provide:\nInformation about why your flow failed. Meaningful remediation steps. Additional resources to help you build robust flows that never fail. There\u0026rsquo;s so much more about this release that you could talk for days about\u0026hellip; \u0026hellip;but I would be unsure whether anyone would still be listening by the end! You can dive into the detail behind each of the above highlights and what else to expect in the next release by downloading the release notes yourself. Let me know in the comments below what you are looking forward to the most as part of the next release.\n","date":"2018-03-25T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/top-6-most-exciting-features-in-the-dynamics-365-spring-2018-release/","title":"Top 6 Most Exciting Features in the Dynamics 365 Spring 2018 Release"},{"content":"This is an accompanying blog post to my YouTube video Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Custom Workflow Assembly. The video is part of my tutorial series on how to accomplish developer-focused tasks within Dynamics 365 Customer Engagement.
You can watch the video in full below:\nBelow you will find links to access some of the resources discussed as part of the video and to further reading topics:\nFull Code Sample using System; using System.Activities; using Microsoft.Xrm.Sdk; using Microsoft.Xrm.Sdk.Workflow; using Microsoft.Xrm.Sdk.Query; namespace D365.SampleCWA { public class CWA_CopyQuote : CodeActivity { protected override void Execute(CodeActivityContext context) { IWorkflowContext c = context.GetExtension\u0026lt;IWorkflowContext\u0026gt;(); IOrganizationServiceFactory serviceFactory = context.GetExtension\u0026lt;IOrganizationServiceFactory\u0026gt;(); IOrganizationService service = serviceFactory.CreateOrganizationService(c.UserId); ITracingService tracing = context.GetExtension\u0026lt;ITracingService\u0026gt;(); tracing.Trace(\u0026#34;Tracing implemented successfully!\u0026#34;, new Object()); Guid quoteID = c.PrimaryEntityId; Entity quote = service.Retrieve(\u0026#34;quote\u0026#34;, quoteID, new ColumnSet(\u0026#34;freightamount\u0026#34;, \u0026#34;discountamount\u0026#34;, \u0026#34;discountpercentage\u0026#34;, \u0026#34;name\u0026#34;, \u0026#34;pricelevelid\u0026#34;, \u0026#34;customerid\u0026#34;, \u0026#34;description\u0026#34;)); quote.Id = Guid.Empty; quote.Attributes.Remove(\u0026#34;quoteid\u0026#34;); quote.Attributes[\u0026#34;name\u0026#34;] = \u0026#34;Copy of \u0026#34; + quote.GetAttributeValue\u0026lt;string\u0026gt;(\u0026#34;name\u0026#34;); Guid newQuoteID = service.Create(quote); EntityCollection quoteProducts = RetrieveRelatedQuoteProducts(service, quoteID); EntityCollection notes = RetrieveRelatedNotes(service, quoteID); tracing.Trace(quoteProducts.TotalRecordCount.ToString() + \u0026#34; Quote Product records returned.\u0026#34;, new Object()); foreach (Entity product in quoteProducts.Entities) { product.Id = Guid.Empty; product.Attributes.Remove(\u0026#34;quotedetailid\u0026#34;); product.Attributes[\u0026#34;quoteid\u0026#34;] = new EntityReference(\u0026#34;quote\u0026#34;, newQuoteID); service.Create(product); } foreach (Entity note in notes.Entities) { note.Id = Guid.Empty; note.Attributes.Remove(\u0026#34;annotationid\u0026#34;); note.Attributes[\u0026#34;objectid\u0026#34;] = new EntityReference(\u0026#34;quote\u0026#34;, newQuoteID); service.Create(note); } } [Input(\u0026#34;Quote Record to Copy\u0026#34;)] [ReferenceTarget(\u0026#34;quote\u0026#34;)] public InArgument\u0026lt;EntityReference\u0026gt; QuoteReference { get; set; } private static EntityCollection RetrieveRelatedQuoteProducts(IOrganizationService service, Guid quoteID) { QueryExpression query = new QueryExpression(\u0026#34;quotedetail\u0026#34;); query.ColumnSet.AllColumns = true; query.Criteria.AddCondition(\u0026#34;quoteid\u0026#34;, ConditionOperator.Equal, quoteID); query.PageInfo.ReturnTotalRecordCount = true; return service.RetrieveMultiple(query); } private static EntityCollection RetrieveRelatedNotes(IOrganizationService service, Guid objectID) { QueryExpression query = new QueryExpression(\u0026#34;annotation\u0026#34;); query.ColumnSet.AllColumns = true; query.Criteria.AddCondition(\u0026#34;objectid\u0026#34;, ConditionOperator.Equal, objectID); query.PageInfo.ReturnTotalRecordCount = true; return service.RetrieveMultiple(query); } } } Download/Resource Links Visual Studio 2017 Community Edition\nSetup a free 30 day trial of Dynamics 365 Customer Engagement\nC# Guide (Microsoft Docs)\nSource Code Management Solutions\nVisual Studio Team Services - Free for up to 5 users and my recommended choice 
when working with Dynamics 365 Customer Engagement BitBucket GitHub Further Reading Microsoft Docs - Create a custom workflow activity\nMSDN - Register and use a custom workflow activity assembly\nMSDN - Update a custom workflow activity using assembly versioning (This topic wasn\u0026rsquo;t covered as part of the video, but I would recommend reading this article if you are developing an ISV solution involving custom workflow assemblies)\nMSDN - Sample: Create a custom workflow activity\nYou can also check out some of my previous blog posts relating to Workflows:\nImplementing Tracing in your CRM Plug-ins - We saw as part of the video how to utilise tracing, but this post goes into more detail about the subject, as well as providing instructions on how to enable the feature within the application (in case you are wondering why nothing is being written to the trace log 🙂 ). All code examples are for Plug-ins, but they can easily be repurposed to work with a custom workflow assembly instead. Obtaining the User who executed a Workflow in Dynamics 365 for Customer Engagement (C# Workflow Activity) - You may have a requirement to trigger certain actions within the application, based on the user who executed a Workflow. This post walks through how to achieve this utilising a custom workflow assembly. If you have found the above video useful and are itching to learn more about Dynamics 365 Customer Engagement development, then be sure to take a look at my previous videos/blog posts using the links below:\nDynamics 365 Customer Engagement Deep Dive: Creating a Basic Plug-in Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Jscript Form Function Have a question or an issue when working through the code samples? Be sure to leave a comment below or contact me directly, and I will do my best to help. Thanks for reading and watching!\n","date":"2018-03-18T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dynamics-365-customer-engagement-deep-dive-creating-a-basic-custom-workflow-assembly/","title":"Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Custom Workflow Assembly"},{"content":"I started working with Azure Application Insights towards the end of last year, during a major project for an enterprise organisation. At the time, our principal requirement was to ensure that we could effectively track the usage patterns of a website, the pages visited, amount of time spent on each page etc. - all information types that, traditionally, you may turn to tools such as Google Analytics to generate. At the time, what swung the decision to go for Application Insights was the added value that the service provides from a developer standpoint, particularly if you are already heavily invested within .NET or Azure. The ability to view detailed information regarding errors on a web page, to automatically export this information out into Team Foundation Server/Team Services for further investigation and the very expendable and customisable manner in which data can be accessed or exported were all major benefits for our particular scenario. It\u0026rsquo;s a decision which I don\u0026rsquo;t think I would ever look back on and, even today, the product is still finding very neat and clever ways of surprising me 🙂\nOne of the other features that make Application Insights a more wholly expandable solution compared with Google Analytics is the ability to extend the amount of information that is scraped from your website page requests or visitors. 
These properties will then be viewable within Application Insights Analytics, as well as being exportable (as we will discover shortly). For example, if you are interested in determining the previous URL that a web user was on before visiting a page within a C# MVC website, create a new class file within your project called ReferrerTelemetryInitializer and add the following sample code:\nusing Microsoft.ApplicationInsights.Extensibility; using System; using System.Collections.Generic; using System.Linq; using System.Web; using Microsoft.ApplicationInsights.Channel; using Microsoft.ApplicationInsights.DataContracts; namespace MyNamespace.MyProject.MyProjectName.MVC.Utilities { public class ReferrerTelemetryInitializer : ITelemetryInitializer { public void Initialize(ITelemetry telemetry) { if(telemetry is RequestTelemetry) { string referrer = HttpContext.Current.Request?.UrlReferrer?.ToString(); telemetry.Context.Properties[\u0026#34;Referrer\u0026#34;] = referrer; } } } } Be sure to add all required service references and rename your namespace value accordingly before deploying out. After this is done, the following query can then be executed within Application Insights Analytics to access the Referral information:\nrequests | extend referrer = tostring(customDimensions.Referrer) | project id, session_Id, user_Id, referrer The extend portion of the query is required because Application Insights groups all custom property fields together into a key/value array. If you are working with other custom property fields, then you would just replace the value after the customDimensions. portion of the query with the property name declared within your code.\nIf you have very short-term and simplistic requirements for monitoring your website performance data, then the above solution will likely be sufficient for your purposes. But maybe you require data to be stored beyond the default 90-day retention limit or you have a requirement to incorporate the information as part of an existing database or reporting application. This is where the Continuous Export feature becomes really handy, by allowing you to continually stream all data that passes through the service to an Azure Storage Account. From there, you can look at configuring a Stream Analytics Job to parse the data and pass it through to your required location. Microsoft very handily provides two guides on how to get this data into a SQL database and also into a Stream Analytics Dashboard within Power BI.\nWhat I like about Stream Analytics the most is that it utilises a SQL-like language when interacting with data. For recovering T-SQL addicts like myself, this can help overcome a major learning barrier when first getting to grips with the service. I would still highlight some caution, however, and recommend you have the online documentation for the Stream Analytics Query Language to hand, as there a few things that may catch you out. A good example of this is that whilst the language supports data type conversions via CAST operations, exactly like T-SQL, you are restricted to a limited list of data types as part of the conversion.\nSQL developers may also encounter a major barrier when working with custom property data derived from Application Insights. Given the nature of how these are stored, there is specific Stream Analytics syntax that has to be utilised to access individual property values. 
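Before we get to that syntax, one detail worth calling out from the earlier ReferrerTelemetryInitializer example is that, depending on how your project is configured, the class may also need to be registered with the Application Insights SDK before it will be invoked. One way of doing this for a classic ASP.NET MVC application - and this is only a hedged sketch, as the registration can equally be performed within the ApplicationInsights.config file - is to add it during application start-up:

using Microsoft.ApplicationInsights.Extensibility;
using MyNamespace.MyProject.MyProjectName.MVC.Utilities;

namespace MyNamespace.MyProject.MyProjectName.MVC
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // Ensure the custom initializer runs for every telemetry item the SDK sends
            TelemetryConfiguration.Active.TelemetryInitializers.Add(new ReferrerTelemetryInitializer());

            // ...any existing MVC registration calls (routes, bundles etc.) remain here
        }
    }
}

With that in place, the Referrer property will be attached to each request and will therefore surface in both the Analytics query above and the Continuous Export stream.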
We\u0026rsquo;ll take a closer look now at that syntax, continuing our earlier example utilising the Referrer field.\nFirst of all, make sure you have configured an Input to your Request data from within Stream Analytics. The settings should look similar to the image below if done correctly:\nThe full Path pattern will be the name of your Application Insights resource, a GUID value that can be found on the name of the Storage Account container utilised for Continuous Export, and then the path pattern that determines the folder name and the variables for date/time. It should look similar to this:\nmyapplicationinsightsaccount_16ef6b1f59944166bc37198a41c3fcf1/Requests/{date}/{time}\nWith your input configured correctly, you can now navigate to the Query tab and utilise the following query to return the same information that we accessed above within Application Insights:\nSELECT requestflat.ArrayValue.id AS RequestID, r.context.session.id AS SessionID, r.context.[user].anonId AS AnonymousID, GetRecordPropertyValue(GetArrayElement(r.[context].[custom].[dimensions], 5), \u0026#39;Referrer\u0026#39;) AS ReferralURL INTO [RequestsOutput] FROM [Requests] AS r CROSS APPLY GetElements(r.[request]) AS requestflat Now, it\u0026rsquo;s worth noting that the GetArrayElement function is reliant on a position value within the array to return the data correctly. For the example provided above, the position of the Referrer field is always the fifth key/value pair within the array. Therefore, it may be necessary to inspect the values within the context.custom.dimensions object to determine the position of your required field. In the above query, you could add the field r.context.custom.dimensions to your SELECT clause to facilitate any further interrogation.\nApplication Insights, in and of itself, arguably provides feature-rich and expansive capabilities as a standalone web analytics tool. It certainly is a lot more \u0026ldquo;open\u0026rdquo; with respect to how you can access the data - a welcome design decision that is at the heart of a lot of Microsoft products these days. When you start to combine Application Insights with Stream Analytics, a whole world of additional reporting capabilities and long-term analysis can be immediately unlocked for your web applications. Stream Analytics is also incredibly helpful for those who have a much more grounded background working with databases. Using the tool, you can quickly interact with and convert data into the required format using a familiar language. It would be good to perhaps see, as part of some of the examples available online, tutorials on how to straightforwardly access Custom Dimensions properties, so as to make this task simpler to achieve. But this in and of itself does not detract from how impressive I think both tools are - both as standalone products and combined together.\n","date":"2018-03-11T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/accessing-custom-dimension-fields-from-application-insights-via-stream-analytics/","title":"Accessing Custom Dimension Fields from Application Insights via Stream Analytics"},{"content":"This week\u0026rsquo;s blog post is sponsored by ActiveCrypt Software.\nEncryption appears to be a topic of near constant discussion at the moment, spearheaded primarily by the impending deadline of the General Data Protection Regulations (GDPR). These are, in essence, a new set of data protection rules that will apply to all organisations operating within the European Economic Area (EEA).
A key aspect of them concerns implementing appropriate technical controls over sensitive data categories, to mitigate against any damage resulting from a data breach. Now, the key thing to highlight around this is the \u0026ldquo;proportionality\u0026rdquo; aspect; i.e. any technical controls implemented should be reasonably expected, based on the size of the organisation in question and the nature of their data processing/controlling activity. You should, therefore, be carefully evaluating your organisation to identify whether the lack of encryption could result in damage to a data subject.\nI\u0026rsquo;ve had a look previously at database encryption in the context of Dynamics 365 Customer Engagement. What is nice about the application, and nearly all of Microsoft\u0026rsquo;s Software as a Service (SaaS) products at the moment, is that GDPR is very much at the centre of each individual offering. I have been genuinely impressed to see the level of effort Microsoft has been devoting to GDPR and in ensuring their SaaS product lines are compliant with the regulations - often without the need for charging customers an arm and a leg in the process. The same can perhaps not be said for any on-premise equivalent of a particular SaaS product. This is, to be fair, expected - Microsoft has been incredibly vocal about adopting a \u0026ldquo;cloud first\u0026rdquo; strategy in all things. But for organisations who do find themselves having to support on-premise applications or database systems, the journey towards implementing the required technical solutions for encryption could be rocky.\nCase in point - SQL Server has long provided the capability to implement Transparent Database Encryption (TDE), which satisfies the requirement for at rest encryption without the need to redevelop applications from the ground up. Setting up Transparent Database Encryption can be an onerous process (more on this in a second), and requires the involvement of manual scripting. The following script outlines all the steps involved:\n--First, a Master Key should be created on the Server instance USE master; GO CREATE MASTER KEY ENCRYPTION BY PASSWORD = \u0026#39;mymasterkey\u0026#39;; GO --Next, a Certificate for the Server should be created. CREATE CERTIFICATE MyCert WITH SUBJECT = \u0026#39;DEK Certificate for testing purposes\u0026#39;; GO --This then allows for a Database Encryption Key to be created for encrypting a database. This needs to be created for --EVERY database that requires encryption USE EncryptionTest; GO CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ENCRYPTION BY SERVER CERTIFICATE MyCert; GO --Once created, Encryption can then be enabled/disabled using the snippets below ALTER DATABASE MyTestDatabase SET ENCRYPTION ON; GO ALTER DATABASE MyTestDatabase SET ENCRYPTION OFF; GO --The Server Certificate should be backed up for disaster recovery scenarios or to enable databases to be restored to --other SQL Server instances. First, backup the certificate with an encrypted private key... USE master; GO BACKUP CERTIFICATE MyCert TO FILE = \u0026#39;C:\\MyCert.cer\u0026#39; WITH PRIVATE KEY ( FILE = \u0026#39;C:\\MyCert.pvk\u0026#39;, ENCRYPTION BY PASSWORD = \u0026#39;mypassword\u0026#39;); GO --Once saved, execute the following code on the target instance to restore the certificate... 
CREATE CERTIFICATE MyCert FROM FILE =\u0026#39;C:\\MyCert.cer\u0026#39; WITH PRIVATE KEY(FILE=\u0026#39;C:\\MyCert.pvk\u0026#39;, DECRYPTION BY PASSWORD=\u0026#39;mypassword\u0026#39;); Whilst TDE is a neat solution, it does have some issues:\nIt\u0026rsquo;s important to keep in mind any potential disaster recovery scenario, when working with TDE, by backing up the server certificate to a separate physical location. The above script provides the necessary snippet to accomplish this, so it is imperative that this is done for every certificate you plan to work with. All required configuration steps have to be accomplished via scripting and the feature is not enabled by default, unlike Azure SQL Databases. Depending on your level of expertise when working with SQL Server, you may have to leverage assistance from other sources to get up and running with the feature. Perhaps the biggest barrier to adopting TDE is the version restrictions. It is only made available as part of the Developer and Enterprise editions of SQL Server. As the name suggests, the Developer edition is licensed strictly for non-Production environments and the Enterprise edition has a staggering cost, licensed based on the number of cores the target server is running. To put this into better context, I was recently quoted a whopping £68,000 through Microsoft Volume Licensing! For most organisations, this can result in an incredibly high cost of ownership just to satisfy a single requirement. Fortunately, for those who are wanting to implement database encryption via an accessible interface, there are a number of products available on the market to assist. The best one I have come across is DbDefence from ActiveCrypt, which offers a simple to use and efficient means of configuring encryption for your databases. In fact, depending on your database size, you can more than likely have your databases encrypted in less than 5 minutes 🙂 Let\u0026rsquo;s take a closer look at how straightforward the software is to use by encrypting a database from scratch:\nAfter downloading the installation package, you will need to run it on the server where your SQL Server instance resides. During the installation process, the Full installation option can be selected and you will also need to specify the SQL Server instance that you wish to utilise with the software: After the installation completes successfully, launch the application and then connect to your target SQL Server instance. Next, select the database that you want to encrypt. You should see a window similar to the below if done correctly: At this point, you could choose to accept the default Encryption and Protection options and proceed to the next step. However, I would recommend changing the options as follows: Modify the AES Encryption Options value to 256-bit. Whilst the risk of a successful brute force between 128 and 256 bit is effectively zero, 256 still supports longer keys and is, therefore, more secure. In most cases, you just need to ensure data is encrypted at rest and not provide any additional access restrictions beyond this. In these situations, I would recommend setting the required level of protection to Only Encryption. Maximum Transparency. This negates the need for any additional configuration after encryption to ensure your client applications still work successfully. To encrypt the database, a password/key is required. You should always ensure you utilise a random, sequential password that contains upper/lower case letters, numbers and symbols. 
I would also recommend having a separate password for each database you encrypt and to ensure that these are all stored separately (as they may be required to decrypt the databases at a later date). The length of the password to use will depend on the AES encryption mode, but if you are using 256-bit, then an 18-character password is recommended. When you are ready to start the encryption process, press the Encrypt button and confirm the warning box that appears: Give it a few minutes and you will then be able to see in the main window that your database has been encrypted successfully:\nIf you ever have the requirement to decrypt the database, then you can return to the application at any time, connect up to the database, enter the password and then press Decrypt:\nAs a final step, you can then test that your database files have been encrypted successfully by attempting to mount the encrypted database files onto a separate SQL Server instance. You should get an error message similar to the below, indicating that your database has been encrypted successfully: Conclusions or Wot I Think\nThe world of encryption can be a veritable nightmare to those approaching it for the first time, and GDPR can be blamed - but also, I would argue, welcomed - for raising the profile of the topic recently. As with a lot of things concerning GDPR, there is a real opportunity for organisations to get a handle on the personal data they work with every day and to implement the required processes and systems to ensure the right thing is being done when handling sensitive data. Database encryption is one weapon in your arsenal when it comes to satisfying a number of areas within GDPR; but, as we have seen, the total cost of ownership and technical expertise required to implement such a solution could - regrettably - force many to simply look the other way when it comes to securing their databases. DbDefence assists greatly in both these regards - by significantly reducing cost and providing a simplified, easy-to-use interface to deploy database encryption within minutes. What\u0026rsquo;s great as well is that, as part of evaluating the software, I found the support team at ActiveCrypt incredibly reactive and helpful in dealing with the queries I had around the product. If you are looking for a cheaper, yet wholly effective, solution to implement database encryption for SQL Server, then I would not hesitate to recommend the DbDefence product.\n","date":"2018-03-04T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/evaluating-database-encryption-options-for-sql-server/","title":"Evaluating Database Encryption Options for SQL Server"},{"content":"When working with web applications and Azure App Service, it may sometimes be necessary to delete or remove files from a website. Whether it is a deprecated feature or a bit of development \u0026ldquo;junk\u0026rdquo; that was accidentally left on your website, these files can often introduce processing overhead or even security vulnerabilities if left unattended. It is, therefore, good practice to ensure that these are removed during a deployment. Fortunately, this is where tools such as Web Deploy really come into their element.
When using this from within Visual Studio via the Publish dialog box, you can specify to remove any file that does not exist within your Project via the File Publish Options section on the Settings tab:\nThere is also the Exclude files from the App_Data folder setting, which has a bearing on how the above operates, but we\u0026rsquo;ll come back to that shortly\u0026hellip;\nWhilst this feature is useful if you are deploying out to a dev/test environment manually from Visual Studio, it is less so if you have implemented an automated release strategy via a tool such as Visual Studio Team Services (the cloud version of Team Foundation Server). This is where all steps as part of a deployment are programmatically defined and then re-used whenever a new release to an environment needs to be performed. Typically, this may involve some coding as part of a PowerShell script or similar, but what makes Team Services really effective is the ability to \u0026ldquo;drag and drop\u0026rdquo; common deployment actions that cover most release scenarios. Case in point - a specific task is provided out of the box to handle deployments to Azure App Service:\nWhat\u0026rsquo;s even better is the fact that we have the same option at our disposal à la Visual Studio - although you would be forgiven for overlooking it given how neatly the settings are tucked away 🙂\nNote that you have to specifically enable the option to Publish using Web Deploy for the Remove additional files at destination option to appear. It\u0026rsquo;s important, therefore, that you fully understand how Web Deploy works in comparison to other options, such as FTP deploy. I think you will find, though, that the list of benefits far outweighs any negatives. In fact, the only drawback of using this option is that you must be using Microsoft \u0026ldquo;approved\u0026rdquo; tools, such as Visual Studio, to facilitate this.\nWe saw earlier in this post the option for Exclude files from the App_Data folder setting. Typically, you may be using this folder as some form of local file store for configuration data or similar. It is also the location where any WebJobs configured for your website would be stored. Without the Exclude files from the App_Data folder setting enabled, you may assume that Web Deploy will indiscriminately delete all files residing within the App_Data directory. Luckily, the automated task instead throws an error if it detects any files within the directory that may be affected by the delete operation:\nHelpful in the sense that the task is not deleting files which could be potentially important, frustrating in that the deployment will not complete successfully! As you may have already guessed, enabling the Exclude files from the App_Data folder setting in Visual Studio/on the Team Services task gets around this issue:\nYou can then sit back and verify as part of the Team Services Logs that any file not in your source project is deleted successfully during deployment:\nManual deployments to Production systems can be fraught with countless hidden dangers - the potential for an accidental action chief among them, but also the risk of outdated components of a system not being flagged up and removed as part of a release cycle. Automating deployments goes a long way in taking human error out of this equation and, with the inclusion of this handy feature to remove files not within your source code during the deployment, also negates the need for any manual intervention after a deployment to rectify any potential issues. 
If you are still toying with introducing fully automated deployments within your environment, then I would wholeheartedly urge you to commit the effort towards achieving this outcome. Get in touch if you need any help with this, and I would be happy to lend some assistance 🙂\n","date":"2018-02-25T00:00:00Z","image":"/images/VisualStudio-FI.jpg","permalink":"/removing-deleted-files-during-visual-studio-team-services-azure-app-service-deploy-task/","title":"Removing Deleted Files during Visual Studio Team Services Azure App Service Deploy Task"},{"content":"Slight change of pace with this week\u0026rsquo;s blog post, which will be a fairly condensed and self-indulgent affair - due to personal circumstances, I have been waylaid somewhat when it comes to producing content for the blog and I have also been unable to make any further progress with my new YouTube video series. Hoping that normal service will resume shortly, meaning additional videos and more content-rich blog posts, so stay tuned.\nI\u0026rsquo;ve been running the CRM Chap blog for just over 2 years now. Over this time, I have been humbled and proud to have received numerous visitors to the site, some of whom have been kind enough to provide feedback or to share some of their Dynamics CRM/365 predicaments with me. Having reached such a landmark now seems to be as good a time as any to take a look back on the posts that have received the most attention and to, potentially, give those who missed them the opportunity to read them. In descending order, here is the list of the most viewed posts to date on the crmchap.co.uk website:\nUtilising SQL Server Stored Procedures with Power BI Installing Dynamics CRM 2016 SP1 On-Premise Power BI Deep Dive: Using the Web API to Query Dynamics CRM/365 for Enterprise Utilising Pre/Post Entity Images in a Dynamics CRM Plugin Modifying System/Custom Views FetchXML Query in Dynamics CRM Grant Send on Behalf Permissions for Shared Mailbox (Exchange Online) Getting Started with Portal Theming (ADXStudio/CRM Portals) Microsoft Dynamics 365 Data Export Service Review What\u0026rsquo;s New in the Dynamics 365 Developer Toolkit Implementing Tracing in your CRM Plug-ins I suppose it is a testament to the blog\u0026rsquo;s stated purpose that posts covering areas not exclusive to Dynamics CRM/365 rank so highly on the list and, indeed, demonstrate how deeply this application is intertwined with other technology areas within the Microsoft \u0026ldquo;stack\u0026rdquo;.\nTo all new and long-standing followers of the blog, thank you for your continued support and appreciation for the content 🙂\n","date":"2018-02-18T00:00:00Z","image":"/images/JJG-Caricature-250px.jpg","permalink":"/top-10-most-viewed-crm-chap-blog-posts/","title":"Top 10 Most Viewed CRM Chap Blog Posts"},{"content":"As part of developing Dynamics CRM/Dynamics 365 Customer Engagement (CRM/D365CE) plug-ins day in, day out, you can often forget about the Execution Mode setting. This can be evidenced by the fact that I make no mention of it in my recent tutorial video on plug-in development. In a nutshell, this setting enables you to customise whether your plug-in executes in Synchronous or Asynchronous mode. Now, you may be asking - just what the hell does that mean?!? The best way of understanding is by rephrasing the terminology; it basically tells the system when you want your code to be executed. 
Synchronous plug-ins execute all of your business logic whilst the record is being saved by the user, with this action not being considered complete and committed to the backend database until the plug-in completes. By comparison, Asynchronous plug-ins are queued for execution after the record has been saved. A System Job record is created and queued alongside other jobs in the system via the Asynchronous Service. Another way of remembering the difference between each one is to think back to the options available to you as part of a Workflow. They can either be executed in real time (synchronously) or in the background (asynchronously). Plug-ins are no different and give you the flexibility to ensure your business logic is applied immediately or, if especially complex, queued so that the system has sufficient time to process in the background.\nI came across a strange issue with an arguably even stranger Synchronous plug-in the other day, which started failing after taking an inordinately long time saving the record:\nUnexpected exception from plug-in (Execute): MyPlugin.MyPluginClass: System.AggregateException: One or more errors occurred.\nThe \u0026ldquo;strange\u0026rdquo; plug-in was designed so that, on the Create action of an Entity record, it goes out and creates various related records within the application, based on a set of conditions. We originally had issues with the plug-in a few months back erroring, due to the execution time exceeding the 2 minute limit for sandbox plug-ins. A rather ingenious and much more accomplished developer colleague got around the issue by implementing a degree of asynchronous processing within the plug-in, achieved like so:\nawait Task.Factory.StartNew(() =\u0026gt; { lock (service) { Stopwatch stopwatch = Stopwatch.StartNew(); Guid record = service.Create(newRecord); tracing.Trace(\u0026#34;Record with ID \u0026#34; + record.ToString() + \u0026#34; created successfully after: {0}ms.\u0026#34;, stopwatch.ElapsedMilliseconds); } }); I still don\u0026rsquo;t fully understand just exactly what this is doing, but I put this down to my novice level C# knowledge 🙂 The important thing was that the code worked\u0026hellip;until some additional processing was added to the plug-in, leading to the error message above.\nAt this juncture, our only choice was to look at forcing the plug-in to execute in Asynchronous mode by modifying the appropriate setting on the plug-in step within the Plugin Registration Tool:\nAfter making this change and attempting to create the record again in the application, everything worked as expected. However, this did create a new problem for us to overcome - end users of the application were previously used to seeing the related records created by the plug-in within sub-grids on the Primary Entity form, which would then be accessed and worked through accordingly. As the very act of creating these records now took place within the background and took some time to complete, we needed to display an informative message to the user to advise them to refresh the form after a few minutes. You do have the ability within plug-ins to display a custom message back to the user, but this is only in situations where you are throwing an error message and it didn\u0026rsquo;t seem to be a particularly nice solution for this scenario.\nIn the end, the best way of achieving this requirement was to implement a JScript function on the form. 
This would trigger whenever the form is saved and displays a message box that the user has to click OK on before the save action is carried out:\nfunction displaySaveMessage(context) { var eventArgs = context.getEventArgs(); var saveMode = eventArgs.getSaveMode(); if (saveMode == 70 || saveMode == 2 || saveMode == 1 || saveMode == 59) { var message = \u0026#34;Records will be populated in the background and you will need to refresh the form after a few minutes to see them on the Sub-Grid. Press OK to save the record.\u0026#34;; Xrm.Utility.alertDialog(message, function () { Xrm.Page.data.save().then(function () { Xrm.Page.data.refresh(); }) }); } } By feeding through the execution context parameter, you are able to determine the type of save action that the alert will trigger on; in this case, Save, Save \u0026amp; Close, Save \u0026amp; New and Autosave. Just make sure you configure your script with the correct properties on the form, which are:\nUsing the OnSave event handler With the Pass execution context as first parameter setting enabled From the end-user\u0026rsquo;s perspective, they will see something similar to the below when the record is saved:\nIt\u0026rsquo;s a pity that we don\u0026rsquo;t have a similar kind of functionality exposed via Business Rules that enables us to display OnSave alerts that are more in keeping with the application\u0026rsquo;s look and feel. Nevertheless, the versatility of utilising JScript functions should be evident here and can often achieve these types of bespoke actions with a few lines of code.\nWhen it comes to plug-in development, understanding the impact and processing time that your code has within the application is important for two reasons - first, in ensuring that end users are not frustrated by long loading times and, secondly, in informing the choice of Execution Mode when it comes to deploying out a plug-in. Whilst Asynchronous plug-ins can help to mitigate any user woes and present a natural choice when working with bulk operations within the application, make sure you fully understand the impact that these have on the Asynchronous Service and avoid a scenario where the System Job entity is queued with more jobs than it can handle.\n","date":"2018-02-11T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/system-aggregateexception-one-or-more-errors-occured-dynamics-crm-dynamics-365-customer-engagement-plug-in/","title":"System.AggregateException: One or more errors occured (Dynamics CRM/Dynamics 365 Customer Engagement Plug-in)"},{"content":"If you are looking at automating the execution of SQL Server Integration Services .dtsx packages, then there are a few options at your disposal. The recommended and most streamlined route is to utilise the SSISDB catalog and deploy your packages to the catalog from within Visual Studio. This gives you additional flexibility, when working with SQL Server 2016 or greater, on whether to deploy out single or multiple packages together. An alternative approach is to deploy your packages to the file system and then configure an Agent Job on SQL Server to execute the job based on a schedule and with runtime settings specified. 
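If you would rather script this second approach than click through SQL Server Management Studio, a minimal sketch along the following lines should do the trick - note that the job name, package path and schedule below are purely illustrative and would need adjusting for your environment:\nEXEC msdb.dbo.sp_add_job @job_name = N\u0026#39;Nightly DTSX Execution\u0026#39;; /* Add a step that runs the package from the file system via the SSIS subsystem */ EXEC msdb.dbo.sp_add_jobstep @job_name = N\u0026#39;Nightly DTSX Execution\u0026#39;, @step_name = N\u0026#39;Run package\u0026#39;, @subsystem = N\u0026#39;SSIS\u0026#39;, @command = N\u0026#39;/FILE \u0026#34;C:\\Packages\\MyPackage.dtsx\u0026#34;\u0026#39;; /* Target the local server and attach a simple daily schedule */ EXEC msdb.dbo.sp_add_jobserver @job_name = N\u0026#39;Nightly DTSX Execution\u0026#39;; EXEC msdb.dbo.sp_add_jobschedule @job_name = N\u0026#39;Nightly DTSX Execution\u0026#39;, @name = N\u0026#39;Daily at 2 AM\u0026#39;, @freq_type = 4, @freq_interval = 1, @active_start_time = 020000; The equivalent configuration is also available through the Agent Job Step dialog in SQL Server Management Studio if you prefer to avoid scripting. 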
This is as simple as selecting the appropriate dropdown option on the Agent Job Step configuration screen and setting the Package source value to File system:\nDeploying out in this manner is useful if you are restricted from setting up the SSISDB catalog on your target SQL Server instance or if you are attempting to deploy packages to a separate Active Directory domain (I have encountered this problem previously, much to my chagrin). You also have the benefit of utilising other features available via the SQL Server Agent, such as e-mail notifications on fail/success of a job or conditional processing logic for job step(s). The in-built scheduling tool is also pretty powerful, enabling you to fine tune your automated package execution to any itinerary you could possibly conjure up.\nI encountered a strange issue recently with a DTSX package configured via the SQL Agent. Quite randomly, the package suddenly started failing each time it was scheduled for execution, with the following error generated in the log files:\nFailed to decrypt an encrypted XML node because the password was not specified or not correct. Package load will attempt to continue without the encrypted information.\nThe issue was a bit of a head-scratcher, with myself and a colleague trying the following steps in an attempt to fix the issue:\nForcing the package to execute manually generated the same error - this one was a bit of a longshot but worth trying anyway 🙂 When executing the package from within Visual Studio, no error was encountered and the package executed successfully. After replacing the package on the target server with the package just executed on Visual Studio (same version) and manually executing it, once again the same error was thrown. In the end, the issue was resolved by deleting the Agent Job and creating it again from scratch. Now, if you are diagnosing the same issue and are looking to perform these same steps, it may be best to use the Script Job as option within SQL Server Management Studio to save yourself from any potential headache when re-building your Job\u0026rsquo;s profile:\nThen, for good measure, perform a test execution of the Job via the Start Job at Step\u0026hellip; option to verify everything works.\nI am still stumped at just what exactly went wrong here, but it is good to know that an adapted version of the ancient IT advice of yore can be referred back to\u0026hellip;\n","date":"2018-02-04T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/resolving-failed-to-decrypt-an-encrypted-xml-node-because-the-password-was-not-specified-or-not-correct-error-ssis/","title":"Resolving \"Failed to decrypt an encrypted XML node because the password was not specified or not correct\" Error (SSIS)"},{"content":"This is an accompanying blog post to my YouTube video Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Plug-in, the second in a series aiming to provide tutorials on how to accomplish developer focused tasks within Dynamics 365 Customer Engagement. 
You can watch the video in full below:\nBelow you will find links to access some of the resources discussed as part of the video and to further reading topics:\nFull Code Sample [snippet id=\u0026ldquo;390\u0026rdquo;]\nDownload/Resource Links Visual Studio 2017 Community Edition\nSetup a free 30 day trial of Dynamics 365 Customer Engagement\nC# Guide (Microsoft Docs)\nSource Code Management Solutions\nVisual Studio Team Services - Free for up to 5 users and my recommended choice when working with Dynamics 365 Customer Engagement BitBucket GitHub Further Reading MSDN - Plug-in development\nMSDN - Supported messages and entities for plug-ins\nMSDN - Sample: Create a basic plug-in\nMSDN - Debug a plug-in\nI\u0026rsquo;ve written a number of blog posts around plug-ins previously, so here\u0026rsquo;s the obligatory plug section 🙂 :\nWhy CRM Developers Should Use Business Rules More - The post talks more about Business Rules in the context of JScript, but echoes some of the points made in the video in respect to considering plug-ins as a \u0026ldquo;last resort\u0026rdquo; solution. What is Unsecure/Secure Configuration on a Dynamics CRM/365 for Enterprise Plugin? - Unsecure/Secure Configurations are, arguably, one of those features that people tend to forget about with plug-ins; this post walks through how to implement them within your code. Utilising Pre/Post Entity Images in a Dynamics CRM Plugin - One of the best features at your disposal with plug-ins, Entity Images allow you to take snapshots of record data before and after your plug-in executes, and then access this data during runtime. What\u0026rsquo;s the best way of learning CRM Development? - In this post, I talk about some of the things that you can do to help speed your journey towards Dynamics 365 Customer Engagement developer extraordinaire 🙂 Interested in learning more about JScript Form function development in Dynamics 365 Customer Engagement? Then check out my previous post for my video and notes on the subject. I hope you find these videos useful and do let me know if you have any comments or suggestions for future video content.\n","date":"2018-01-28T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dynamics-365-customer-engagement-deep-dive-creating-a-basic-plug-in/","title":"Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Plug-in"},{"content":"When working with Virtual Machines (VMs) on Azure that have been deployed using a Linux Operating System (Ubuntu, Debian, Red Hat etc.), you would be forgiven for assuming that the experience would be rocky when compared with working with Windows VMs. Far from it - you can expect an equivalent level of support for features such as full disk encryption, backup and administrator credentials management directly within the Azure portal, to name but a few. Granted, there may be some difference in the availability of certain extensions when compared with a Windows VM, but the experience on balance is comparable and makes the management of Linux VMs a less onerous journey if your background is firmly rooted in Windows operating systems.\nTo ensure that the Azure platform can effectively do some of the tasks listed above, the Azure Linux Agent is deployed to all newly created VMs. Written in Python and supporting a wide variety of common Linux distributions, the Agent is something I would recommend never removing from your Linux VM, no matter how tempting this may be. 
Removing it could have potentially debilitating consequences within Production environments. A good example of this would be if you ever needed to add an additional disk to your VM. This task would become impossible to achieve without the Agent present on your VM. As well as having the proper reverence for the Agent, it\u0026rsquo;s important to keep an eye on how the service is performing, even more so if you are using Recovery Services vaults to take snapshots of your VM and back them up to another location. Otherwise, you may start to see errors like the one below being generated whenever a backup job attempts to complete:\nIt\u0026rsquo;s highly possible that many administrators are seeing this error at the time of writing this post (January 2018). The recent disclosures around Intel CPU vulnerabilities have prompted many cloud vendors, including Microsoft, to roll out emergency patches across their entire data centres to address the underlying vulnerability. Whilst it is commendable that cloud vendors have acted promptly to address this flaw, the pace of the work and the dizzying array of resources affected has doubtless led to some issues that could not have been foreseen from the outset. I believe, based on some of the available evidence (more on this later), that one of the unintended consequences of this work is the rise of the above error message with the Azure Linux Agent.\nWhen attempting to deal with this error myself, there were a few different steps I had to try before the backup started working correctly. I have collected all of these below and, if you find yourself in the same boat, one or all of them should resolve the issue for you.\nFirst, ensure that you are on the latest version of the Agent This one probably goes without saying as it\u0026rsquo;s generally the most common answer to any error. 😁 However, the steps for deploying updates out onto a Linux machine can be more complex, particularly if your primary method of accessing the machine is via an SSH terminal. The process for updating your Agent depends on your Linux version - this article provides instructions for the most common variants. In addition, it is highly recommended that the auto-update feature is enabled, as described in the article.\nVerify that the Agent service starts successfully. It\u0026rsquo;s possible that the service is not running on your Linux VM. The service can be started by executing the following command:\nservice walinuxagent start Be sure to have your administrator credentials handy, as you will need to authenticate to successfully start the service. You can then check that the service is running by executing the ps -e command.\nClear out the Agent cache files The Agent stores a lot of XML cache files within the /var/lib/waagent/ folder, which can sometimes cause issues with the Agent. Microsoft has specifically recommended that the following command is executed on your machine after the 4th January if you are experiencing issues:\nsudo rm -f /var/lib/waagent/*.[0-9]*.xml The command will delete all \u0026ldquo;old\u0026rdquo; XML files within the above folder. A restart of the service should not be required. 
The date mentioned above links back to the theory suggested earlier in this post that the Intel chipset issues and this error message are linked in some way, as the dates seem to tie in with when news first broke regarding the vulnerability.\nIf you are still having problems\u0026hellip; Read through the entirety of this article, and try all of the steps suggested - including digging around in the log files, if required. If all else fails, open a support request directly with Microsoft.\nHopefully, by following the above steps, your backups are now working again without issue 🙂\n","date":"2018-01-21T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/resolving-vm-agent-is-unable-to-communicate-with-the-azure-backup-service-error-linux/","title":"Resolving 'VM Agent is unable to communicate with the Azure Backup Service' Error (Linux)"},{"content":"This is an accompanying blog post to my YouTube video Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Jscript Form Function, the first in a series that aims to provide tutorials on how to accomplish developer focused tasks within Dynamics 365 Customer Engagement. You can watch the video in full below:\nBelow you will find links to access some of the resources discussed as part of the video and to further reading topics.\nFull Code Sample [snippet id=\u0026ldquo;381\u0026rdquo;]\nDownload/Resource Links Visual Studio 2017 Community Edition\nSetup a free 30 day trial of Dynamics 365 Customer Engagement\nW3 Schools JavaScript Tutorials\nSource Code Management Solutions\nVisual Studio Team Services - Free for up to 5 users and my recommended choice when working with Dynamics 365 Customer Engagement BitBucket GitHub Further Reading MSDN - Use JavaScript with Microsoft Dynamics 365\nMSDN - Use the Xrm.Page. object model\nMSDN - Xrm.Page.ui control object\nMSDN - Overview of Web Resources\nDebugging custom JavaScript code in CRM using browser developer tools (steps are for Dynamics CRM 2016, but still apply for Dynamics 365 Customer Engagement)\nHave any thoughts or comments on the video? I would love to hear from you! I\u0026rsquo;m also keen to hear any ideas for future video content as well. Let me know by leaving a comment below or in the video above.\n","date":"2018-01-14T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dynamics-365-customer-engagement-deep-dive-creating-a-basic-jscript-form-function/","title":"Dynamics 365 Customer Engagement Deep Dive: Creating a Basic Jscript Form Function"},{"content":"With 2018 now very firmly upon us, it\u0026rsquo;s time again to see what\u0026rsquo;s new in the world of Dynamics 365 certification. Nothing much has changed this time around, but there are a few noteworthy updates to be aware of if you are keen to keep your certifications as up to date as possible. Here\u0026rsquo;s my round-up of what\u0026rsquo;s new - let me know in the comments if you think I have missed anything out.\nMCSE Business Applications 2018 The introduction of a dedicated Microsoft Certified Solutions Associate (MCSA) and Microsoft Certified Solutions Expert (MCSE): Business Applications certification track for Dynamics 365 was a positive step in highlighting the importance of Dynamics 365 alongside other core Microsoft products. Under the new system, those wishing to maintain a \u0026ldquo;good standing\u0026rdquo; MCSE will need to re-certify each year with a brand new exam to keep their certification current for the year ahead. 
Those who obtained their MCSE last year will now notice on their certificate planner the opportunity to attain the 2018 version of the competency via the passing of a single exam. For Dynamics 365 Customer Engagement focused professionals, assuming you only passed either the Sales or Customer Service exam last year, passing the other exam should be all that is required to recertify - unless you fancy your chances trying some of the new exams described below.\nNew MCSE Exams Regardless of what boat you are in relating to the Business Applications MCSE, those looking to obtain the 2018 variant of the certification can expect to see two additional exams available that will count towards the necessary award requirements:\nMB6-897 - Microsoft Dynamics 365 for Retail MB2-877 - Microsoft Dynamics 365 for Field Service The exams above currently only appear on the US Microsoft Learning website but expect them to appear globally within the next few weeks/months.\nMB2-877 represents an interesting landmark in Microsoft\u0026rsquo;s journey towards integrating FieldOne as part of Dynamics CRM/Dynamics 365 Customer Engagement, with it arguably indicating the ultimate fruition of this journey. To pass the exam, you are going to have to have a good grasp of the various entities involved as part of the field service app, as well as a thorough understanding of the mobile application itself. As is typically the case when it comes to Dynamics 365 certification, the Dynamics Learning Portal (DLP) is going to be your destination for preparatory learning resources for the exam; along with a good play around with the application itself within a testing environment. If you have access to the DLP, it is highly recommended you complete the following courses at your own pace before attempting the exam:\n81197AE: Introduction to Microsoft Dynamics 365 for Field Service 81218AE: Mobile and Dispatch in Microsoft Dynamics 365 for Field Service 81219AE: Setup and Configuration in Microsoft Dynamics 365 for Field Service Dynamics 365 for Retail is a fairly new addition to the \u0026ldquo;family\u0026rdquo; and one which - admittedly - I have very little experience with. Its rather speedy promotion to exam level status I, therefore, find somewhat surprising. This is emphasised further by the fact that there are no dedicated exams for other, similar applications to Field Service, such as Portals. Similar to MB2-877, preparation for MB6-897 will need to be directed primarily through DLP, with the following recommended courses for preparation:\n81208AE: Channel Management and Corporate Operations in Microsoft Dynamics 365 for Retail 81209AE: Merchandising and Inventory Management in Microsoft Dynamics 365 for Retail 81210AE: Point of Sale in Microsoft Dynamics 365 for Retail 81211AE: Call Centers in Microsoft Dynamics 365 for Retail Exam Preparation Tips I\u0026rsquo;ve done several blog posts in the past where I discuss Dynamics CRM/Dynamics 365 exams, offering help to those who may be looking to achieve a passing grade. Rather than repeat myself (or risk breaking any non-disclosure agreements), I\u0026rsquo;d invite you to cast your eyes over these and (I hope!) that they prove useful in some way as part of your preparation:\nEvaluating the new Dynamics 365 for Enterprise MCSA and MCSE What\u0026rsquo;s New in the Dynamics 365 for Enterprise Specialist Exams Dynamics CRM 2016 Exams If you have got an exam scheduled in, then good luck and make sure you study! 
🙂\n","date":"2018-01-07T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dynamics-365-exam-updates-2018/","title":"Dynamics 365 Exam Updates (2018)"},{"content":"Did you know that you can write Plug-ins for Dynamics 365 Customer Engagement/Dynamics CRM (D365CE/CRM) using Visual Basic .NET (VB.NET)? You wouldn\u0026rsquo;t have thought so after a thorough look through the D365CE/CRM Software Development Kit (SDK). Whilst there is a plethora of code examples available for C# plug-ins, no examples are provided on how to write a basic plug-in for the application using VB.NET. This is to be expected, perhaps due to the common status that the language has when compared with C#. Whilst VB.NET knowledge is a great asset to have when extending Office applications via Visual Basic for Applications, you would struggle to find many at scale application systems that are written using VB.NET. C# is pretty much the de facto language that you need to utilise when developing in .NET, and the commonly held view is that exclusive VB.NET experience is a detriment as opposed to an asset. With this in mind, it is somewhat understandable why the SDK does not have any in-depth VB.NET code examples.\nAccepting the above, it is likely however that many long-standing developers will have knowledge of the BASIC language, thereby making VB.NET a natural choice when attempting to extend D365CE/CRM. I do not have extensive experience using the language, but I was curious to see how difficult it would be to implement a plug-in using it - and to hopefully provide assistance to any lonely travellers out there who want to put their VB.NET expertise to the test. The best way to demonstrate this is to take an existing plug-in developed in C# and reverse engineer the code into VB.NET. We took a look at a fully implemented plug-in previously on the blog, that can be used to retrieve the name of the User who has created a Lead record. The entire class file for this is reproduced below:\nusing System; using Microsoft.Xrm.Sdk; using Microsoft.Xrm.Sdk.Query; namespace D365.BlogDemoAssets.Plugins { public class PostLeadCreate_GetInitiatingUserExample : IPlugin { public void Execute(IServiceProvider serviceProvider) { // Obtain the execution context from the service provider. IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); // Obtain the organization service reference. IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)); IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId); // The InputParameters collection contains all the data passed in the message request. if (context.InputParameters.Contains(\u0026#34;Target\u0026#34;) \u0026amp;\u0026amp; context.InputParameters[\u0026#34;Target\u0026#34;] is Entity) { Entity lead = (Entity)context.InputParameters[\u0026#34;Target\u0026#34;]; //Use the Context to obtain the Guid of the user who triggered the plugin - this is the only piece of information exposed. Guid user = context.InitiatingUserId; //Then, use GetUserDisplayCustom method to retrieve the fullname attribute value for the record. 
string displayName = GetUserDisplayName(user, service); //Build out the note record with the required field values: Title, Regarding and Description field Entity note = new Entity(\u0026#34;annotation\u0026#34;); note[\u0026#34;subject\u0026#34;] = \u0026#34;Test Note\u0026#34;; note[\u0026#34;objectid\u0026#34;] = new EntityReference(\u0026#34;lead\u0026#34;, lead.Id); note[\u0026#34;notetext\u0026#34;] = @\u0026#34;This is a test note populated with the name of the user who triggered the Post Create plugin on the Lead entity:\u0026#34; + Environment.NewLine + Environment.NewLine + \u0026#34;Executing User: \u0026#34; + displayName; //Finally, create the record using the IOrganizationService reference service.Create(note); } } } } The code above encapsulates a number of common operations that a plug-in can seek to accomplish - creating a record, obtaining context-specific values and performing a Retrieve operation against an entity - thereby making it a good example for what will follow in this post.\nWith everything ready to go, it\u0026rsquo;s time for less talking and more coding 🙂 We\u0026rsquo;ll build out a VB.NET version of the above class file, covering some of the \u0026ldquo;gotchas\u0026rdquo; to expect on each step, before then bringing all of the code together in a finished state.\nImporting References As you would expect within C#, a class file requires references to the D365CE SDK DLL files. These should be imported into your project and then added to your class file using the Imports statement:\nWith these two lines of code, there are immediately two things which you may need to untrain yourself from doing if you have come from a C# background:\nMake sure not to add your semi-colons at the end of each line, as they are not required in VB.NET You may be tempted to use the Return key to auto-complete your syntax, which works fine in a C# project\u0026hellip;but will instead skip you down to the next line in VB.NET. Instead, use the Tab key to autocomplete any IntelliSense prompts. Adding a Namespace By default, a VB.NET class project does not implement a Namespace for your class. This will need to be added next, underneath the project references like so:\nImplementing the IPlugin Interface So far so good\u0026hellip;and things continue in the same vein when implementing the IPlugin interface. This is configured like so:\nThe only thing to remember here, if you are still in C# mode, is that your colon is replaced with the Implements statement and that this part of the code needs to be moved to the next line.\nPutting together the Execute Method The Execute method is the heart and soul of any plug-in, as this contains the code that will execute when the plug-in is triggered. In C#, this is implemented using a void method (i.e. a block of code that does not return a specific value, object etc.). Its equivalent within VB.NET is a Sub - short for \u0026ldquo;Subroutine\u0026rdquo; - which needs to be additionally peppered with an Implements statement to the IPlugin.Execute sub:\nImplementing Variables Here\u0026rsquo;s where things start to get different. 
Variables within C# are generally implemented using the following syntax:\n\u0026lt;Type\u0026gt; \u0026lt;Variable Name\u0026gt; = \u0026lt;Variable Value\u0026gt;;\nSo to declare a string object called text, the following code should be used:\nstring text = \u0026#34;This is my string text\u0026#34;; Variables in VB.NET, by contrast, are always declared using the Dim keyword, with the name and then the Type of the variable declared afterwards. Finally, a value can then be (optionally) provided. A complete example of this can be seen below in the implementing of the IPluginExecutionContext interface:\nIn the above example, we also see two additional differences that C# developers have to reconcile themselves with:\nThere is no need to specifically cast the value as an IPluginExecutionContext object - a definite improvement over C# 😁 Rather than using the typeof operator when obtaining the service, the VB.NET equivalent GetType should be used instead. The process of creating variables can seem somewhat laborious when compared with C#, and there are a few other things to bear in mind with variables and this plug-in specifically. These will be covered shortly.\nThe If\u0026hellip;Then Statement Pretty much every programming language has an implementation of an If\u0026hellip;Else construct to perform decisions based on conditions (answers in the comments if you have found a language that doesn\u0026rsquo;t!). VB.NET is no different, and we can see how this is implemented in the next bit of the plug-in code:\nCompared with C#, you have to specifically remember to add a Then statement after your conditional test and also to include an End If at the end of your code block. It\u0026rsquo;s also important to highlight the use of different operators as well - in this case, And should be used as opposed to \u0026amp;\u0026amp;.\nAssigning Values to a CRM Entity Object The assignment of entity attribute values differs only slightly compared with C# - you just need to ensure that you surround your attribute Logical Name value with round brackets as opposed to square brackets:\nString concatenation also works slightly differently. Be sure to use \u0026amp; as opposed to + to achieve the same purpose. For new line breaks, there is also an equivalent VB.NET snippet that can be used for this, vbCrLf.\nObtaining the User\u0026rsquo;s Display Name Value The final part of the class file is the retrieval of the Full Name value of the user. This has to be done via a Function as opposed to a Sub, as a specific value needs to be returned. Keep in mind the following as well:\nParameters that are fed to the Function must always be prefaced with the ByVal statement - again, another somewhat tedious thing to remember! Note that for the GetAttributeValue method, we specify the attribute data type using the syntax (Of String) as opposed to \u0026lt;string\u0026gt;. Other than that, syntax-wise, C# experienced developers should have little trouble re-coding this method into VB.NET. 
This is evidenced by the fact that the below code snippet is approximately 75% similar to how it needs to be in C#:\nBringing it all together Having gone through the class file from start to bottom, the entire code for the plug-in is reproduced below:\nImports Microsoft.Xrm.Sdk Imports Microsoft.Xrm.Sdk.Query Namespace D365.BlogDemoAssets.VB Public Class PostLeadCreate_GetInitiatingUserExample Implements IPlugin Private Sub IPlugin_Execute(serviceProvider As IServiceProvider) Implements IPlugin.Execute \u0026#39;Obtain the execution context from the service provider. Dim context As IPluginExecutionContext = serviceProvider.GetService(GetType(IPluginExecutionContext)) \u0026#39;Obtain the organization service reference. Dim serviceFactory As IOrganizationServiceFactory = serviceProvider.GetService(GetType(IOrganizationServiceFactory)) Dim service As IOrganizationService = serviceFactory.CreateOrganizationService(context.UserId) \u0026#39;The InputParameters collection contains all the data passed in the message request. If (context.InputParameters.Contains(\u0026#34;Target\u0026#34;) And TypeOf context.InputParameters(\u0026#34;Target\u0026#34;) Is Entity) Then Dim lead As Entity = context.InputParameters(\u0026#34;Target\u0026#34;) \u0026#39;Use the Context to obtain the Guid of the user who triggered the plugin - this is the only piece of information exposed. Dim user As Guid = context.InitiatingUserId \u0026#39;Then, use GetUserDisplayCustom method to retrieve the fullname attribute value for the record. Dim displayName As String = GetUserDisplayName(user, service) \u0026#39;Build out the note record with the required field values: Title, Regarding and Description field Dim note As Entity = New Entity(\u0026#34;annotation\u0026#34;) note(\u0026#34;subject\u0026#34;) = \u0026#34;Test Note\u0026#34; note(\u0026#34;objectid\u0026#34;) = New EntityReference(\u0026#34;lead\u0026#34;, lead.Id) note(\u0026#34;notetext\u0026#34;) = \u0026#34;This is a test note populated with the name of the user who triggered the Post Create plugin on the Lead entity:\u0026#34; \u0026amp; vbCrLf \u0026amp; vbCrLf \u0026amp; \u0026#34;Executing User: \u0026#34; \u0026amp; displayName \u0026#39;Finally, create the record using the IOrganizationService reference service.Create(note) End If End Sub Private Function GetUserDisplayName(ByVal userID As Guid, ByVal service As IOrganizationService) As String Dim user As Entity = service.Retrieve(\u0026#34;systemuser\u0026#34;, userID, New ColumnSet(\u0026#34;fullname\u0026#34;)) Return user.GetAttributeValue(Of String)(\u0026#34;fullname\u0026#34;) End Function End Class End Namespace Conclusions or Wot I Think C# is arguably the de facto choice when programming using D365CE/CRM, and more generally as well for .NET. All of the code examples, both within the SDK and online, will favour C# and I do very much hold the view that development in C# should always be preferred over VB.NET. Is there ever then a good business case for developing in VB.NET over C#? Clearly, if you have a developer available within your business who can do amazing things in VB.NET, it makes sense for this to be the language of choice to use for time-saving purposes. There may also be a case for developing in VB.NET from a proprietary standpoint. VB.NET is, as acknowledged, not as widely disseminated compared with C#. 
By developing custom plug-ins in VB.NET that contain sensitive business information, you are arguably safeguarding the business by utilising a language that C# developers may have difficulty in initially deciphering.\nAll that being said, C# should be preferred when developing plug-ins, custom workflow assemblies or custom applications involving D365CE/CRM. Where some improvement could be made is in ensuring that all supported programming languages are adequately provisioned for within the D365CE SDK moving forward. Because, let\u0026rsquo;s be honest - there is no point in supporting something if people have no idea how to use it in the first place.\n","date":"2017-12-31T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/creating-a-dynamics-365-customer-engagement-plug-in-using-vb-net/","title":"Creating a Dynamics 365 Customer Engagement Plug-in Using VB.NET"},{"content":"Working in-depth amidst the Sales entities (e.g. Product, Price List, Quote etc.) within Dynamics CRM/Dynamics 365 Customer Engagement (CRM/D365CE) can produce some unexpected complications. What you may think is simple to achieve at the outset, based on how other entities work within the system, often leads you in a completely different direction. A good rule of thumb is that any overly complex customisations to these entities will mean having to get down and dirty with C#, VB.Net or even JScript. For example, we\u0026rsquo;ve seen previously on the blog how, with a bit of developer expertise, it is possible to overhaul the entire pricing engine within the application to satisfy specific business requirements. There is no way in which this can be modified directly through the application interface, which can lead to CRM deployments that make imaginative and complicated utilisation of features such as Workflows, Business Rules and other native features. Whilst there is nothing wrong with this approach per se, the end result is often implementations that look messy when viewed cold and which become increasingly difficult to maintain in the long term. As always, there is a balance to be found, and any approach which makes prudent use of both application features and bespoke code is arguably the most desirable end goal for achieving certain business requirements within CRM/D365CE.\nTo prove my point around Sales entity \u0026ldquo;oddities\u0026rdquo;, a good illustration can be found when it comes to working with relationship field mappings and Product records. The most desirable feature at the disposal of CRM customisers is the ability to configure automated field mapping between Entities that have a one-to-many (1:N) relationship between them. What this means, in simple terms, is that when you create a many (N) record from the parent entity (1), you can automatically copy the field values to a matching field on the related entity. This can help to save data entry time when qualifying a Lead to an Opportunity, as all the important field data you need to continue working on the record will be there ready on the newly created Opportunity record. 
Field mappings can be configured from the 1:N relationship setting window, via the Mappings button:\nThere are a few caveats to bear in mind - you can only map across fields that have the same underlying data type and you cannot map multiple source fields to the same target (it should be obvious why this is 🙂) - but on the whole, this is a handy application feature that those who are more accustomed to CRM development should always bear in mind when working with CRM/D365CE.\nField mappings are, as indicated, a standard feature within CRM/D365CE - but when you inspect the field relationships between the Product and Quote Product entity, there is no option to configure mappings at all:\nUpon closer inspection, many of the relationships between the Product entity and others involved as part of the sales order process are missing the ability to configure field mappings. So, for example, if you have a requirement to map across the value of the Description field to a newly created Quote Product record, you would have to look at implementing a custom plugin to achieve your requirements. The main benefit of this route is that we have relatively unrestricted access to the record data we need as part of a plugin execution session and - in addition - we can piggyback onto the record creation process to add on our required field \u0026ldquo;in-flight\u0026rdquo; - i.e. whilst the record is being created. The code for achieving all of this is as follows:\nusing System; using Microsoft.Xrm.Sdk; using Microsoft.Xrm.Sdk.Query; namespace D365.BlogDemoAssets.Plugins { public class PreQuoteProductCreate_GetProductAttributeValues : IPlugin { public void Execute(IServiceProvider serviceProvider) { //Obtain the execution context from the service provider. IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); //Get a reference to the Organization service. 
IOrganizationServiceFactory factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)); IOrganizationService service = factory.CreateOrganizationService(context.UserId); //Extract the tracing service for use in debugging sandboxed plug-ins ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); tracingService.Trace(\u0026#34;Tracing implemented successfully!\u0026#34;); if (context.InputParameters.Contains(\u0026#34;Target\u0026#34;) \u0026amp;\u0026amp; context.InputParameters[\u0026#34;Target\u0026#34;] is Entity) { Entity qp = (Entity)context.InputParameters[\u0026#34;Target\u0026#34;]; //Only execute for non-write-in Quote Product records EntityReference product = qp.GetAttributeValue\u0026lt;EntityReference\u0026gt;(\u0026#34;productid\u0026#34;); if (product != null) { Entity p = RetrieveProductID(service, product.Id); string desc = p.GetAttributeValue\u0026lt;string\u0026gt;(\u0026#34;description\u0026#34;); tracingService.Trace(\u0026#34;Product Description = \u0026#34; + desc); qp.Attributes[\u0026#34;description\u0026#34;] = desc; } else { tracingService.Trace(\u0026#34;Quote Product with record ID \u0026#34; + qp.GetAttributeValue\u0026lt;Guid\u0026gt;(\u0026#34;quotedetailid\u0026#34;).ToString() + \u0026#34; does not have an associated Product record, cancelling plugin execution.\u0026#34;); return; } } } public Entity RetrieveProductID(IOrganizationService service, Guid productID) { ColumnSet cs = new ColumnSet(\u0026#34;description\u0026#34;); //Additional fields can be specified using a comma-separated list //Retrieve matching record return service.Retrieve(\u0026#34;product\u0026#34;, productID, cs); } } } The key thing to remember when registering your Plugin via the Plugin Registration Tool (steps which regular readers of the blog should have a good awareness of) is to ensure that the Event Pipeline Stage of Execution is set to Pre-operation. From there, the world is your oyster - you could look at returning additional fields from the Product entity to update on your Quote Product record or you could even look at utilising the same plugin for the Order Product and Invoice Product entities (both of these entities also have a Description field, so the above code should work on these entities as well).\nIt\u0026rsquo;s a real shame that Field Mappings are not available to streamline the population of record data from the Product entity, or that there is no way to utilise features such as Workflows to give you an alternate way of achieving the requirement exemplified in this post. 
This scenario is another good reason why you should always strive to be a Dynamics 365 Swiss Army Knife, ensuring that you have a good awareness of peripheral technology areas that can aid you greatly in mapping business requirements to CRM/D365CE.\n","date":"2017-12-24T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/mapping-product-attributes-to-quote-order-invoice-line-items-dynamics-365-customer-engagement/","title":"Mapping Product Attributes to Quote/Order/Invoice Line Items (Dynamics 365 Customer Engagement)"},{"content":"Working with Dynamics CRM/Dynamics 365 Customer Engagement (CRM/D365CE) solution imports can often feel a lot like pursuing a new diet or exercise regime; we start out with the best of intentions of how we want things to proceed, but then something comes up to kick the wheel off the wagon and we end up back at square one 🙂 Anything involving a change to an IT system can generally be laborious to implement, due to the dependencies involved, and things can invariably go wrong at any stage in the process. The important thing is to always keep a cool head, take things slowly and try not to overcomplicate things from the outset, as often the simplest or most obvious explanation for an issue is where all due attention should be focused.\nIn the case of CRM/D365CE, we have the ability to access full log information relating to a solution import - regardless of whether it has failed or succeeded. This log can prove to be incredibly useful in troubleshooting solution import failures. Available as an XML download, it can be opened within Excel to produce a very readable two-tab spreadsheet containing the following information:\nThe Solution tab provides high-level information regarding the solution package, its publisher, the status of the import and any applicable error messages. The Components tab lists every single action that the solution attempted to execute against the target instance, providing a timestamp and any applicable error codes for each one. The above document should always be your first port of call when a solution import fails, and it will almost certainly allow you to identify the root cause of the failure - as it did for me very recently.\nAn unmanaged solution import failed with the top-level error message Fields that are not valid were specified for the entity. Upon closer investigation within the import log, I was able to identify the affected component - a custom attribute on the Quote entity - and the specific error message generated - Attribute\u0026hellip;has SourceType 0, but 1 was specified:\nThe reason why the error was being generated is that a field with the same logical name was present within the environment, something which - for clearly understandable reasons - is not allowed. In this particular scenario, we were doing some tidy-up of an existing solution and replacing a calculated field with a new field, with a different data type, using the same attribute name. The correct step that should have been taken before the solution import was to delete the \u0026ldquo;old\u0026rdquo; field in the target environment, but this was accidentally not included in the release notes. After completing this and re-attempting the solution import, it completed successfully.\nThe likelihood of this error ever occurring in the first place should be remote, assuming that you are customising your system the right way (i.e. using Solution Publisher prefixes for all custom attributes/entities). 
On this occasion, the appropriate note as part of the release documentation for the solution would have prevented the issue from occurring in the first place. So, as long as you have implemented a sufficiently robust change management procedure that includes the full set of steps required both before and after a solution import, you can avoid a similar situation when it comes to replacing entity attributes within your CRM/D365CE solution.\n","date":"2017-12-17T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/fields-that-are-not-valid-were-specified-for-the-entity-solution-import-error-dynamics-365-customer-engagement/","title":"\"Fields that are not valid were specified for the entity.\" Solution Import Error (Dynamics 365 Customer Engagement)"},{"content":"Microsoft Flow is a tool that I increasingly have to bring front and centre when considering how to straightforwardly accommodate certain business requirements. The problem I have had with it, at times, is that there are often some notable caveats when attempting to achieve something that looks relatively simple from the outset. A good example of this is the SQL Server connector which, based on headline reading, enables you to trigger workflows when rows are added or changed within a database. Being able to trigger an email based on a database record update, create a document on OneDrive or even post a Tweet based on a modified database record - these are all things that instantly have a high degree of applicability for any number of different scenarios. When you read the fine print behind this, however, there are a few things which you have to bear in mind:\nLimitations\nThe triggers do have the following limitations:\nIt does not work for on-premises SQL Server Table must have an IDENTITY column for the new row trigger Table must have a ROWVERSION (a.k.a. TIMESTAMP) column for the modified row trigger A slightly frustrating side to this is that Microsoft Flow doesn\u0026rsquo;t intuitively tell you when your table is incompatible with the requirements - contrary to what is stated in the above post. Whilst readers of this post may be correct in chanting \u0026ldquo;RTFM!\u0026rdquo;, it still would be nice to be informed of any potential incompatibilities within Flow itself. Certainly, this can help in preventing any needless head banging early on 🙂\nGetting around these restrictions is fairly straightforward if you have the ability to modify the table you want to interact with using Flow. For example, executing the following script against the MyTable table will get it fully prepped for the service:\nALTER TABLE dbo.MyTable ADD [FlowID] INT IDENTITY(1,1) NOT NULL, [RowVersion] ROWVERSION Accepting this fact, there may be certain situations when this is not the best option to implement:\nThe database/tables you are interacting with form part of a proprietary application, therefore making it impractical and potentially dangerous to modify table objects. The table in question could contain sensitive information. Keep in mind the fact that the Microsoft Flow service would require service account access with full SELECT privileges against your target table. This could expose a risk to your environment, should the credentials or the service itself be compromised in future. If your target table already contains an inordinately large number of columns and/or rows, then the introduction of additional columns and processing via an IDENTITY/ROWVERSION seed could start to tip your application over the edge. 
Your target database does not use an integer field and IDENTITY seed to uniquely identify rows, meaning that such a column needs to be (arguably unnecessarily) added. An alternative approach to consider would be to configure a \u0026ldquo;gateway\u0026rdquo; table for Microsoft Flow to access - one which contains only the fields that Flow needs to work with, is linked back to the source table via a foreign key relationship and which involves the use of a database trigger to automate the creation of the \u0026ldquo;gateway\u0026rdquo; record. Note that this approach only works if you have a unique row identifier in your source table in the first place; if your table is recording important, row-specific information and this is not in place, then you should probably re-evaluate your table design ;)\nLet\u0026rsquo;s see how the above example would work in practice, using the following example table:\nCREATE TABLE [dbo].[SourceTable] ( [SourceTableUID] UNIQUEIDENTIFIER PRIMARY KEY NOT NULL, [SourceTableCol1] VARCHAR(50) NULL, [SourceTableCol2] VARCHAR(150) NULL, [SourceTableCol3] DATETIME NULL ) In this scenario, the table object is using the UNIQUEIDENTIFIER column type to ensure that each row can be\u0026hellip;well\u0026hellip;uniquely identified!\nThe next step would be to create our \u0026ldquo;gateway\u0026rdquo; table. Based on the table script above, this would be built out via the following script:\nCREATE TABLE [dbo].[SourceTableLog] ( [SourceTableLogID] INT IDENTITY(1,1) NOT NULL PRIMARY KEY, [SourceTableUID] UNIQUEIDENTIFIER NOT NULL, CONSTRAINT FK_SourceTable_SourceTableLog FOREIGN KEY ([SourceTableUID]) REFERENCES [dbo].[SourceTable] ([SourceTableUID]) ON DELETE CASCADE, [TimeStamp] ROWVERSION ) The use of a FOREIGN KEY here will help to ensure that the \u0026ldquo;gateway\u0026rdquo; table stays tidy in the event that any related record is deleted from the source table. This is handled automatically, thanks to the ON DELETE CASCADE option.\nThe final step would be to implement a trigger on the dbo.SourceTable object that fires every time a record is INSERTed into the table:\nCREATE TRIGGER [trInsertNewSourceTableToLog] ON [dbo].[SourceTable] AFTER INSERT AS BEGIN INSERT INTO [dbo].[SourceTableLog] ([SourceTableUID]) SELECT [SourceTableUID] FROM inserted END For those unfamiliar with how triggers work, the inserted table is a special object exposed during runtime that allows you to access the values that have been\u0026hellip;OK, let\u0026rsquo;s move on!\nWith all of the above in place, you can now implement a service account for Microsoft Flow to use when connecting to your database that is sufficiently curtailed in its permissions. This can either be a database user associated with a server level login:\nCREATE USER [mydatabase-flow] FOR LOGIN [mydatabase-flow] WITH DEFAULT_SCHEMA = dbo GO GRANT CONNECT TO [mydatabase-flow] GO GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow] GO Or a contained database user account (this would be my recommended option):\nCREATE USER [mydatabase-flow] WITH PASSWORD = \u0026#39;P@ssw0rd1\u0026#39;, DEFAULT_SCHEMA = dbo GO GRANT CONNECT TO [mydatabase-flow] GO GRANT SELECT ON [dbo].[SourceTableLog] TO [mydatabase-flow] GO From there, the world is your oyster - you can start to implement whatever actions, conditions etc. you require for your particular requirement(s). 
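Before moving on, it is worth giving the trigger a quick smoke test so that you know the \u0026ldquo;gateway\u0026rdquo; rows are being generated before Flow starts polling the table. The C# sketch below is purely illustrative and not from the original post: it inserts a test row into dbo.SourceTable and then checks dbo.SourceTableLog for the matching entry. The connection string is a placeholder and, importantly, needs to belong to an account with INSERT rights on the source table (i.e. not the locked-down mydatabase-flow user).
using System;
using System.Data.SqlClient;

namespace D365.BlogDemoAssets.Samples
{
    public class GatewayTableSmokeTest
    {
        public static void Run()
        {
            // Placeholder connection string - substitute your own server, database and credentials.
            string connectionString = "Server=myserver.database.windows.net;Database=MyDatabase;User ID=myadminuser;Password=MyP@ssw0rd!;";
            Guid testId = Guid.NewGuid();

            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Insert a test row into the source table; the AFTER INSERT trigger should
                // then create the corresponding "gateway" row automatically.
                using (SqlCommand insert = new SqlCommand(
                    "INSERT INTO dbo.SourceTable (SourceTableUID, SourceTableCol1) VALUES (@uid, 'Trigger smoke test')",
                    connection))
                {
                    insert.Parameters.AddWithValue("@uid", testId);
                    insert.ExecuteNonQuery();
                }

                // Confirm that the trigger has written a matching row to the log table.
                using (SqlCommand check = new SqlCommand(
                    "SELECT COUNT(*) FROM dbo.SourceTableLog WHERE SourceTableUID = @uid",
                    connection))
                {
                    check.Parameters.AddWithValue("@uid", testId);
                    int logRows = (int)check.ExecuteScalar();
                    Console.WriteLine(logRows == 1
                        ? "Gateway row created - the trigger is working."
                        : "No gateway row found - check the trigger definition.");
                }
            }
        }
    }
}
If the check comes back empty, the most likely culprits are the trigger not being deployed to the database you think it is, or a permissions issue causing the insert to fail.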
There are a few additional tips I would recommend when working with SQL Server and Azure:\nIf you need to retrieve specific data from SQL, avoid querying tables directly and instead encapsulate your logic in Stored Procedures. In line with the ethos above, ensure that you always use a dedicated service account for authentication and scope the permissions to only those that are required. If working with Azure SQL, you will need to ensure that you have ticked the Allow access to Azure services option on the Firewall rules page of your server. Despite some of the challenges you may face in getting your databases up to spec to work with Microsoft Flow, this does not take away from the fact that the tool is incredibly effective in its ability to integrate disparate services, once you have overcome some initial hurdles.\n","date":"2017-12-10T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/getting-around-sql-server-table-restrictions-microsoft-flow/","title":"Getting Around SQL Server Table Restrictions (Microsoft Flow)"},{"content":"In last week\u0026rsquo;s post, we took a look at how a custom Workflow activity can be implemented within Dynamics CRM/Dynamics 365 for Customer Engagement to obtain the name of the user who triggered the workflow. It may be useful to retrieve this information for a variety of different reasons, such as debugging, logging user activity or automating the population of key record information. I mentioned in the post the \u0026ldquo;treasure trove\u0026rdquo; of information that the IWorkflowContext interface exposes to developers. Custom Workflow activities are not unique in exposing execution-specific information, with an equivalent interface at our disposal when working with plug-ins. No prizes for guessing its name - the IPluginExecutionContext.\nWhen comparing both interfaces, some comfort can be found in that they share almost identical properties, thereby allowing us to replicate the functionality demonstrated in last week\u0026rsquo;s post as a Post-Operation Create step for the Lead entity. The order of work for this is virtually the same:\nDevelop a plug-in C# class file that retrieves the User ID of the account that has triggered the plugin. Add supplementary logic to the above class file to retrieve the Display Name of the User. Deploy the compiled .dll file into the application via the Plug-in Registration Tool, adding on the appropriate execution step. The emphasis with this approach, as will be demonstrated, is much more towards working outside of the application; something you may not necessarily be comfortable with. Nevertheless, I hope that the remaining sections will provide enough detail to enable you to replicate this within your own environment.\nDeveloping the Class File As before, you\u0026rsquo;ll need to have ready access to a Visual Studio C# Class file project and the Dynamics 365 SDK. You\u0026rsquo;ll also need to ensure that your project has a Reference added to the Microsoft.Xrm.Sdk.dll. Create a new Class file and copy and paste the following code into the window:\nusing System; using Microsoft.Xrm.Sdk; using Microsoft.Xrm.Sdk.Query; namespace D365.BlogDemoAssets.Plugins { public class PostLeadCreate_GetInitiatingUserExample : IPlugin { public void Execute(IServiceProvider serviceProvider) { // Obtain the execution context from the service provider. 
IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); // Obtain the organization service reference. IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)); IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId); // The InputParameters collection contains all the data passed in the message request. if (context.InputParameters.Contains(\u0026#34;Target\u0026#34;) \u0026amp;\u0026amp; context.InputParameters[\u0026#34;Target\u0026#34;] is Entity) { Entity lead = (Entity)context.InputParameters[\u0026#34;Target\u0026#34;]; //Use the Context to obtain the Guid of the user who triggered the plugin - this is the only piece of information exposed. Guid user = context.InitiatingUserId; //Then, use GetUserDisplayCustom method to retrieve the fullname attribute value for the record. string displayName = GetUserDisplayName(user, service); //Build out the note record with the required field values: Title, Regarding and Description field Entity note = new Entity(\u0026#34;annotation\u0026#34;); note[\u0026#34;subject\u0026#34;] = \u0026#34;Test Note\u0026#34;; note[\u0026#34;objectid\u0026#34;] = new EntityReference(\u0026#34;lead\u0026#34;, lead.Id); note[\u0026#34;notetext\u0026#34;] = @\u0026#34;This is a test note populated with the name of the user who triggered the Post Create plugin on the Lead entity:\u0026#34; + Environment.NewLine + Environment.NewLine + \u0026#34;Executing User: \u0026#34; + displayName; //Finally, create the record using the IOrganizationService reference service.Create(note); } } } } Note also that you will need to rename the namespace value to match against the name of your project.\nTo explain, the code replicates the same functionality developed as part of the Workflow on last week\u0026rsquo;s post - namely, create a Note related to a newly created Lead record and populate it with the Display Name of the User who has triggered the plugin.\nRetrieving the User\u0026rsquo;s Display Name After copying the above code snippet into your project, you may notice a squiggly red line on the following method call:\nThe GetUserDisplayName is a custom method that needs to be added in manually and is the only way in which we can retrieve the Display Name of the user, which is not returned as part of the IPluginExecutionContext. We, therefore, need to query the User (systemuser) entity to return the Full Name (fullname) field, which we can then use to populate our newly create Note record. We use a custom method to return this value, which is provided below and should be placed after the last 2 curly braces after the Execute method, but before the final 2 closing braces:\nprivate string GetUserDisplayName(Guid userID, IOrganizationService service) { Entity user = service.Retrieve(\u0026#34;systemuser\u0026#34;, userID, new ColumnSet(\u0026#34;fullname\u0026#34;)); return user.GetAttributeValue\u0026lt;string\u0026gt;(\u0026#34;fullname\u0026#34;); } Deploy to the application using the Plug-in Registration Tool The steps involved in this do not differ greatly from what was demonstrated in last week\u0026rsquo;s post, so I won\u0026rsquo;t repeat myself. 🙂 The only thing you need to make sure you do after you have registered the plug-in is to configure the plug-in Step. Without this, your plug-in will not execute. 
Right-click your newly deployed plug-in on the main window of the Registration Tool and select Register New Step:\nOn the form that appears, populate the fields/values indicated below:\nMessage: Create Primary Entity: Lead Run in User\u0026rsquo;s Context: Calling User Event Pipeline Stage of Execution: Post-Operation The window should look similar to the below if populated correctly. If so, then you can click Register New Step to update the application:\nAll that remains is to perform a quick test within the application by creating a new Lead record. After saving, we can then verify that the plug-in has created the Note record as intended:\nHaving compared both solutions to achieve the same purpose, is there a recommended approach to take? The examples shown in the past two blog posts demonstrate how solutions to specific scenarios within the application can be achieved in different ways. As clearly evidenced, one could argue that there is a code-heavy (plug-in) and a light-touch coding (custom Workflow assembly) option available, depending on how comfortable you are with working with the SDK. Plug-ins are a natural choice if you are confident working solely within Visual Studio or have a requirement to perform additional business logic. This could range from complex record retrieval operations within the application to an external integration piece involving specific and highly tailored code. The Workflow path clearly favours those of us who prefer to work within the application in a supported manner and, in this particular example, can make certain tasks easier to accomplish. As we have seen, the act of retrieving the Display Name of a user is greatly simplified when we go down the Workflow route. Custom Workflow assemblies also offer greater portability and reusability, meaning that you can tailor logic that can be applied to multiple different scenarios in the future. Code reusability is one of the key drivers in many organisations these days, and the use of custom Workflow assemblies neatly fits into this ethos.\nThese are perhaps a few considerations that you should weigh up when choosing the option that fits the needs of your particular requirement, but it could be that the way you feel most comfortable with ultimately wins the day - so long as this does not compromise the organisation as a consequence, then this is an acceptable stance to take. Hopefully, this short series of posts has demonstrated the versatility of the application and the ability to approach challenges with equally acceptable pathways for resolution.\n","date":"2017-12-03T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/determining-the-initiating-user-details-on-a-c-plug-in-dynamics-365-for-customer-engagement/","title":"Determining the Initiating User Details on a C# Plug-in (Dynamics 365 for Customer Engagement)"},{"content":"It\u0026rsquo;s sometimes useful to determine the name of the user account that executes a Workflow within Dynamics CRM/Dynamics 365 for Customer Engagement (CRM/D365CE). What can make this a somewhat fiendish task to accomplish is the default behaviour within the application, which exposes very little contextual information each time a Workflow is triggered. 
Take, for example, the following simplistic Workflow which creates an associated Note record whenever a new Lead record is created:\nThe Note record is set to be populated with the default values available to us regarding the Workflow execution session - Activity Count, Activity Count including Process and Execution Time:\nWe can verify that this Workflow works - and view the exact values of these details - by creating a new Lead record and refreshing the record page:\nThe Execution Time field is somewhat useful, but the Activity Count \u0026amp; Activity Count including Process values relate to Workflow execution sessions and are only arguably useful for diagnostic review - not something that end users of the application will generally be interested in. 🙂\nGoing back to the opening sentence of this post, if we were wanting to develop this example further to include the Name of the user who executed the Workflow in the note, we would have to look at deploying a Custom Workflow Assembly to extract the information out. The IWorkflowContext Interface is a veritable treasure trove of information that can be exposed to developers to retrieve not just the name of the user who triggers a Workflow, but the time when the corresponding system job was created, the Business Unit it is being executed within and information to determine whether the Workflow was triggered by a parent. There are three steps involved in deploying out custom code into the application for utilisation in this manner:\nDevelop a CodeActivity C# class file that performs the desired functionality. Deploy the compiled .dll file into the application via the Plugin Registration Tool. Modify the existing Workflow to include a step that accesses the custom Workflow Activity. All of these steps will require ready access to Visual Studio, a C# class plugin project (either a new one or existing) and the CRM SDK that corresponds to your version for the application.\nDeveloping the Class File To begin with, make sure your project includes References to the following Frameworks:\nSystem.Activities Microsoft.Xrm.Sdk Microsoft.Xrm.Sdk.Workflow Add a new Class (.cs) file to your project and copy \u0026amp; paste the below code, overwriting any existing code in the window. Be sure to update the namespace value to reflect your project name:\nusing System.Activities; using Microsoft.Xrm.Sdk; using Microsoft.Xrm.Sdk.Workflow; namespace D365.Demo.Plugins { public class GetWorkflowInitiatingUser : CodeActivity { protected override void Execute(CodeActivityContext executionContext) { IWorkflowContext workflowContext = executionContext.GetExtension\u0026lt;IWorkflowContext\u0026gt;(); CurrentUser.Set(executionContext, new EntityReference(\u0026#34;systemuser\u0026#34;, workflowContext.InitiatingUserId)); } [Output(\u0026#34;Current User\u0026#34;)] [ReferenceTarget(\u0026#34;systemuser\u0026#34;)] public OutArgument\u0026lt;EntityReference\u0026gt; CurrentUser { get; set; } } } Right-click your project and select Build. Verify that no errors are generated and, if so, then that\u0026rsquo;s the first step done and dusted 🙂\nDeploy to CRM/D365CE Open up the Plugin Registration Tool and connect to your desired instance. If you are deploying an existing, updated plugin class, then right-click it on the list of Registered Plugins \u0026amp; Custom Workflow Activities and click Update; otherwise, select Register -\u0026gt; Register New Assembly. The same window opens in any event. 
Load the newly built assembly from your project (can be located in the \\bin\\Debug\\ folder by default) and ensure the Workflow Activity entry is ticked before selecting Register Selected Plugins:\nAfter registration, the Workflow Activity becomes available for use within the application; so time to return to the Workflow we created earlier!\nAdding the Custom Workflow Activity to a Process By deactivating the Workflow Default Process Values Example Workflow and selecting Add Step, we can verify that the Custom Workflow Assembly is available for use:\nSelect the above, making sure first of all that the option Insert Before Step is toggled (to ensure it appears before the already configured Create Note for Lead step). It should look similar to the below if done correctly:\nNow, when we go and edit the Create Note for Lead step, we will see a new option under Local Values which, when selected, bring up a whole range of different fields that correspond to fields from the User Entity. Modify the text within the Note to retrieve the Full Name value and save it onto the Note record, as indicated below:\nAfter saving and reactivating the Workflow, we can verify its working by again creating a new Lead record and refreshing to review the Note text:\nAll working as expected!\nThe example shown in this post has very limited usefulness in a practical business scenario, but could be useful in different circumstances:\nIf your Workflow contains branching logic, then you can test to see if a Workflow has executed by a specific user and then perform bespoke logic based on this value. Records can be assigned to other users/teams, based on who has triggered the Workflow. User activity could be recorded in a separate entity for benchmarking/monitoring purposes. It\u0026rsquo;s useful to know as well that the same kind of functionality can also be deployed when working with plugins as well in the application. We will take a look at how this works as part of next week\u0026rsquo;s blog post.\n","date":"2017-11-26T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/obtaining-the-user-who-executed-a-workflow-in-dynamics-365-for-customer-engagement-c-workflow-activity/","title":"Obtaining the User who executed a Workflow in Dynamics 365 for Customer Engagement (C# Workflow Activity)"},{"content":"I took some time out this week to head down to Microsoft\u0026rsquo;s Reading offices for the November CRMUG meeting. There is often a whole host of reasons that can be conjured up to excuse yourself from events like this - \u0026ldquo;I\u0026rsquo;m too busy at work!\u0026rdquo;, \u0026ldquo;It\u0026rsquo;s such a long way away!\u0026rdquo; etc. - but, ultimately, it\u0026rsquo;s always best to make the effort and get involved. The theme of the day was around Awareness of your CRM system, which was neatly kicked off by a short presentation from Microsoft on the current roadmap for Dynamics 365 for Customer Engagement (D365CE). There was a clear emphasis towards GDPR on some of the available presentation tracks, a topic that regular readers of the blog should be well prepared for I hope. 🙂 Another key aspect of the day was networking, with ample opportunities to meet new people and to understand their current journey involving CRM/D365CE. Here are my thoughts on the sessions I attended, along with some closing remarks on why these types of events are always beneficial.\nAccelerate GDPR with Microsoft Cloud The first talk I attended was all about GDPR from Microsoft\u0026rsquo;s perspective. 
The session was co-led by David Hirst and David Reid from Microsoft and did a really good job in setting out the GDPR stall for the uninitiated, as well as offering some pointers towards Microsoft solutions/technologies that may prove beneficial towards achieving compliance. There were also some fascinating anecdotal pieces, such as, for example, the story of a UK based pub chain who has decided to completely remove all customer email address data from their systems, presumably with GDPR in mind. An extreme, but arguably pragmatic, approach.\nThe talk came across as refreshingly candid, with a real demonstrable attempt of portraying a concerted effort behind the scenes at Microsoft to ensure that they - and their entire product range - are prepared for GDPR. Microsoft is not just talking the talk when it comes to GDPR (which, to be frank, can often result in a lot of scaremongering by other companies), but are instead providing current and new customers with the tools and information they need to streamline their route towards compliance. The key takeaway from the session, which was borne out by some of the Q\u0026amp;A\u0026rsquo;s at the end, is that it\u0026rsquo;s naive to assume that technology companies like Microsoft can provide a \u0026ldquo;silver bullet\u0026rdquo; solution to deal with all of your GDPR woes. Organisations need to go away and do a lot of the hard work when it comes to determining the type of data they hold across the entire organisation, whether the source of consent for this could be considered risky and to implement the appropriate business processes and technological helper tools to make dealing with things such as subject access requests as simple as possible.\nWhat is GDPR and how it impacts your business and your Dynamics 365 solutions, Get Ready for your new legal obligations. The next talk was (again!) on GDPR and was presented by CRM MVP, Mohamed Mostafa, and was specifically focused on GDPR in the context of D365CE. Mohamed\u0026rsquo;s talk was very focused, assisted by some great visual aids, and he also presented some interesting examples on how you can leverage existing application features to help you towards GDPR compliance. Plenty of food for thought!\nOne area mentioned by Mohamed in his presentation which I perhaps disagree with him on (sorry!) is the emphasis placed on the massive fine figures that are often quoted when it comes to GDPR. A heavy focus towards this does, in my view, present a degree of scaremongering. This is confirmed by the fact that Elizabeth Denham, the Information Commissioner, has gone public herself on the whole issue and cautions businesses to be wary of the \u0026ldquo;massive fines\u0026rdquo; narrative. I agree with her assessment, and that fines should always be considered a \u0026ldquo;last resort\u0026rdquo; in targeting organisations that have demonstrated a willful disregard for their obligations in handling personal data. My experience with the ICO on a personal level backs this up, and I have always found them to be fair and proportional when dealing with organisations who are trying to do the best they can. GDPR presents a real opportunity for organisations to get to grips with how they handle their personal data, and I encourage everyone to embrace it and to make the necessary changes to accommodate. 
But, by the same token, organisations should not be panic-stricken into a narrative that causes them to adopt unnecessary technologies under the whole \u0026ldquo;silver bullet\u0026rdquo; pretence.\nWhat\u0026rsquo;s new in Dynamics 365 9.0 To date, I have not had much of a chance to play around in detail with version 9.0 of D365CE. For this reason, MVP Sarah Critchley\u0026rsquo;s talk ranked highly on the agenda for me. Sarah\u0026rsquo;s enthusiasm for the application is infectious, and she covered a wide breadth of the more significant new features that can be found in the new version of the application, including (but not limited to):\nPresentation changes to the Sitemap Introduction to Virtual Entities and how to set them up Changes to the mobile application Sarah framed all of the changes with a before/after comparison to version 8.2 of the application, thereby allowing the audience to contextualise the changes a lot better. The best thing that I liked about the whole presentation is that it scratched beneath the surface to highlight less noticeable changes that may have a huge impact for end-users of the application. Attention was paid to the fact that the web application refresh is now a fully mobile responsive template, meaning that it adjusts automatically to suit a mobile or tablet device screen size. Another thing which I didn\u0026rsquo;t know about the new Virtual Entities feature is that they can be used as Lookup fields on related entities. This immediately expands their versatility, and I am looking forward to seeing how the feature develops in the future.\nImplementing a Continuous Integration Strategy with Dynamics 365 I\u0026rsquo;ll admit that I went into the final talk of the day with Ben Walker not 100% sure what to expect, but walked away satisfied that it was perhaps the most underrated session of the day 🙂 Ben took us through his journey of implementing a continuous integration strategy (translation: testing through the development process and automating the deployment process) for CRM 2015 in his current role, and he should be proud of what he has achieved in this respect. Ben showed the room a number of incredibly useful developer tidbits, such as:\nThe ability to export CRM/D365CE solution information into Visual Studio and then sync up to a Git repository. Deep integrating of unit testing, via the FakeXrmEasy framework. The ability to trigger automated builds in TFS after a code check-in, which can then be used to push out solution updates into a dev/test environment automatically. With the additional option of allowing somebody else to approve the deployment before it starts. All of the above Ben has been able to build together as an end to end process which looks almost effortless in its construction. The benefit - and main caveat from the whole session, it has to be said - is that Ben is primarily working within an on-premise CRM environment and is using tools which may not be fully supported with online versions of the application. For example, the ability to deploy Solution updates via PowerShell is definitely not supported by D365CE online. Despite this, Ben\u0026rsquo;s presentation should have left everyone in the room with enough things to go away with, research and implement to make their CRM/D365CE development more of a breeze in future.\nConclusions or Wot I Think This was my first time attending a CRMUG meeting, and I am glad that I finally found the time to do so. 
As Sarah highlighted at the start of the day, the key benefit of the whole event is the opportunity to network, and I had ample opportunity to meet face-to-face some of my CRM heroes, as well as others working in the industry. It can often feel like a lonely journey working with applications like CRM/D365CE, particularly if you are working as part of a small team within your business. Events such as these very much bring you together with other like-minded individuals, who will happily talk to you non-stop about their passion for the application and technology. And, because the events are closely supported by Microsoft, it means that Tuesday\u0026rsquo;s meeting allowed for lots of authoritative information to come to fore throughout the entire day. I am very much looking forward to attending my next CRMUG meeting in the future and would urge anyone with at least a passing interest in the world of CRM/D365CE to consider attending their next local meeting.\n","date":"2017-11-19T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/crmug-november-reading-meeting-2017-review/","title":"CRMUG November Reading Meeting 2017 Review"},{"content":"When it comes to technology learning, it can often feel as if you are fighting against a constant wave of change, as studying is outpaced by the introduction of new technical innovations. Fighting the tide is often the most desirous outcome to work towards, but it can be understandable why individuals choose to specialise in a particular technology area. There is no doubt some comfort in becoming a subject matter expert and in not having to worry about \u0026ldquo;keeping up with the Joneses\u0026rdquo;. However, when working with an application such as Dynamics 365 for Customer Engagement (D365CE), I would argue it is almost impossible to ignore the wider context of what sit\u0026rsquo;s alongside the application, particularly Azure, Microsoft\u0026rsquo;s cloud as a service platform. Being able to understand how the application can be extended via external integrations is typically high on the list of any project requirements, and often these integrations require a light-touch Azure involvement, at a minimum. Therefore, the ability to say that you are confident in accomplishing certain key tasks within Azure instantly puts you ahead of others and in a position to support your business/clients more straightforwardly.\nHere are 4 good reasons why you should start to familiarise yourself with Azure, if you haven\u0026rsquo;t done so already, or dedicate some additional time towards increasing your knowledge in an appropriate area:\nDynamics 365 for Customer Engagement is an Azure application Well\u0026hellip;we can perhaps not answer this definitively and say that 100% of D365CE is hosted on Azure (I did hear a rumour that some aspects of the infrastructure were hosted on AWS). Certainly, for instances that are provisioned within the UK, there is ample evidence to suggest this to be the case. What can be said with some degree of certainty is that D365CE is an Azure leveraged application. This is because it uses key aspects of the service to deliver various functionality within the application:\nAzure Active Directory: Arguably the crux of D365CE is the security/identity aspect, all of which is powered using Microsoft\u0026rsquo;s cloud version of Active Directory. Azure Key Vault: Encryption is enabled by default on all D365CE databases, and the management of encryption keys is provided via Azure Key Vault. 
Office 365: Similar to D365CE, Office 365 is - technically - an Azure cloud service provided by Microsoft. As both Office 365 and D365CE often need to be tightly knitted together, via features such as Server-Side Synchronisation, Office 365 Groups and SharePoint document management, it can be considered a de facto part of the base application. It\u0026rsquo;s fairly evident, therefore, that D365CE can be considered as a Software as a Service (SaaS) application hosted on Azure. But why is all this important? For the simple reason that, because as a D365CE professional, you will be supporting the full breadth of the application and all it entails, you are already an Azure professional by default. Not having a cursory understanding of Azure and what it can offer will immediately put you a detriment to others who do, and increasingly places you in a position where your D365CE expertise is severely blunted.\nIt proves to prospective employers that you are not just a one trick pony When it comes to interviews for roles focused around D365CE, I\u0026rsquo;ve been at both sides of the table. What I\u0026rsquo;ve found separates a good D365CE CV from an excellent one all boils down to how effectively the candidate has been able to expand their knowledge into the other areas. How much additional knowledge of other applications, programming languages etc. does the candidate bring to the business? How effectively has the candidate moved out of their comfort zone in the past in exploring new technologies, either in their current roles or outside of work? More importantly, how much initiative and passion has the candidate shown in embracing changes? A candidate who is able to answer these questions positively and is able to attribute, for example, extensive knowledge of Azure will instantly move up in my estimation of their ability. On the flip side of this, I believe that interviews that have resulted in a job offer for me have been helped, in no small part, to the additional technical skills that I can make available to a prospective employer.\nTo get certain things done involving D365CE, Azure knowledge is a mandatory requirement I\u0026rsquo;ve talked about one of these tasks before on the blog, namely, how to setup the Azure Data Export solution to automatically synchronise your application data to an Azure SQL Database. Unless you are in the fortunate position of having an Azure savvy colleague who can assist you with this, the only way you are going to be able to successfully complete this task is to know how to deploy an Azure SQL Server instance, a database for this instance and the process for setting up an Azure Key Vault. Having at least some familiarity with how to deploy simple resources in Azure and accomplish tasks via PowerShell script execution will place you in an excellent position to achieve the requirements of this task, and others such as:\nIntegrating D365CE with Azure Logic Apps to accomplish simple/complex integration tasks between different application systems. Provisioning and licensing external users to access your D365CE organisation via Azure Active Directory B2B Collaboration. Bridging D365CE with the full suite of features within Azure, such as cloud storage, database integration or web applications, via the Azure Service Bus. Expanding the amount of storage available on your portal by activating the Azure Storage feature. 
The above is just a flavour of some of the things you can do with D365CE and Azure together, and there are doubtless many more I have missed 🙂 The key point I would highlight is that you should not just naively assume that D365CE is containerised away from Azure; in fact, often the clearest and cleanest way of achieving more complex business/technical requirements will require a detailed consideration of what can be built out within Azure.\nThere\u0026rsquo;s really no good reason not to, thanks to the wealth of resources available online for Azure training. A sea change seems to be occurring currently at Microsoft with respect to online documentation/training resources. Previously, TechNet and MSDN would be your go-to resources to find out how something Microsoft related works. Now, the Microsoft Docs website is where you can find the vast majority of technical documentation. I really rate the new experience that Microsoft Docs provides, and there now seems to be a concerted effort to ensure that these articles are clear, easy to follow and include end-to-end steps on how to complete certain tasks. This is certainly the case for Azure and, with this in mind, I defy anyone to find a reasonable enough excuse not to begin reading through these articles. They are the quickest way towards expanding your knowledge within an area of Azure that interests you the most or to help prepare you to, for example, setup a new Azure SQL database from scratch.\nFor those who learn better via visual tools, Microsoft has also greatly expanded the number of online video courses available for Azure, that can be accessed for free. There are also some excellent, \u0026ldquo;deep-dive\u0026rdquo; topic areas that can also be used to help prepare you for Azure certification.\nConclusions or Wot I Think I use the term \u0026ldquo;D365CE professional\u0026rdquo; a number of times throughout this post. This is a perhaps unhelpful label to ascribe to anyone working with D365CE today. A far better title is, I would argue, \u0026ldquo;Microsoft cloud professional\u0026rdquo;, as this gets to the heart of what I think anyone who considers themselves a D365CE \u0026ldquo;expert\u0026rdquo; should be. Building and supporting solutions within D365CE is by no means an isolated experience, as you might have argued a few years back. Rather, the onus is on ensuring that consultants, developers etc. are as multi-faceted as possible from a skillset perspective. I talked previously on the blog about becoming a swiss army knife in D365CE. Whilst this is still a noble and recommended goal, I believe casting the net wider can offer a number of benefits not just for yourself, but for the businesses and clients you work with every day. It puts you centre-forward in being able to offer the latest opportunities to implement solutions that can increase efficiency, reduce costs and deliver positive end-user experiences. And, perhaps most importantly, it means you can confidently and accurately attest to your wide-ranging expertise in any given situation.\n","date":"2017-11-12T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/4-reasons-why-dynamics-365-for-customer-engagement-professionals-should-increase-their-microsoft-azure-knowledge/","title":"4 Reasons Why Dynamics 365 for Customer Engagement Professionals Should Increase Their Microsoft Azure Knowledge"},{"content":"The world of database security and protection can be a difficult path to tread at times. 
I often find myself having to adopt a \u0026ldquo;tin-foil hat\u0026rdquo; approach, obsessing over the smallest potential vulnerability that a database could be compromised with. This thought process can be considered easy compared with any protective steps that need to be implemented in practice, as these can often prove to be mind-bogglingly convoluted. This is one of the reasons why I like working with Microsoft Azure and features such as Azure SQL Database Firewall Rules. They present a familiar means of securing your databases to specific IP address endpoints and are not inordinately complex in how they need to be approached; just provide a name, Start/End IP range and hey presto! Your client/ application can communicate with your database. The nicest thing about them is that the feature is enabled by default, meaning you don\u0026rsquo;t have to worry about designing and implementing a solution to restrict your database from unauthorised access at the outset.\nAs alluded to above, Database Firewall Rules are added via T-SQL code (unlike Server Rules, which can be specified via the Azure portal), using syntax that most SQL developers should feel comfortable using. If you traditionally prefer to design and build your databases using a Visual Studio SQL Database project, however, you may encounter a problem when looking to add a Database Firewall rule to your project. There is no dedicated template item that can be used to add this to the database. In this eventuality, you would have to look at setting up a Post-Deployment Script or Pre-Deployment Script to handle the creation of any requisite rules you require. Yet this can present the following problems:\nVisual Studio will be unable to provide you with the basic syntax to create the rules. Related to the above, Intellisense support will be limited, so you may struggle to identify errors in your code until it is deployed. When deploying changes out to your database, the project will be unable to successfully detect (and remove) any rules that are deleted from your project. The last one could prove to be particularly cumbersome if you are tightly managing the security of your Azure SQL database. Putting aside the obvious risk of someone forgetting to remove a rule as part of a deployment process, you would then have to manually remove the rules by connecting to your database and executing the following T-SQL statement:\nEXECUTE sp_delete_database_firewall_rule \u0026#39;MyDBFirewallRule\u0026#39; Not the end of the world by any stretch, but if you are using Visual Studio as your deployment method for managing changes to your database, then having to do this step seems a little counter-intuitive. Fortunately, with a bit of creative thinking and utilisation of more complex T-SQL functionality, we can get around the issue by developing a script that carries out the following steps in order:\nRetrieve a list of all current Database Firewall Rules. Iterate through the list of rules and remove them all from the database. Proceed to re-create the required Database Firewall Rules from scratch The second step involves the use of a T-SQL function that I have traditionally steered away from using - Cursors. This is not because they are bad in any way but because a) I have previously struggled to understand how they work and b) have never found a good scenario in which they could be used in. 
The best way of understanding them is to put on your C# hat for a few moments and consider the following code snippet:\nstring[] array = new string[] { \u0026#34;Test1\u0026#34;, \u0026#34;Test2\u0026#34;, \u0026#34;Test3\u0026#34; }; foreach(string s in array) { Console.WriteLine(s); } To summarise how the above works, we take our collection of values - Test1, Test2 and Test3 - and carry out a particular action against each; in this case, print out their value into the console. This, in a nutshell, is how Cursors work, and you have a great deal of versatility on what action you take during each iteration of the \u0026ldquo;loop\u0026rdquo;.\nWith a clear understanding of how Cursors work. the below script that accomplishes the aims set out above should hopefully be a lot clearer:\nDECLARE @FirewallRule NVARCHAR(128) DECLARE REMOVEFWRULES_CURSOR CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY FOR SELECT DISTINCT [name] FROM sys.database_firewall_rules OPEN REMOVEFWRULES_CURSOR FETCH NEXT FROM REMOVEFWRULES_CURSOR INTO @FirewallRule WHILE @@FETCH_STATUS = 0 BEGIN EXECUTE sp_delete_database_firewall_rule @FirewallRule PRINT \u0026#39;Firewall rule \u0026#39; + @FirewallRule + \u0026#39; has been successfully deleted.\u0026#39; FETCH NEXT FROM REMOVEFWRULES_CURSOR INTO @FirewallRule END CLOSE REMOVEFWRULES_CURSOR DEALLOCATE REMOVEFWRULES_CURSOR GO EXECUTE sp_set_database_firewall_rule @name = N\u0026#39;MyDBFirewallRule1\u0026#39;, @start_ip_address = \u0026#39;1.2.3.4\u0026#39;, @end_ip_address = \u0026#39;1.2.3.4\u0026#39;; EXECUTE sp_set_database_firewall_rule @name = N\u0026#39;MyDBFirewallRule2\u0026#39;, @start_ip_address = \u0026#39;1.2.3.4\u0026#39;, @end_ip_address = \u0026#39;1.2.3.4\u0026#39;; To integrate as part of your existing database project, add a new Post-Deployment Script file and modify the above to reflect your requirements. As the name indicates, the script will run after all other aspects of your solution deployment has been completed. Now, the key caveat to bear in mind with this solution is that, during deployment, there will be a brief period of time where all Database Firewall Rules are removed from the database. This could potentially prevent any current database connections from dropping or failing to connect altogether. You should take care when using the above code snippet within a production environment and I would recommend you look at an alternative solution if your application/system cannot tolerate even a second of downtime.\n","date":"2017-11-05T00:00:00Z","image":"/images/VisualStudio-FI.jpg","permalink":"/using-azure-sql-database-firewall-rules-with-database-projects-visual-studio/","title":"Using Azure SQL Database Firewall Rules with Database Projects (Visual Studio)"},{"content":"Office 365 groups have been a recurring topic of the blog in recent months - we\u0026rsquo;ve seen how we can force Office 365 to use custom domains when creating groups for the very first time and how you can straightforwardly integrate an Office 365 Group within Dynamics 365 for Customer Engagement. With this in mind, there is little point in providing a detailed description of what they are and how they can be used; suffice to say, if you are wanting to collaborate closely with internal/external colleagues for a particular project or department, Office 365 Groups are an excellent candidate to consider.\nOne of the cornerstones of Office 365 Groups is the ability for all conversations to be tracked via the use of a dedicated shared mailbox. 
This perhaps explains why the Office 365 portal will refuse to let you add any user within your organisation who does not have an Exchange Online license assigned to them. Case in point - let\u0026rsquo;s assume we have a user account with no such license assigned to them on the Office 365 portal:\nWhen attempting to add this user into an Office 365 group, we get a message to let us know No match was found for the user account entered and, as a consequence, it cannot be added to the group:\nFrom this, you can perhaps make the assumption that Office 365 groups are not supported at all for users who do not have a mailbox. This is notwithstanding the fact there are several different business scenarios that may necessitate this requirement:\nA kiosk/\u0026ldquo;light-use\u0026rdquo; account may require access to the group to upload documents and manage the SharePoint site. Integration with external applications may be required, stipulating the need for a service account to authenticate with the group to retrieve/add content dynamically. The need to configure an account for external users to access, that is sufficiently locked down and inexpensive to maintain. Fortunately, as with many other things relating to Office 365, we can get around this limitation within the Office 365 portal by resorting to PowerShell and adding the John Doe user account above to the Group.\nThe first step towards achieving this is to boot up a PowerShell window. Make sure you have access to this on your machine of choice then, after opening the application using the Run as administrator option, execute the following script:\n##Set Execution Policy to Remote Signed - required to fully execute script Set-ExecutionPolicy RemoteSigned ##Connect to Exchange Online. Enter administrator details when prompted. $UserCredential = Get-Credential $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection Import-PSSession $Session ##Add the non-mailbox user to the Office 365 Group. Substitute the Links value with the username of the account to add. Add-UnifiedGroupLinks -Identity \u0026#34;Test Office 365 Group\u0026#34; -LinkType Members -Links john.doe@domain.com ##Confirm that the user has been added successfully by returning the Group member list Get-UnifiedGroupLinks -Identity \u0026#34;Test Office 365 Group\u0026#34; -LinkType Members ##Cleanup by disconnecting from Exchange Online Remove-PSSession $Session The penultimate command will make something similar to the below appear in the console window. Interestingly, note that the John.Doe test user has a RecipientType value of User:\nNow that the user has been added successfully, they will be able to access the SharePoint site for the group by navigating to the SharePoint library URL. This will look similar to the below and can be grabbed by logging in as another user who has the RecipientType value of UserMailbox and navigating to the Groups SharePoint site:\nhttps://.sharepoint.com/sites/\u0026lt;Your Office 365 Group Name/\nNote that this will be on the only way the non-mailbox user can access the site. For example, there will be no link to SharePoint within Office 365 to guide you to the above location. 
After logging in, you should be greeted with a window similar to the one below:\nThe John Doe \u0026ldquo;light-use\u0026rdquo; account, as referenced above, will have full access to everything that is accessible within SharePoint concerning the Office 365 Group, such as:\nThe Home/News Page Shared Documents Folder (\u0026quot;Documents\u0026quot;) Shared OneNote (\u0026quot;Notebook\u0026quot;) All Site Pages Planner (navigated to via the following link: https://tasks.office.com/\u0026lt;Your Office 365 Primary domain\u0026gt;/en-GB/Home/Planner/) Conversely, the following features will be inaccessible (due to requiring a Mailbox):\nConversations Shared Calendar If, for example, you attempt to navigate to Conversations within SharePoint, you will get the following error message:\nThis is, perhaps, a small price to pay for what ends up being a pretty feature-rich experience that can be given to additional users within your organisation at virtually no cost. Perhaps another good excuse to start rolling out Office 365 Groups across your tenant in the near future 🙂\n","date":"2017-10-29T00:00:00Z","image":"/images/Microsoft365-FI.png","permalink":"/adding-users-with-no-mailbox-to-an-office-365-group/","title":"Adding Users with No Mailbox to an Office 365 Group"},{"content":"Perhaps one of the most fiendish aspects of working with SQL Server Integration Services (SSIS) is the inevitable data transformation/conversion issues that get thrown up, even as part of relatively simplistic Extract, Transform \u0026amp; Load (ETL) packages. It doesn\u0026rsquo;t help, either, if, having come from a strictly T-SQL focused background, you are then having to familiarise yourself with the differently named data types that SSIS has in comparison to SQL Server. Ultimately, whether you are still a newbie or a seasoned veteran in creating .dtsx packages, you should never be disheartened if you find yourself having to tackle data conversion issues during package development - put another way, there is always going to be a new system or data file format that comes out of nowhere to test your patience 🙂\nI had a rather strange occurrence of this issue recently when working to import Globally Unique Identifier (GUID) data into SQL Server\u0026rsquo;s equivalent data type - the uniqueidentifier. GUIDs are very much the first choice these days if you are building large-scale applications requiring unique values to distinguish database records. Whereas back in the old days, you could get away with an integer column using the IDENTITY seed, the potential for current datasets to contain billions or more records makes this option less practical compared with GUIDs - a data type that is almost certainly going to be unique, even if you are generating values at an insane pace, and which has the headroom to accommodate huge datasets.\nGoing back to the strange occurrence I mentioned above - perhaps the best way to explain the issue (and its resolution) is to show the steps involved. To do this, access to a SQL Server database instance, interfaced with via SQL Server Management Studio (SSMS), is required. 
Once this has been obtained, a database needs to be created and the following example script executed against it to create the table used during this post:\nCREATE TABLE [GUIDImportTest] ( [UID] UNIQUEIDENTIFIER NOT NULL, [TestCol1] NVARCHAR(MAX) NULL, [TestCol2] NVARCHAR(MAX) NULL ) We then also have our test import file, saved as a .csv file:\nWith both of these ready, we can then get the error to generate using the SQL Server Import and Export Wizard - a handy tool that enables you to straightforwardly move uncomplex data between applications and file formats. This tool can be accessed via SSMS by right-clicking on any database and selecting Tasks -\u0026gt; Import Data\u0026hellip;\nBegin the wizard as indicated above and, when specifying the Data Source settings, select Flat File Source. In the Advanced tab, you should also override the default data type settings for the UID field and set it to unique identifier (DT_GUID):\nThe Target destination (accessed further along the wizard) should be set to SQL Server Native Client and to the server/database where the table created above resides.\nOn the Select Source Tables and Views screen, be sure that the correct table is selected on the Destination drop-down. By default, if your import source does not match the destination name, then the wizard will assume you want to create a brand new table:\nOn the Review Data Type Mapping tab, a data conversion warning will be flagged up for the two TestCol fields; these can be safely disregarded, as the import package will successfully convert these values for you without further complaint:\nAfter clicking Next and letting the package run, we can then see the titular error of this post occur, which halts the package execution:\nInitially, I thought the error was being generated because the GUID values in the .csv file were not in upper case (when selecting uniqueidentifier data via a SQL query, this is always returned in this format), but the same error is thrown when importing data in this exact format. It turns out the issue was down to something that I should have readily realised based on my experience working with Dynamics CRM/Dynamics 365 for Customer Engagement. When working with URLs and query string parameters in the application involving individual records, GUID values require special URL encoding to convert curly brace values - { and } respectively - into \u0026ldquo;URL friendly\u0026rdquo; format. So for example, the following:\n{06e82887-9afc-4064-abad-f6fb60b8a1f3}\nIs converted into:\n%7B06e82887-9afc-4064-abad-f6fb60b8a1f3%7D\nWhat does this have to do with SSIS and the task at hand? Well, it turns out that when importing data into a uniqueidentifier column, SSIS expects each value to be surrounded by curly braces - just like the first example above. Our source data, therefore, needs to resemble the image below to import successfully:\nAfter making the appropriate changes to the source data, the package will then execute successfully, loading the data into the desired SQL table:\nI guess the lesson here is to never take for granted any knowledge you may have garnered from a particular source - even when dealing with what may be at first glance a completely disparate challenge. 
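As an aside, if the source file is too large to sensibly hand-edit, the values can be wrapped in braces with a few lines of code before running the import. The sketch below is purely illustrative and not from the original post: it assumes a simple comma-delimited file with no quoted fields, with the GUID sitting in the first column, and the file paths passed in are placeholders.
using System;
using System.IO;
using System.Linq;

namespace D365.BlogDemoAssets.Samples
{
    public class GuidCsvPreFormatter
    {
        // Reads a simple comma-delimited file and wraps the value in the first column - the UID -
        // in curly braces, writing the result to a new file ready for the Import and Export Wizard.
        public static void WrapGuidsInBraces(string sourcePath, string targetPath)
        {
            var lines = File.ReadAllLines(sourcePath);
            var output = lines.Select((line, index) =>
            {
                // Leave the header row untouched
                if (index == 0) return line;

                var columns = line.Split(',');
                if (!columns[0].StartsWith("{"))
                {
                    columns[0] = "{" + columns[0] + "}";
                }
                return string.Join(",", columns);
            });

            File.WriteAllLines(targetPath, output);
        }
    }
}
A call such as GuidCsvPreFormatter.WrapGuidsInBraces(@"C:\Temp\GUIDImportTest.csv", @"C:\Temp\GUIDImportTest_Braced.csv") - with hypothetical paths - would then produce a file the wizard is happy to consume.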
In all likelihood, it just might be that this past experience could present a means of thinking differently about a problem and, ultimately, overcome the challenge you are faced with.\n","date":"2017-10-22T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/resolving-guid-ssis-import-error-the-value-could-not-be-converted-because-of-a-potential-loss-of-data/","title":"Resolving GUID SSIS Import Error \"The value could not be converted because of a potential loss of data\""},{"content":"Dynamics CRM/Dynamics 365 for Customer Engagement (CRM/D365CE) is an incredibly flexible application for the most part. Regardless of how your business operates, you can generally tailor the system to suit your requirements and extend it to your heart\u0026rsquo;s content; often to the point where it is completely unrecognisable from the base application. Notwithstanding this argument, you will come across aspects of the application that are (literally) hard-coded to behave a certain way and cannot be straightforwardly overridden via the application interface. The most recognisable example of this is the Lead Qualification process. You are heavily restricted in how this piece of functionality acts by default but, thankfully, there are ways in which it can be modified if you are comfortable working with C#, JScript and Ribbon development.\nBefore we can start to look at options for tailoring the Lead Qualification process, it is important to understand what occurs during the default action within the application. In developer-speak, this is generally referred to as the QualifyLead message and most typically executes when you click the button below on the Lead form:\nWhen called by default, the following occurs:\nThe Status/Status Reason of the Lead is changed to Qualified, making the record inactive and read-only. A new Opportunity, Contact and Account record is created and populated with (some) of the details entered on the Lead record. For example, the Contact record will have a First Name/Last Name value supplied on the preceding Lead record. You are automatically redirected to the newly created Opportunity record. This is all well and good if you are able to map your existing business processes to the application, but most organisations will typically differ from the applications B2B orientated focus. For example, if you are working within a B2C business process, creating an Account record may not make sense, given that this is typically used to represent a company/organisation. Or, conversely, you may want to jump straight from a Lead to a Quote record. Both of these scenarios would require bespoke development to accommodate currently within CRM/D365CE. This can be broadly categorised into two distinct pieces of work:\nModify the QualifyLead message during its execution to force the desired record creation behaviour. Implement client-side logic to ensure that the user is redirected to the appropriate record after qualification. The remaining sections of this post will demonstrate how you can go about achieving the above requirements in two different ways.\nOur first step is to \u0026ldquo;intercept\u0026rdquo; the QualifyLead message at runtime and inject our own custom business logic instead\nI have seen a few ways that this can be done. One way, demonstrated here by the always helpful Jason Lattimer, involves creating a custom JScript function and a button on the form to execute your desired logic. As part of this code, you can then specify your record creation preferences. 
A nice and effective solution, but one in its guise above will soon obsolete as a result of the SOAP endpoint deprecation. An alternative way is to instead deploy a simplistic C# plugin class that ensures your custom logic is obeyed across the application, and not just when you are working from within the Lead form (e.g. you could have a custom application that qualifies leads using the SDK). Heres how the code would look in practice:\npublic void Execute(IServiceProvider serviceProvider) { //Obtain the execution context from the service provider. IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); if (context.MessageName != \u0026#34;QualifyLead\u0026#34;) return; //Get a reference to the Organization service. IOrganizationServiceFactory factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)); IOrganizationService service = factory.CreateOrganizationService(context.UserId); //Extract the tracing service for use in debugging sandboxed plug-ins ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); tracingService.Trace(\u0026#34;Input parameters before:\u0026#34;); foreach (var item in context.InputParameters) { tracingService.Trace(\u0026#34;{0}: {1}\u0026#34;, item.Key, item.Value); } //Modify the below input parameters to suit your requirements. //In this example, only a Contact record will be created context.InputParameters[\u0026#34;CreateContact\u0026#34;] = true; context.InputParameters[\u0026#34;CreateAccount\u0026#34;] = false; context.InputParameters[\u0026#34;CreateOpportunity\u0026#34;] = false; tracingService.Trace(\u0026#34;Input parameters after:\u0026#34;); foreach (var item in context.InputParameters) { tracingService.Trace(\u0026#34;{0}: {1}\u0026#34;, item.Key, item.Value); } } To work correctly, you will need to ensure this is deployed out on the Pre-Operation stage, as by the time the message reaches the Post-Operation stage, you will be too late to modify the QualifyLead message.\nThe next challenge is to handle the redirect to your record of choice after Lead qualification\nJason\u0026rsquo;s code above handles this effectively, with a redirect after the QualifyLead request has completed successfully to the newly created Account (which can be tweaked to redirect to the Contact instead). The downside of the plugin approach is that this functionality is not supported. So, if you choose to disable the creation of an Opportunity record and then press the Qualify Lead button\u0026hellip;nothing will happen. 
The record will qualify successfully (which you can confirm by refreshing the form) but you will then have to manually navigate to the record(s) that have been created.\nThe only way around this with the plugin approach is to look at implementing a similar solution to the above - a Web API request to retrieve your newly created Contact/Account record and then perform the necessary redirect to your chosen entity form:\nfunction redirectOnQualify() { setTimeout(function(){ var leadID = Xrm.Page.data.entity.getId(); leadID = leadID.replace(\u0026#34;{\u0026#34;, \u0026#34;\u0026#34;); leadID = leadID.replace(\u0026#34;}\u0026#34;, \u0026#34;\u0026#34;); var req = new XMLHttpRequest(); req.open(\u0026#34;GET\u0026#34;, Xrm.Page.context.getClientUrl() + \u0026#34;/api/data/v8.0/leads(\u0026#34; + leadID + \u0026#34;)?$select=_parentaccountid_value,_parentcontactid_value\u0026#34;, true); req.setRequestHeader(\u0026#34;OData-MaxVersion\u0026#34;, \u0026#34;4.0\u0026#34;); req.setRequestHeader(\u0026#34;OData-Version\u0026#34;, \u0026#34;4.0\u0026#34;); req.setRequestHeader(\u0026#34;Accept\u0026#34;, \u0026#34;application/json\u0026#34;); req.setRequestHeader(\u0026#34;Content-Type\u0026#34;, \u0026#34;application/json; charset=utf-8\u0026#34;); req.setRequestHeader(\u0026#34;Prefer\u0026#34;, \u0026#34;odata.include-annotations=\\\u0026#34;OData.Community.Display.V1.FormattedValue\\\u0026#34;\u0026#34;); req.onreadystatechange = function () { if (this.readyState === 4) { req.onreadystatechange = null; if (this.status === 200) { var result = JSON.parse(this.response); //Uncomment based on which record you which to redirect to. //Currently, this will redirect to the newly created Account record var accountID = result[\u0026#34;_parentaccountid_value\u0026#34;]; Xrm.Utility.openEntityForm(\u0026#39;account\u0026#39;, accountID); //var contactID = result[\u0026#34;_parentcontactid_value\u0026#34;]; //Xrm.Utility.openEntityForm(\u0026#39;contact\u0026#39;, contactID); } else { alert(this.statusText); } } }; req.send(); }, 6000); } The code is set to execute the Web API call 6 seconds after the function triggers. This is to ensure adequate time for the QualifyLead request to finish and make the fields we need available for accessing.\nTo deploy out, we use the eternally useful Ribbon Workbench to access the existing Qualify Lead button and add on a custom command that will fire alongside the default one:\nAs this post has hopefully demonstrated, overcoming challenges within CRM/D365CE can often result in different - but no less preferred - approaches to achieve your desired outcome. Let me know in the comments below if you have found any other ways of modifying the default Lead Qualification process within the application.\n","date":"2017-10-15T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/modifying-default-lead-qualification-behaviour-via-c-dynamics-crmdynamics-365-for-customer-engagement/","title":"Modifying Default Lead Qualification Behaviour via C# (Dynamics CRM/Dynamics 365 for Customer Engagement)"},{"content":"Perhaps one of the most useful features at your disposal when working with Azure SQL Databases is the ability to integrate your Azure Active Directory (Azure AD) login accounts, a la Windows Authentication for on-premise SQL Server. There are numerous benefits in shifting away from SQL Server-only user accounts in favour of Azure AD:\nEnsures consistent login identities across multiple services. Can enforce password complexity and refresh rules more easily. 
Once configured, they behave exactly the same as standard SQL Server only logins. Supports advanced usage scenarios involving Azure AD, such as multi-factor authentication and Single Sign-On (SSO) via Active Directory Federation Services (ADFS). Setup can be completed in a pinch, although you will need to allocate a single/group of user(s) as the Active Directory admin for the Azure SQL Server. You may also choose to take due care and precautions when choosing your Active Directory admin(s); one suggestion would be to use a unique service account for the Active Directory admin, with a strong password, instead of granting such extensive privileges to normal user accounts.\nRegardless of how you go about configuring the feature, I would recommend using it where-ever you can, for both internal purposes and also for anyone who wishes to access your SQL Server from an external directory. This second scenario is, you may be surprised to hear, fully supported. It assumes, first off, that you have added this account to your directory as a Guest/External User account. Then, you just follow the normal steps to get the account created on your Azure SQL Server.\nThere is one major \u0026ldquo;gotcha\u0026rdquo; to bear in mind when doing this. Let\u0026rsquo;s assume that you have added john.smith@domain.co.uk to the Azure AD tenant test.onmicrosoft.com. You then go to setup this account to access a SQL Server instance on the tenant. You will more than likely receive the following error message when using the example syntax below to create the account:\nCREATE USER [john.smith@domain.co.uk] FROM EXTERNAL PROVIDER The issue is, thankfully, simple to understand and fix. When External user accounts are added onto your Active Directory, despite having the same login name that derives from their source directory, they are stored in the new directory with a different UserPrincipalName (UPN). Consider the above example - the UPN in the source directory would be as follows:\njohn.smith@domain.co.uk\nWhereas, as the Azure AD tenant name in this example is test.onmicrosoft.com, the UPN for the object would be:\njohn.smith_domain.co.uk#EXT#@test.onmicrosoft.com\nI assume that this is done to prevent any UPN duplication across Microsoft\u0026rsquo;s no-doubt dizzying array of cloud Active Directory tenants and forests. In any event, knowing this, we can adjust our code above to suit - and successfully create our Database user account:\nCREATE USER [john.smith_domain.co.uk#EXT#@test.onmicrosoft.com] FROM EXTERNAL PROVIDER I guess this is one of those things where having at least a casual awareness of how other technologies within the Microsoft \u0026ldquo;stack\u0026rdquo; work can assist you greatly in troubleshooting what turn out to be simplistic errors in your code. Frustrating all the same, but we can claim knowledge of an obscure piece of Azure AD trivia as our end result 🙂\n","date":"2017-10-08T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/principal-could-not-be-found-or-this-principal-type-is-not-supported-error-azure-sql-server/","title":"\"Principal could not be found or this principal type is not supported\" Error (Azure SQL Server)"},{"content":"I would not recommend setting up a Windows Server Domain Services role for the first time flying blind. Whilst not necessarily classing myself as a \u0026ldquo;newbie\u0026rdquo; in this respect, I have only run through this a few times in the past within lab environments. 
The process is always tricky - not just in deploying out the role in the first instance, but more via the many quirks that get thrown up as you try to accomplish what should be simple tasks, such as domain joining devices or getting DNS settings correctly mapped out. These and other tiresome rat races can leave you with a severely scratched head and distract you from your ultimate goal.\nFor this reason, you could argue that Azure Active Directory Domain Services (or AADDS) is the perfect solution for \u0026ldquo;newbies\u0026rdquo;. The thing I like the most about it is that a lot of the hassle I make reference to above is something you will never see a sight of, thanks to the fact that Microsoft manages most aspects of the deployment behind the scenes. In addition, the step-by-step guides available on the Microsoft Docs website provide a very clear and no-nonsense holding hand through every step of an Azure Domain Services rollout. What this ultimately means is that you can spend more time on achieving your end goal and reduce the need for extensive administration of the solution following its rollout. Having said that, it is always useful to ensure that you have thoroughly tested any solution as extensively as possible for your particular scenario, as this will always throw up some potential issues or useful tips to remember in future; AADDS is no exception to this rule.\nHaving recently worked closely with AADDS, in this weeks blog post, I wanted to share some of my detailed thoughts regarding the solution in practice and a few things to remember if you are checking it out for the first time.\nResetting AADDS User passwords could become the bane of your existence If you are creating an AADDS resource in isolation to any existing identity providers you have in place (i.e. there is no requirement to use Azure AD Connect with an On-Premise Domain Server), then be aware that you will have to set up a users password twice before they will be able to login to the domain. Microsoft explains better than I can why this is:\nTo authenticate users on the managed domain, Azure Active Directory Domain Services needs credential hashes in a format that\u0026rsquo;s suitable for NTLM and Kerberos authentication. Azure AD does not generate or store credential hashes in the format that\u0026rsquo;s required for NTLM or Kerberos authentication, until you enable Azure Active Directory Domain Services for your tenant. For obvious security reasons, Azure AD also does not store any password credentials in clear-text form. Therefore, Azure AD does not have a way to automatically generate these NTLM or Kerberos credential hashes based on users\u0026rsquo; existing credentials\u0026hellip;If your organization has cloud-only user accounts, all users who need to use Azure Active Directory Domain Services must change their passwords\nSource: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-getting-started-password-sync\nThe above article goes into the required steps that need to be followed for each user account that is created and, at the time of writing, I do not believe there is any way of automating this process. 
Whether you choose to complete these steps yourself or get your end users to do so instead is up to you, but there is a good chance that if a user is experiencing login issues with an AADDS account, then the steps in the above article have not been followed correctly.\nMake sure you\u0026rsquo;re happy with your chosen DNS Name When first creating your Domain Services resource, you need to be pretty certain your desired DNS domain name will not be subject to change in the future. After some fruitless digging around on the portal and an escalated support request to Microsoft, I was able to confirm that there is no way this can be changed after the Domain Services resource is deployed; your only recourse is to recreate the resource from scratch with your newly desired DNS domain name. This could prove to be problematic if, say, you wish to change the domain name of your in-development domain services account from the default onmicrosoft.com domain to a bespoke one\u0026hellip;after you have already joined Virtual Machines to your new domain : / Some efficient use of the Azure templates feature can save you some aggro here, but not if you have already expended considerable effort on bespoke customisation on each of your VM\u0026rsquo;s operating systems.\nBe aware of what\u0026rsquo;s supported\u0026hellip;.and what isn\u0026rsquo;t There are a few articles that Microsoft have published that can help you to determine whether AADDS is right for your particular scenario:\nHow to decide if Azure AD Domain Services is right for your use-case Azure AD Domain Services Features Third-party software compatible with Azure AD Domain Services Azure Active Directory Domain Services: Frequently Asked Questions (FAQs) Whilst these are invaluable and, admittedly, demonstrate the wide-feature array contained with AADDS, there are still a few hidden \u0026ldquo;gotchas\u0026rdquo; to be aware of. The articles above hint towards some of these:\nOnly one AADDS resource is allowed per Azure tenant. You will need to configure a clean Active Directory tenant (and therefore a separate Azure portal) for any additional AADDS resource you wish to setup, which also requires an appropriate subscription for billing. This could result in ever-growing complexity to your Azure footprint. AADDS is a continually billable service. Unlike VM\u0026rsquo;s, which can be set to Stopped (unallocated**)** status at any time and, therefore, not incur any usage charges, your Domain Services resource will incur fees as soon as you create it and only cease when the resource is deleted. One unsupported feature that the above articles do not provide any hint towards is Managed Service Accounts. Introduced as part of Windows Server 2008 R2, they provide a more streamlined means of managing service accounts for applications running on Windows Server, reducing the requirement to maintain passwords for these accounts and allowing administrators to provide domain-level privileges to essential service account objects. I try to use them whenever I can in conjunction with SQL Server installations, particularly if the service accounts for SQL need to access network-level resources that are secured via a security group or similar and I would encourage you to read up on them further to see if they could be a help within your SQL Server deployments.\nBack to the topic at hand - if you attempt to create a Managed Service Account via PowerShell, you will receive an error message saying that they are not supported within the domain. 
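To give a concrete illustration, the sort of command that triggers this error would look something like the below - a rough sketch only, run from a domain-joined machine with the Active Directory PowerShell module available, and with a purely hypothetical account name:

# Hypothetical example: attempting to create a standalone Managed Service Account
# for a SQL Server service. On an AADDS managed domain, this request is rejected
# with an error stating that Managed Service Accounts are not supported.
Import-Module ActiveDirectory
New-ADServiceAccount -Name "svcSQLAgent" -RestrictToSingleComputer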
So, assuming that you are wanting to go ahead and deploy SQL Server on an AADDS joined VM, you would have to revert back to using standard Active Directory user accounts for your Service Accounts to achieve the same functionality. Not great if you also have to enforce password refresh policies, but it would be the only supported workaround in this situation.\nConclusion or Wot I Think When reviewing potential new IT vendors or products, I always try and judge \u0026ldquo;how dirty handed\u0026rdquo; I would need to get with it. What I mean by this is the level of involvement myself, or a business, would need to invest in managing physical server hardware, backend elements of the infrastructure or any aspect of the solution that requires an inordinate amount of time poking around the innards of to troubleshoot basic problems. The great benefit of services such as Azure is that a lot of this pain is taken away by default - for example, there is no need to manage server, firewall and networking hardware at all, as Microsoft does this for you. AADDS goes a step further by removing the need to manage the server aspect of a Domain Services deployment, allowing you to focus more on building out your identities and integrating them within your chosen application. Whilst it does need some work to get it up to an acceptable level of parity with a \u0026ldquo;do-it-yourself\u0026rdquo; Domain Server (for example, extensive PowerShell support for the completion of common tasks), the service is still very much in a developed and user-friendly state to warrant further investigation - particularly if you have a simplified Active Directory Domain in place or are looking to migrate across to Active Directory from another vendor. £80 per month for a directory smaller than 25,000 objects is also not an exorbitant price to pay as well, so I would definitely recommend you check AADDS out to see if it could be a good fit for your organisation/application in the near future.\n","date":"2017-10-01T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/thoughts-and-observations-on-azure-active-directory-domain-services/","title":"Thoughts and Observations on Azure Active Directory Domain Services"},{"content":"In the early days of Office 365, you would be accustomed to a certain kind of experience when purchasing licenses as a small/medium size business (SMB) customer. As these types of organisations are typically too small to warrant the cost for Enterprise Agreements or Volume Licensing, your only recourse to buying Office 365 services was via the portal itself. At this time, any involvement with a Microsoft Partner would be minimal or even non-existent. Partners would only enter the picture if a business opted to grant them Partner of Record status, thereby allowing them to manage aspects of your subscription behind the scenes. This process, while wholly sufficient, did have some notable gaps and, if you were an organisation focused on tightly managing all aspects of your customer journey, could be prone to change or interruption via interference from Microsoft\nThe Cloud Solutions Provider programme (or CSP for short) is Microsoft\u0026rsquo;s \u0026ldquo;next-generation\u0026rdquo; opportunity for partners to get directly involved as part of a customers journey onto Office 365, Dynamics 365 and/or Azure and aims to resolve some of the issues highlighted above. 
Instead of turning directly to Microsoft when you need a new license subscription or have an issue with a particular Office 365 opportunity, customers would instead deal directly with a CSP provider, who will be able to offer them all of this, and more. Everyone wins - the customer, CSP provider and Microsoft itself - and here\u0026rsquo;s just a few reasons why:\nMoving to CSP can save you money Perhaps the most important reason of all to consider CSP 🙂 The price you will pay for pretty much every single Office 365 Subscription offering available via Microsoft directly will be lower if purchased from a CSP partner. In most cases, this will typically result in a 10-15% reduction across the board guaranteed; a figure which, depending on the size of your organisation, could be a significant portion of your per annum IT spend. In addition, there is no need to wait for your subscription anniversary to switch - any early cancellation charges will be credited in full by Microsoft, should you cancel at any point in your subscription and migrate to the equivalent CSP subscription.\nCSP users can benefit from special promotions, previews and other deals unavailable via Microsoft Direct One example of this at the moment is the preview for Microsoft 365 Business - the next evolutionary step for Office 365 - which is accessible to those who are working with CSP providers currently. Other promotions may also appear from time to time, so you should be speaking to your CSP provider regularly to ensure that they are informing you of any potential discounts or offers available.\nCSP enables your current Microsoft Partner to support you better If you are working with a Microsoft Partner to help support your Office 365 services or Dynamics 365 deployments, there\u0026rsquo;s a good chance that they may have spoken to you about CSP or migrated you across already. The reasons for this will not be purely based on an altruistic desire to reduce your monthly running costs; by having their customers operating under CSP licensing, Partners are granted additional information regarding your subscriptions and their usage. They also become the de facto organisation that needs to be contacted in case any issues occur relating to the subscription. A customer who, for example, contacts Microsoft directly regarding an Exchange Online issue on their CSP subscription will instead be referred back to the CSP Partner in the first instance; they will then be responsible for escalating the case to Microsoft if required. In most circumstances, this can surely be seen as a plus and in helping Partners to work more closely with their customers.\nThe above example also goes some way towards explaining why CSP license prices are cheaper compared to going directly to Microsoft. By placing Partner organisations at the front-line of dealing with common 1st/2nd Line support issues, Microsoft can reduce the number of support professionals it allocates internally and place the burden instead on Microsoft Partners to do the \u0026ldquo;heavy lifting\u0026rdquo;, particularly when it comes to dealing with easy to resolve issues (i.e. any support request that can be resolved via the Administration Centre).\nIt\u0026rsquo;s for Azure as well\u0026hellip;with some caveats Chances are if you are using Office 365 within your organisation, then you will also be consuming some additional Azure services on top of this - either Virtual Machine(s), storage, websites or even some database capacity. 
The good news is that you can also look at moving your Azure subscriptions across to CSP, with the same benefits available: reduced monthly costs and the ability for your partner of choice to support you better.\nAt the time of writing this post, the key \u0026ldquo;red flag\u0026rdquo; I would draw you towards when considering Azure CSP is what you lose compared to a Pay As You Go or other direct subscription. For example, ongoing and previous usage history will not be visible on the Azure portal and, chances are, you will only get full visibility of your Azure usage costs at the time when you are billed by your CSP partner. If you typically prefer to micro-manage your ongoing Azure usage costs, then a 10-15% saving may not be a fair trade-off for losing this visibility.\nFinally, don\u0026rsquo;t be surprised if this becomes the de facto way of buying Office 365/Azure in the future if you are an SMB I mentioned above one reason why CSP license costs are cheaper, and through this, you can begin to see the writing on the wall for SMB\u0026rsquo;s. This is not necessarily a bad thing. My own personal preference would be in dealing with a Microsoft Partner as opposed to Microsoft direct, as Partners will generally be a lot more flexible and reactive to work with. Having assumed that Microsoft can generate significant internal cost savings and also give eager Partner organisations the opportunity to fill the void, why would they not then turn round and say \u0026ldquo;We\u0026rsquo;re sorry, but if you are an organisation that employs 300 people or less, then please speak to a Microsoft Partner for further assistance.\u0026rdquo;? Certainly, the vibe and talk around CSP at the moment would seem to indicate that this is the long-term trajectory for the programme. Watch this space, but it will be interesting to see in the future whether the Microsoft Direct route is downplayed or removed completely if your potential license order per annum is in only in the hundreds of £\u0026rsquo;s.\n","date":"2017-09-24T00:00:00Z","image":"/images/Microsoft-FI.png","permalink":"/cloud-solutions-provider-csp-programme-deep-dive/","title":"Cloud Solutions Provider (CSP) Programme Deep Dive"},{"content":"When Server-Side Synchronization (Server-Side Sync) was first introduced in Dynamics CRM 2013, I imagine that lots of application and email server administrators breathed a huge sigh of relief. The feature greatly simplified the amount of effort involved in integrating On-Premise/Online CRM instances with their equivalent Exchange Server versions. Previously, the only way of achieving such an integration was via the E-mail Router, a cumbersome application that provided limited integration between email servers and CRM (i.e. no synchronization of items such as Tasks, Appointments, and Contacts). Granted, the E-mail Router was desirable if you were running a non-Exchange based Email Server, but having to provision a dedicated computer/server as the \u0026ldquo;intermediary\u0026rdquo; for all email messages to flow through could start to make a simple application deployment grow arms and legs very quickly.\nSince the addition of Server-Side Sync, the feature has been continually updated to make it more versatile and the de facto choice for getting your emails tagged back into CRM \u0026amp; Dynamics 365 for Enterprise (D365E) - to the extent that the Email Router will shortly be extinct. Server-Side Sync now supports hybrid-deployments (e.g. 
D365E Online to Exchange On-Premise and vice-versa), has been expanded to include other Email protocol types and is now tailored to provide easier mechanisms for diagnosing mail flow issues from within the application itself. The last of these developments is best epitomised by the introduction of the Server-Side Synchronization Monitoring dashboard, introduced in CRM 2016 Update 1:\nWith this Dashboard, Administrators now have a simplified means of monitoring the health of their Server-Side Sync settings, facilitating the easy identification of problematic mailboxes, mailboxes that have failed a Test \u0026amp; Enable and those that have a recurring error being generated on them. Having all of this information at our fingertips enables CRM administrators to much more proactive in managing their health of their instance.\nWhen recently working within some D365E organizations, which were originally provisioned as Dynamics CRM Online 2015 instances on the same Office 365 tenant, I noticed that the above Dashboard was missing:\nTherefore, when clicking on the appropriate button in the sitemap area, an alternative Dashboard is loaded instead (either the user\u0026rsquo;s favorite Dashboard or a random one instead). So the question was - where has the Dashboard gone?\nIt turns out that the reason for its absence is down to an error as part of a previous major version upgrade, and is an issue that may be encountered by CRM Organisations provisioned a few years back. After escalating to Microsoft Support for further assistance, we were able to find a workaround to make the Dashboard available on the instances that were missing it. The steps involved are relatively straightforward, but in our case, we did have to resort to a spare trial/demo instance available that had the Dashboard installed successfully. The workaround steps are as follows:\nLog into an instance that contains the Dashboard. Go into Customizations and rename the Dashboard (doesn\u0026rsquo;t matter what you call it, as long as it\u0026rsquo;s not the same as the default name). Create a new unmanaged Solution and add in the renamed Dashboard. Export the solution as an Unmanaged Solution. Import the Solution into the instance that is missing it. Attempt to access the Dashboard and verify that it loads successfully. You may be wondering why the Dashboard needs to be renamed before exporting. When attempting to import this Dashboard into any target instance with its default name, the component is automatically skipped during the import process. A Microsoft engineer advised that this is because an instance with the missing Dashboard actually thinks it is there and therefore does not try to import and overwrite a system component with the same name.\nFor the benefit of those who may also be experiencing the same issue and do not readily have access to another CRM/D365E instance, I have uploaded below two unmanaged solution files for the previous two versions of the application:\nVersion 8.1 (Dynamics CRM Online 2016 Service Pack 1)\nVersion 8.2 (December 2016 update for Dynamics 365)\nThe solution is a simple 1 component solution, containing a renamed version of the Server-Side Synchronization Monitoring Dashboard:\nOnce you have downloaded your Solution of choice above, follow steps 3-4 above and then go into Solution and the Dashboard to rename it from Copy of Server-Side Synchronization Monitoring -\u0026gt; Server Side Synchronization Monitoring. 
The application will accept the changes and it will be as if you always had the Dashboard there in the first place 🙂\nIt seems strange that this issue occurred in the first place, and I suspect I may not be the only one who has faced it recently. The Microsoft engineer I spoke to seemed to confirm that they\u0026rsquo;ve had this type of issue crop up several times previously, which makes it strange that this has not been acknowledged as a bug, either within the application itself or as part of the twice-yearly upgrade processes. Notwithstanding this fact, it is good that an established workaround can be applied to fix the issue.\n","date":"2017-09-17T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/how-to-fix-missing-server-side-synchronization-monitoring-dashboard-dynamics-crmdynamics-365-for-enterprise/","title":"How To Fix Missing Server-Side Synchronization Monitoring Dashboard (Dynamics CRM/Dynamics 365 for Enterprise)"},{"content":"With the dizzying array of cloud-hosted applications and database systems available to IT system administrators today (often deployable at a few button clicks), you may be forgiven for thinking that Microsoft Access has joined the ranks of InfoPath, Visual FoxPro and other semi-legendary deprecated applications. Far from it - Access is still a mainstay within most Office 365 subscriptions today, alongside Word, Excel etc. What\u0026rsquo;s more, if you are looking to develop a very simple application to be utilised within your organisation, you would be hard-pressed to find an equivalent product at the same price point that would do the job as well. Here are just a few reasons why Access is great:\nIt has the ability to connect to a wide variety of data sources - SQL, SharePoint and others - as well as letting you store data within Access itself. The application contains rich customisation options for forms, buttons and other controls, enabling you to tailor the interface to suit a wide variety of business requirements. Access has full support for Visual Basic for Applications (VBA), giving you the further potential to integrate complex logic when building your Access application. Although there was some concerning news recently regarding Access within SharePoint Online that looked like the writing on the wall for Access, quite the opposite seems to be happening. The application is being continually updated by Microsoft, with one of the latest of these updates catching my attention:\nLast November, we shared our plan to add a set of modern data connectors that will enable Office ProPlus customers to expand what is possible in their organizations.\nToday, we are pleased to announce the addition of two new connectors in our portfolio: Microsoft Dynamics and Salesforce. These two connectors are rolling out to customers with Office 365 ProPlus, E3, or E5 plans.\nTo clarify, Microsoft Dynamics in this context refers to the Dynamics 365 for Customer Engagement (Dynamics CRM) application, and not any other product within the newly revamped Dynamics 365 family. Getting started with the new data connectors is fairly easy, and may be useful for your business to look at more closely. With this in mind, let\u0026rsquo;s take a look at how to set up a straightforward Access application that links with Dynamics 365.\nBefore we begin\u0026hellip; At the time of writing, the new data connectors are only available for those who are on the First Release branch of Office ProPlus, something which has to be explicitly enabled on the Office 365 Admin centre. 
As part of this, you may need to reinstall your Office applications (like I did) to ensure that this kicks in correctly.\nOnce you\u0026rsquo;ve verified that you\u0026rsquo;re on the correct Access version, you can then proceed to create your Dynamics 365 \u0026ldquo;powered\u0026rdquo; Access app\u0026hellip; Open up Access and create a brand new Blank database app called Contact Management.accdb, saving it in a location of your choosing:\nA blank Access app will be created with an empty Table (this can be safely deleted). Navigate to the External Data tab on the Ribbon and select the From Dynamics 365 (online) external data source:\nA pop-up will appear, asking for your Dynamics 365 application URL and how you want to store the data within Access - take a one-time copy of it (Import the source data into a new table in the current database) or connect directly (Link to the data source by creating a linked table). Select the 2nd option, which I would always recommend to select:\nAccess will then attempt to retrieve a list of the Entities within the application. You may also be asked to log in using your Office 365 credentials, but based on my testing, it seemed to automatically pick these up from my currently logged in Office 365 account for activation - which is nice 🙂\nThe Link Tables window will then return a list of the entities that you are able to select. In this example, select Contacts and then press OK.\nAccess will then begin importing the table definitions and the underlying data, which can take a few moments. It is worth noting that Access will also automatically import tables for each N:1 entity relationship for your chosen Entity. This is to allow you to effectively query for \u0026ldquo;friendly name\u0026rdquo; information, as opposed to returning the rather ugly looking Globally Unique Identifier (GUID) values for each relationship:\nWith our data successfully imported, we can then start to build out a Form to enable users to interact with and change records. The quickest way of doing this is via the Form Wizard, which can be found on the Create tab:\nBy following the Wizard\u0026rsquo;s instructions. we can select the fields we want on our new form\u0026hellip;\n\u0026hellip;our preferred layout\u0026hellip;\n\u0026hellip;and then, finally, our form name:\nClicking on Finish will load up our newly created Form. From there, we can run a test by adding a Mobile Number value to the Marissa Burnett sample Contact record and then verify this appears successfully on Dynamics 365 after saving:\nConclusions or Wot I Think The role of Access within the wider context of Microsoft\u0026rsquo;s offerings is one that has been increasingly open to question in recent years. The debut of exciting new solutions such as PowerApps and Dynamics 365 for Business, does make you start to think whether Access will be for the \u0026ldquo;chop\u0026rdquo; in the near future. By adding the new Dynamics 365 for Customer Engagement and Salesforce connectors, along with the other updates that are continually being rolled out for the application, Microsoft makes it clear that for, the time-being, Access is very much here to stay. The reasons for this can be perhaps garnered from my opening comments - it still remains a very versatile way of building quick to deploy, database vendor agnostic solutions that are tailored for desktop use within businesses of any size globally. 
Another reason for its mainstay - and the release of these new connectors - could be seen as a stealthy means of getting organisations slowly moved across to solutions like Dynamics 365, without the need of moving everything across in one go.\nWhatever the reason(s), we can be encouraged by the fact that Access is very much being actively developed, even within the current landscape of varying CRM/ERP solutions. And, what\u0026rsquo;s more, it\u0026rsquo;s very cool to be able to say that Dynamics 365 for Customer Engagement is Access compatible.\n","date":"2017-09-10T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/getting-started-with-the-dynamics-365-for-customer-engagement-data-connector-microsoft-access/","title":"Getting Started with the Dynamics 365 for Customer Engagement Data Connector (Microsoft Access)"},{"content":"We saw previously on the blog some of the great features that you have at your disposal when using Office 365 Groups in conjunction with Dynamics CRM/Dynamics 365 for Enterprise. When deployed prudently, they can massively enhance what is traditionally possible via distribution groups and give internal/external users a better way to collaborate with content, as opposed to a mountain of email attachments. As I cautiously hinted towards in the above post, some thought should go into how you go about rolling out Office 365 Groups across your organisation, which should inevitably include internal testing. By doing this, you can highlight a particular functionality quirk that you will more than likely need to address before issuing a general rollout.\nWhen you first go to create an Office 365 Group, you will notice that the domain address of the newly created group mailbox has defaulted to the onmicrosoft.com domain for your Office 365 tenant, instead of any bespoke domain you may have setup on your tenant:\nYou would think that this is caused by the fact that the Default Domain for the tenant is still set to the onmicrosoft.com domain, as we can see below:\nHowever, after changing this accordingly, we are still forced to create a Group email address that utilises the onmicrosoft.com domain:\nRather frustrating! Fortunately, there is a way in which we can get around the issue, which involves two steps if you have existing Office 365 Groups that need renaming:\nModify the Email Address Policy for your tenant via PowerShell to force new Groups to use your desired domain. Update the SMTP address of all existing groups via PowerShell. Both of these steps will now be demonstrated, but be sure to have PowerShell installed on your machine first before you begin.\nConnecting to your Office 365 tenant Start your PowerShell client as an Administrator (Run as Administrator option on Windows) and execute the following command first to ensure all subsequent scripts run correctly:\nSet-ExecutionPolicy RemoteSigned You should receive a prompt similar to the below:\nPress Y to proceed\nNow we are ready to connect to Exchange Online, using the following cmdlets:\n$UserCredential = Get-Credential $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection Import-PSSession $Session You\u0026rsquo;ll be prompted to enter credentials to authenticate:\nMake sure the credentials used have the relevant privileges on Office 365 and then hit OK. 
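As a side note, if you expect to be connecting to Exchange Online regularly, the three commands above can be wrapped into a small helper function for reuse - a minimal sketch, using the same endpoint as before:

function Connect-ExchangeOnlineSession {
    # Prompt for Office 365 credentials, open a remote session to Exchange Online
    # and import the Exchange cmdlets into the current PowerShell window.
    $UserCredential = Get-Credential
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
    Import-PSSession $Session
}

Either way, the end result is the same.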
After a few moments, you should see a window similar to the below, which indicates that you have successfully connected and have all the Exchange Online cmdlets at your disposal:\nSet Default Email Policy for Office 365 Groups Modifying Office 365 to use a different domain when creating Office 365 Groups requires invocation of the New-EmailAddressPolicy cmdlet, a broad brush command that is useful for a number of different Exchange management scenarios. The command can be specifically tailored to create a policy applicable to Office 365 Groups, basically telling your tenant which SMTP domain to use when creating new Groups:\nNew-EmailAddressPolicy -Name Groups -IncludeUnifiedGroupRecipients -EnabledEmailAddressTemplates \u0026#34;SMTP:@mydomain.com\u0026#34; -Priority 1 You can tell if the command has executed successfully if you see something similar to the below in your PowerShell window:\nNext, we can then verify that Office 365 has detected the change by creating a new Group and verifying that the Email address value reflects our desired domain:\nRename Existing Office 365 Group Assuming you are just starting out with Office 365 Groups, the simplest way of renaming your existing groups would be to recreate them. This may not work if they are already being utilised, given the level of effort involved in migrating existing content across to a new group. In this scenario, there is an additional cmdlet we can rely upon to change the SMTP address of our group to our desired domain:\nSet-UnifiedGroup -Identity \u0026#34;Test Office 365 Group\u0026#34; -PrimarySmtpAddress test@mydomain.com PowerShell will only return a message in case of an error, so to verify our changes have taken place, we must again return to Office 365 to confirm that the new SMTP address is in place:\nWith this now updated, email messages should flow successfully to the new Group SMTP address.\nConclusions or Wot I Think It is rather strange that there are no obvious means of specifying which default domain should be used with newly created Office 365 groups, nor at the fact that there is any way for the user to override their choice so they can select a domain that is available on the tenant. As discussed at the start of this post, this situation provides an excellent argument for ensuring that proper processes are followed for the introduction of all new technologies within the business. Whilst exciting and innovative solutions should always be duly considered and not hampered from being rolled out, it is crucial that an appropriate amount of time is allocated for thorough testing. The last thing you want to do, in reference to this situation, is to cause irritation to end users by not giving them the ability to present a professional and correct domain name for their Office 365 Groups; by running through some test scenarios and involving end users as part of this process, where possible, you can help to prevent issues resulting from a roll out and (hopefully!) enthuse colleagues within your organisation at the new piece of technology that is being introduced.\n","date":"2017-09-03T00:00:00Z","image":"/images/Microsoft365-FI.png","permalink":"/removing-onmicrosoft-com-domain-from-office-365-group-mailboxes-powershell/","title":"Removing onmicrosoft.com Domain from Office 365 Group Mailboxes (PowerShell)"},{"content":"Even with the best will in the world, objects that we own or operate will sometimes break down completely. 
On these occasions, after typically spending an inordinate amount of time attempting to resolve things ourselves, we refer to others who have the expertise and ability to provide a fix. Often, this will come at a price and, depending on the nature of the issue and how it\u0026rsquo;s ultimately resolved, you will walk away happy as Larry or anything but.\nDynamics 365 for Enterprise (D365E) applies very much to the scenario illustrated above. As an application system developed and, in most cases, hosted by Microsoft, you will occasionally come across issues that cause the application to be inoperable or prevent you from carrying out a specific task. In these instances, we generally need to raise our hands and get someone from Microsoft involved to help out. The routes available to do this can vary, meaning you have to consider carefully which option is best for your business.\nIn this week\u0026rsquo;s post, we will take a closer look at the different support offerings that are available to D365E customers, what you get as part of each one and the pros/cons of each offering. If you are currently in the process of evaluating which support option is best for you, then this post will (hopefully!) leave you much better equipped to determine the best option for your organisation.\nStandard Support All subscriptions on Office 365 include access to Standard Support, generally amounting to the ability to open support requests on the portal and get assistance to resolve issues with a particular application/service. D365E is no exception to this rule, and organisations can be comforted in knowing that they are covered from a support perspective the second after they purchase their subscription. However, unless you already have dedicated expertise within your business on how to operate the application, do not expect this service to be an effective hand-holder through your early days with the application. The priority level for Standard Support requests is low, and cases will generally be routed to Microsoft affiliates as opposed to dedicated support professionals within Microsoft itself. Nevertheless, Standard Support does provide you with the ability to get your critical issues with D365E resolved.\nPros Included in your subscription. Guaranteed resolution for all break/fix issues. Cons Responses are only guaranteed within 24 hours of first raising the case. Support provision can generally be lacklustre and cumbersome to deal with. Enhanced Support For smaller businesses, often the cost of obtaining more streamlined support provision for internal applications can be prohibitively expensive. Enhanced Support attempts to overcome this by providing a very cost-effective means of putting in place a 2-hour SLA response time for any support requests raised involving D365E. This is definitely a huge improvement over what is offered as part of Standard Support. If your business has made a firm commitment not to align itself with a Partner, then I would strongly recommend looking at Enhanced Support to keep you afloat while using D365E.\nPros 2-hour response time guaranteed for all service requests. Grants access to CustomerSource, an online repository of training resources to help you brush up on your D365E expertise. Cons As a paid offering, each user in your organisation will require the appropriate add-on subscription to ensure compliance. The additional cost (amounting to a few extra £\u0026rsquo;s per month) will, therefore, need to be factored into your monthly budget. 
Provides in-hour support only, with no guarantee of a response/action outside of normal business hours. Professional Direct Support Professional Direct Support is best geared towards medium to large size organisations or those that require the peace of mind of having speedy responses to any problems. The 1 hour SLA represents the pinnacle response time that Microsoft customers can receive and the offering is also enhanced further via access to a dedicated person within Microsoft who will look after you and ensure your requests are being dealt with effectively. Unlike Enhanced Support, you also have explicit access to 2nd line support professionals within Microsoft, with a commitment towards priority escalation to engineers when a serious problem is identified. Professional Direct Support is the best support offering to go for if you place significant value within your D365E investment and want to align your support provision very closely with Microsoft.\nPros 24x7, 1 hour guaranteed response for all of your issues Access to a Service Delivery Manager within Microsoft, as a point of contact for all support requests and to provide ongoing review of your support experience. Cons Costs an additional £6.80 per month on top of your existing D365E subscriptions Cannot be relied upon to provide in-application support (e.g. entity customisations, plugin development etc.) Working with a Partner Partners are perhaps a natural choice for medium to large size organisations who cannot afford to have the dedicated expertise in-house to manage their D365E deployment, but are looking for a cost-effective way of having this knowledge at their disposal. Dynamics 365 partners are plentiful, and many of them can prove to be a lot less daunting to deal with day-to-day compared with Microsoft directly. They will likely have lots of combined expertise across different areas of the product and will be able to tailor a support offering that suits your requirements more neatly than Microsoft can. The key thing to remember when choosing your partner is to ensure that they have an Advanced Support for Partners (ASfP) or Premier Support for Partners (PSfP) agreement in place with Microsoft. Why is this important? By being enrolled within one of these offerings, the partner has the ability to log support requests relating to your subscription with enhanced routing/SLA\u0026rsquo;s in place, meaning your request will be dealt with faster - in some cases, a 1-hour response is guaranteed for critical issues. The partner will also, arguably, be in a much better position to support you more generally, as both of these schemes afford ample opportunity for the partner to keep up to speed with everything that is happening with D365E.\nPros Excellent resource to have in place for in-application issues (i.e. problems that don\u0026rsquo;t require escalation to Microsoft). Low month-on-month investment, anywhere from £200-£700 or more per month, depending on the size or your organisation. Cons For any issues that require customisation/developer expertise to resolve, expect some punishingly expensive day rates for the work; in some cases, I have seen prices going up to £950 a day for a junior consultant(!!) Does not benefit from any of the above offerings to help you as a business maximise your investment in D365E. You will be reliant solely on your partner of choice to provide this as part of the service (if in fact, they do at all). 
Conclusions or Wot I Think The myriad of support options presented as part of this post are very much designed to cater for organisations of different sizes, agendas and visions of how they see their D365E system at a strategic level. The list is by no means exhaustive too, as enterprise organisations can look at Premier Support as well. As this kind of support offering would generally involve provision for multiple Microsoft/Microsoft Online Services products, I have deliberately left it out this list, due to it very much being \u0026ldquo;overkill\u0026rdquo; for supporting a single application. What you are left with as part of this list is arguably 3 viable support options that can be recommended depending on which boat you are sitting in:\nIf you are a small business with sufficient technical expertise in-house, then the Enhanced/Professional Direct Support options are best. If you are a larger organisation looking to very closely align yourself with Microsoft and are confident in your in-house technical ability, then Professional Direct Support is the one for you. If you are a business of any size and very much don\u0026rsquo;t want to worry about managing and supporting your D365E system, then the Partner route is a very sensible approach. The implication with all of the above is that the Standard Support option is not one that I would recommend you have in place. Whilst you can be assured that your critical issues will ultimately get looked into and resolved, you may find yourself waiting days and weeks for a resolution and not necessarily be afforded the most technically accomplished support professionals to assist you in resolving your case.\n","date":"2017-08-27T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/choosing-the-best-support-offering-for-your-dynamics-365-for-enterprise-subscription/","title":"Choosing the Best Support Offering for your Dynamics 365 for Enterprise Subscription"},{"content":"In the early days of working with Azure Virtual Machines, there was generally some effort involved in provisioning and maintaining storage for your machines, and a number of considerations that would need to be taken into account. Choosing the most appropriate storage type (Blob, File etc.), its location on the Azure network and the type of resiliency behind the data stored within\u0026hellip;all questions that can leave you muddled and confused! What\u0026rsquo;s more, depending on which type of setup opted for, you could find yourself having to maintain a complex storage solution for what is ultimately a simplified deployment/workload.\nThe introduction of Managed Disks earlier this year is designed to solve these problems and provide a simplistic, easy-to-scale solution that does not require a high-degree of knowledge of Storage Accounts to successfully maintain. Upon creation of your virtual machine, a VHD disk is created for you; after that time, the only management you need to worry about is in specifying the size of the disk, which can be scaled to suit your requirements. Copies of the disk can be quickly created via snapshots as well, and these (and indeed the disks themselves) can be straightforwardly migrated to other Virtual Machines at a few clicks of a button. Definitely a much nicer and compact solution. 🙂\nI was recently working with Managed Disks when deploying out a new Virtual Machine onto Azure. 
I like to follow a consistent naming convention for resources on Azure, so I was a little frustrated to see that the platform had opted to name the resource for me:\nThe values used are anything but \u0026ldquo;friendly\u0026rdquo; and appear to be the GUID for the object in the backend database! What\u0026rsquo;s that all about?!?\nOne of the things you have to remember with Azure resources is that the names cannot be adjusted once a resource is created; the only recourse in this scenario is to recreate the resource from scratch with the desired name. What\u0026rsquo;s worse is that you have no option as part of the Azure interface when creating your VM to specify a name for your Managed Disk. You only have the single option indicated below - Use managed disks:\nSo is there any way of being able to manually specify a disk name when creating a new Virtual Machine on Azure? If you don\u0026rsquo;t mind working with JSON, deployment templates and the currently in-preview Templates feature, then keep reading to see how to achieve this using an, admittedly, rather convoluted workaround.\nTo begin with, start creating your Virtual Machine as you would normally via the interface. When you get to the final screen (4 - Purchase), click on the Download template and parameters hyperlink that sits next to the Purchase button:\nThis will then open up the entire code-based template for your deployment settings - basically, a large JSON file with all of the settings that you have just selected through the interface. Click on the Add to library button at the very top of the window:\nSelecting this will now enable you save this into your Azure account as a Template, thereby letting you open, modify and deploy it again in future - very handy features if you find yourself deploying similar resource types often. Specify a Name and Description for the Template and then click on Save:\nOnce saved, the Template can then be accessed via the Templates area on Azure, which can be searched and (optionally) pinned to your favourites:\nAfter opening the Template, you then have the option to Edit it - this includes both the Name/Description values already specified and also the ability to modify the JSON file in-browser by selecting the ARM Template setting below:\nScroll down the JSON file until you get to the key/value pairs for osDisk. There should be two pairs existing there already: createOption and managedDisk . There is an additional setting that can be specified in this area to let you define the resource name. No prizes for guessing what this is called 🙂 By adding this key/value pair into the JSON file, as indicated below, we can ensure our preferred name is utilised when the resource is created:\nClick Save on the Edit Template window and then verify that your changes have been accepted by the portal:\nNow, by going back to the first window for the Template, we can deploy the VM and all its constituent components to the platform. One downside with this is that you will need to specify all of the other settings for your Virtual Machine, such as Location, Size, and names, again:\nOnce all your values have been validated and accepted, you can then click on Purchase to submit the deployment to the platform. 
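Incidentally, if you would rather script the template change than edit the JSON in the browser, a few lines of PowerShell can patch the downloaded file for you. This is only a rough sketch - the file path and disk name are hypothetical, and it assumes the template follows the structure described above:

# Add a "name" key/value pair to the osDisk section of a downloaded deployment
# template, so the Managed Disk is created with a friendly name of our choosing.
$template = Get-Content -Path "C:\Temp\template.json" -Raw | ConvertFrom-Json

# Locate the Virtual Machine resource within the template
$vm = $template.resources | Where-Object { $_.type -eq "Microsoft.Compute/virtualMachines" }

# Insert the disk name alongside the existing createOption and managedDisk pairs
$vm.properties.storageProfile.osDisk | Add-Member -NotePropertyName "name" -NotePropertyValue "MYVM-OSDisk"

# Save the amended template, ready to be added back into the Templates library
$template | ConvertTo-Json -Depth 30 | Set-Content -Path "C:\Temp\template.json"

With the name key in place - whether added via script or directly in the browser - the deployment can then be submitted as normal.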
After this completes successfully, we can then verify that our preferred Managed Disk name has been saved along with the resource:\nIt is nice to know that there is a workaround to enable us to specify a name for Managed Disks in our preferred format, but I would hope in future that functionality is added to the portal that lets you specify the name from within there, instead of making an arbitrary assumption on what the resource is called. Managed Disks are still very much in their infancy within Azure, so I am confident that this improvement will be implemented in future and that the feature is constantly reviewed to ensure the pain of provisioning Virtual Machine storage is almost non-existent.\n","date":"2017-08-20T00:00:00Z","image":"/images/Azure-e1557238846431.png","permalink":"/how-to-manually-name-virtual-machine-managed-disks-on-microsoft-azure/","title":"How to Manually Name Virtual Machine Managed Disks on Microsoft Azure"},{"content":"This is the final post in my 5 part series focusing on the practical implications surrounding the General Data Protection Regulation (GDPR) and how some of the features within Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) can be utilised to smooth your organisations transition towards achieving compliance with the regulation. In this week\u0026rsquo;s post, we will be delving deep into the murky world of Subject Access Requests (SAR\u0026rsquo;s) (a process that already exists within existing E.U. Data Protection legislation), some of the changes that GDPR brings into the frame and the capabilities of the Word Template feature within CRM/D365E in expediting these requests as they come through to your organisation.\nAll posts in the series will make frequent reference to the text (or \u0026ldquo;Articles\u0026rdquo;) contained within Regulation (EU) 2016/679, available online as part of the Official Journal of the European Union - a particularly onerous and long-winded document. If you are based in the UK, you may find solace instead by reading through the ICO\u0026rsquo;s rather excellent Overview of the General Data Protection Regulation (GDPR) pages, where further clarification on key aspects of the regulation can be garnered.\nBefore jumping into the fun stuff, it\u0026rsquo;s useful to first set out the stall of what SAR\u0026rsquo;s are and to highlight some of the areas to watch out for under GDPR\nA SAR is a mechanism through which an individual can request all information that a business or organisation holds on them. Section 7 of the UK\u0026rsquo;s Data Protection Act 1998 sets out the framework for how they operate and they are applicable to a wide variety of contexts - from requesting details from an Internet Service Provider regarding your account through to writing to an ex-employer to request what details of yours they hold on file. The types of information covered under a SAR can be quite broad:\nDocuments containing personal details Emails Call Recordings Database Records The effort involved in satisfying a SAR can be significant, typically due to the amount of information involved, and time will need to be put aside compiling everything together. You will also need to ensure certain types of information are redacted too, to prevent against an inadvertent data breach by revealing other data subjects details. 
It is for these reasons why SAR\u0026rsquo;s are typically seen as the bane of IT support personnel\u0026rsquo;s existences!\nBe Aware Of The Implications Of Ignoring A SAR\nArticle 12 provides a broad - but nonetheless concerning - consequence should you choose to disregard or not process a SAR within the appropriate timeframes:\nIf the controller does not take action on the request of the data subject, the controller shall inform the data subject without delay and at the latest within one month of receipt of the request of the reasons for not taking action and on the possibility of lodging a complaint with a supervisory authority and seeking a judicial remedy.\nUnder current guidelines issued by the ICO for the Data Protection Act, the types of enforcement action include being mandated to process a SAR via a court order and even compensation for the data subject, if it can be proven that the individual has suffered personal damage through your lack of action. Whilst GDPR makes it unclear at this stage whether these consequences will remain the same or be beefed up, organisations can make an assumption that there will be some changes under the new state of play, particularly given that enforcement actions have been developed significantly in other areas (e.g. data breaches).\nOverall, SAR\u0026rsquo;s remain largely the same under GDPR, but there are a few subtle changes that you should make note of:\nMost organisations will currently charge an \u0026ldquo;administration fee\u0026rdquo; for any SAR that is sent to them. GDPR no longer specifically provides for organisations to levy this charge, so it can be inferred that requests must now be completed free of charge. An organisation can, however, charge a \u0026ldquo;reasonable fee\u0026rdquo; if the data subject requests additional copies of the data that has already been sent to them (Article 15) or if requests are deemed to be \u0026ldquo;manifestly unfounded or excessive\u0026rdquo; (Article 12). All information requested as part of a SAR must now be supplied within 1 month (as opposed to 40 days under existing legislation) of the date of the request. This can be extended by a further 2 months, subject to the organisation in question informing the data subject of the extension and the reason for the delay. Delays should only be tolerated in instances where the \u0026ldquo;complexity and number of the requests\u0026rdquo; exceed normal situations (Article 12). Organisations are within their right to request documentary evidence that the individual who has sent the SAR is the person they claim to be, via official identification or similar. This is useful in two respects: it enables an organisation to mitigate the risk of a potential data breach via a dishonest SAR and also affords the organisation additional time to process the request, as it can be inferred that the request can only be reasonably processed once the individual\u0026rsquo;s identity is confirmed. The ability to expedite SAR\u0026rsquo;s in an efficient and consistent manner becomes a significant concern for organisations who are aiming to achieve GDPR compliance. But if you are using CRM 2016 or later, then this process can be helped along by a feature that any application user can quickly get to grips with - Word Templates.\nThis feature, along with Excel Templates, is very much geared towards bridging the gap for power users wanting to generate reports for one or multiple record types, without having to resort to more complex means (i.e.
SQL Server Reporting Services reports). I looked at the feature a while back on the blog, and it is very much something I now frequently jump to, or advise others to use, within the application; for the simple reasons that most people will know how to interact with Word/Excel and that they provide a much easier means of accessing core and related entity records for document generation purposes.\nTo best understand how Word Templates can be utilised for SAR\u0026rsquo;s, consider the following scenario: ABC Company Ltd. use D365E as their primary business application system for storing customer information, using the Contact entity within the application. The business receives a SAR that asks for all personal details relating to that person to be sent across via post. The basic requirements of this situation are twofold:\nProduce a professional response to the request that can then be printed onto official company stationery. Quickly generate all field value data for the Contact entity that contains information concerning the data subject. Both requirements are a good fit for Word Templates, which I will hopefully demonstrate right now 😉\nIn true Art Attack style, rather than go through the process of creating a Word Template from scratch (covered by my previous blog post above), \u0026ldquo;here\u0026rsquo;s one I made earlier\u0026rdquo; - a basic, unskinned template that can be uploaded onto CRM/D365E via the Settings -\u0026gt; Templates -\u0026gt; Document Templates area of the application:\nSubject Access Request Demo - Contact\nWhen this is uploaded into the application and run against a sample record, it should look similar to the below:\nOnce deployed, the template can then be re-used against multiple records; any future SAR\u0026rsquo;s can be satisfied in minutes as opposed to days, and (hopefully) the data subject concerned is content that they have received the information requested in a prompt and informative manner.\nThanks for reading and I hope that this post - and the others in the series - have been useful in preparing you for GDPR and in highlighting some excellent functionality contained within CRM/D365E. Be sure to check out the other posts in the series if you haven\u0026rsquo;t done so already using the links below and do please leave a comment if you have any questions 🙂\nPart 1: Utilising Transparent Database Encryption (TDE)\nPart 2: Getting to Grips With Field Security Profiles\nPart 3: Implementing \u0026amp; Documenting A Security Model\nPart 4: Managing Data Retention Policy with Bulk Record Deletion\n","date":"2017-08-13T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/getting-your-dynamics-crmdynamics-365-for-enterprise-system-gdpr-ready-part-5-managing-subject-access-requests-with-word-templates/","title":"Getting your Dynamics CRM/Dynamics 365 for Enterprise System GDPR Ready - Part 5: Managing Subject Access Requests with Word Templates"},{"content":"Welcome to part 4 of my 5 part series looking at the practical implications surrounding the General Data Protection Regulation (GDPR) in the context of Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E). The series looks at how some of the features within this application can assist you in your journey towards GDPR compliance.
This week\u0026rsquo;s post will be jumping across to an arguably underrated aspect of the application - Bulk Record Deletion - and how it can be used to satisfy your organisation\u0026rsquo;s data retention policy.\nAll posts in the series will make frequent reference to the text (or \u0026ldquo;Articles\u0026rdquo;) contained within Regulation (EU) 2016/679, available online as part of the Official Journal of the European Union - a particularly onerous and long-winded document. If you are based in the UK, you may find solace instead by reading through the ICO\u0026rsquo;s rather excellent Overview of the General Data Protection Regulation (GDPR) pages, where further clarification on key aspects of the regulation can be garnered.\nAs we get started, here\u0026rsquo;s a question for you: Do you know how long your organisation holds personal data for before it is deleted?\nMost organisations that you speak to may struggle to provide an answer to the above question. The tendency is very much towards holding data for an indefinite period, with this approach typically being borne out of a lack of understanding of legal/contractual requirements, a result of a genuine oversight or as a necessary evil. The problem with any of these justifications is that, as well as falling foul of GDPR, it is more than likely also a contravention of your country\u0026rsquo;s existing data protection legislation. In the UK, for example, Principle 5 of the Data Protection Act 1998 states clearly that \u0026ldquo;Personal data\u0026hellip;shall not be kept for longer than is necessary\u0026hellip;\u0026rdquo;. Despite being quite broad in its interpretation, it can be inferred very clearly that organisations should be aware of how long all of their data is held for and should have the appropriate documentary evidence to support this, via a policy or similar.\nThe existence of this principle demonstrates one of the areas where GDPR does not differ greatly from the Data Protection Act 1998. Article 17 covers all aspects concerning when and how data should be removed, under the broad principle of the \u0026ldquo;right to be forgotten\u0026rdquo;:\nThe data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay where one of the following grounds applies: (a) the personal data are no longer necessary in relation to the purposes for which they were collected or otherwise processed; (b) the data subject withdraws consent on which the processing is based according to point (a) of Article 6(1), or point (a) of Article 9(2), and where there is no other legal ground for the processing; (c) the data subject objects to the processing pursuant to Article 21(1) and there are no overriding legitimate grounds for the processing, or the data subject objects to the processing pursuant to Article 21(2); (d) the personal data have been unlawfully processed; (e) the personal data have to be erased for compliance with a legal obligation in Union or Member State law to which the controller is subject; (f) the personal data have been collected in relation to the offer of information society services referred to in Article 8(1).\nTo summarise, this means that organisations should remove information pertaining to data subjects when:\nThere is no further requirement to do so, either contractually or legally (i.e.
they are no longer required to as part of a statutory instrument) The subject has withdrawn their consent It has been identified that data is being held which is at odds with an organisations policies or primary business activities Article 5 extends this further by making it clear that data which you are unable to keep sufficiently accurate should be \u0026ldquo;erased\u0026hellip;without delay\u0026rdquo;. To avoid this scenario would require the need to regularly contact the data subject concerned to verify their details are correct. One of the major \u0026ldquo;get out of jail free\u0026rdquo; cards that GDPR provides surrounding data retention is in instances where the data will be used as part of \u0026ldquo;archiving purposes in the public interest, scientific or historical research purposes or statistical purposes..\u0026rdquo; (Article 5). The scope of this is, as you can tell, rather limited and most non-governmental organisations/businesses may struggle to demonstrate their data archiving is in line with these broad principals.\nThe importance of ensuring a clearly defined and structured process for the removal of customer data, therefore, becomes a paramount concern under GDPR. Investigating and defining your organization\u0026rsquo;s data retention periods is an exercise that should be carried out if it has not been done so already. Once implemented, we can then turn to a component within CRM/D365E to automate and streamline the actual process - the Bulk Record Deletion feature.\nIn a nutshell, this feature is a really efficient means of deleting large amounts of predefined data within CRM/D365E. Administrators of the application will most often work with them when attempting to reduce the storage footprint of a CRM/D365E instance, via the removal of completed System Job records and other superfluous record types. The ability to define filter criteria, re-occurrence settings and to send out email notifications upon completion of a job, make them an excellent candidate to consider when streamlining your internal processes surrounding data retention.\nFor example, let\u0026rsquo;s assume your business has implemented a data retention policy that states Contact entity data that has not been updated or changed within 12 months should be deleted from the system. Setting up a Bulk Record Deletion Job within the application to assist with this task is remarkably straightforward, as the step-by-step guide below indicates:\nWithin the application, navigate to Settings -\u0026gt; Data Management on the Sitemap and click the icon to navigate to the Data Management page: On the Data Management page, click on the Bulk Record Deletion icon to open the All Bulk Deletion Systems Jobs view. Once this has loaded, click on the New icon: The Bulk Deletion Wizard will open a pop-up window. Click Next on the first screen to move to the Define Search Criteria window. Modify the settings as follows: Look for: Contact Search Criteria: Modified On Older Than 365 Days An example of how this looks can be seen below:\nClick Next when you are ready to navigate to open the Select Options page. Give the Bulk Record Deletion Job a descriptive name and then ensure that the following settings are configured: Specify whether the Job should run immediately or in the future. It is recommended to schedule Jobs out of peak hours to prevent any performance detriment to other users. Ensure that the Run this job after every box is ticked and then select an appropriate time period. I would recommend 30 days. 
Ensure that the Send an email to me\u0026hellip; box is ticked. You can also (optionally) specify additional email recipients, but note that these have to be valid application users (i.e. not any other email enabled entity such as Contact, Account etc.) The screenshot below indicates how this should look. Click Next when you are ready to proceed:\nThe final step in the wizard gives you the opportunity to review all configured settings. Press Submit to create the Job in the system and, if specified to start immediately, begin running it in the background. You can also navigate to the Recurring Bulk Deletion System Jobs view at any time to review the current status of a job, check to see when it is next scheduled to run or even modify its properties to suit your requirements: The example above is a simplified one but could be extended further in conjunction with other features in the application to suit specific requirements. For example:\nCreate a custom entity to store contractual/statutory data retention limits and link these to your common entities within the application via a 1:N relationship. Once selected when a record is created, you can then define a workflow with a wait condition that updates a Two Option custom field on the entity as a flag for a Bulk Delete Job to remove from the system. Using a custom field on your entity to indicate that a customer has expressed their \u0026ldquo;right to be forgotten\u0026rdquo;, define a workflow that sends a customer confirmation that their details will be removed from the system within 30 days and then use this same field as a flag for a Bulk Record Deletion Job. Define a workflow that sends an email to owners of records that have not been modified within a set period (i.e. are inaccurate), prompting them to speak to the customer to update their details. Records that are not updated would then be deleted, using a Job similar to the one above. Application features, such as the one discussed in this week\u0026rsquo;s post, really start to come into their element when you combine them with other tools found within the application. With this in mind, I would encourage you to roll up your sleeves to see what you can \u0026ldquo;cook\u0026rdquo; up 🙂\nThanks for reading! Be sure to check out the other posts in this series if you haven\u0026rsquo;t already using the links below. Part 5 next week will look at Subject Access Requests and how these can be processed more efficiently using CRM\u0026rsquo;s/D365E\u0026rsquo;s Word Template feature.\nPart 1: Utilising Transparent Database Encryption (TDE)\nPart 2: Getting to Grips With Field Security Profiles\nPart 3: Implementing \u0026amp; Documenting A Security Model\n","date":"2017-08-06T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/getting-your-dynamics-crmdynamics-365-for-enterprise-system-gdpr-ready-part-4-managing-data-retention-policy-with-bulk-record-deletion/","title":"Getting your Dynamics CRM/Dynamics 365 for Enterprise System GDPR Ready - Part 4: Managing Data Retention Policy with Bulk Record Deletion"},{"content":"This is part 3 of a 5 part series, where we take a closer look at the practical implications the General Data Protection Regulation (GDPR) has upon organisations/businesses in Europe and some of the ways Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) can assist you as part of the transition. 
Last week, we saw how Field Security and Field Security Profiles can be utilised to protect sensitive data categories, complementing any existing security model you may have in place. In this week\u0026rsquo;s post, we are going to discuss the concepts that will enable you to utilise CRM\u0026rsquo;s/D365E\u0026rsquo;s security features to their fullest extent, as well as how this can be documented.\nAll posts in the series will make frequent reference to the text (or \u0026ldquo;Articles\u0026rdquo;) contained within Regulation (EU) 2016/679, available online as part of the Official Journal of the European Union - a particularly onerous and long-winded document. If you are based in the UK, you may find solace instead by reading through the ICO\u0026rsquo;s rather excellent Overview of the General Data Protection Regulation (GDPR) pages, where further clarification on key aspects of the regulation can be garnered.\nBefore we jump in further, let\u0026rsquo;s set the scene by looking at the importance of security and documentation towards achieving GDPR compliance\nArticle 5 of GDPR clearly states that all personal data must be \u0026ldquo;processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing\u0026hellip;using appropriate technical or organisational measures\u0026rdquo;. This principle is embellished further by Article 24, which states:\nTaking into account the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons, the controller shall implement appropriate technical and organisational measures to ensure and to be able to demonstrate that processing is performed in accordance with this Regulation. Those measures shall be reviewed and updated where necessary.\nThe final sentence links in nicely with the requirements for clearly auditable and documented processes under GDPR (more on this shortly). Finally, Article 25 - which is subtitled Data protection by design and by default - places a clear onus on Processors to implement systems that \u0026ldquo;ensure by default personal data are not made accessible\u0026hellip;to an indefinite number of natural persons\u0026rdquo;. In summary, clear thought and effort must be borne out to ensure that application systems not only restrict access to personal data on a \u0026ldquo;need to know\u0026rdquo; basis but also that these systems are reviewed and updated regularly; with, invariably, documentation forming an important bedrock towards this.\nThe need for clear documentation under GDPR is emphasised further over multiple articles in the Regulation:\nIf you are processing data on behalf of a controller, you must only do so based \u0026ldquo;on documented instructions from the controller\u0026rdquo; (Article 28). Organisations can opt to become \u0026ldquo;GDPR accredited\u0026rdquo; to demonstrate compliance with the regulations (Article 24, 25, 28, 32 \u0026amp; Section 5). Such accreditations will likely require sufficient documentary evidence to successfully attain. In situations where data is being transferred \u0026ldquo;to a third country or an international organisation\u0026rdquo;, all \u0026ldquo;suitable safeguards\u0026rdquo; must be clearly documented (Article 30 \u0026amp; 49). All data breaches must be clearly documented (Article 33). 
To summarise, it can be inferred, but not definitively said, that the documentation of security models and user access to data is a broad requirement to satisfy compliance with the Regulations. By comparison, sufficient organisational security measures, both physical and technical, are mandatory requirements under GDPR.\nWith all this in mind, let\u0026rsquo;s take a look at the four cornerstones of CRM/D365E security and some of the things to think about from a GDPR perspective: Users, Teams, Business Units and Security Roles\nUsers\nThere are no prizes for guessing what this is 🙂 Like with any application system, Users in CRM/D365E are the mechanism through which you log on, interact with and access partial or whole areas of the application. Users utilise the existing identity provider, Active Directory. The benefits of this are that a consistent end user experience can be assured from a login perspective (enhanced further via the implementation of Single Sign On solutions) and there is less management required within CRM/D365E. This is because key information will be synchronised from your Active Directory accounts, such as job title, email address and telephone number. Users begin to come into their element when used in conjunction with the three other \u0026ldquo;cornerstones\u0026rdquo; mentioned above, so will be referenced again shortly.\nKey GDPR Takeaways\nUsers of your CRM/D365E should be reviewed regularly to verify that access is still required to information within the application. As Users do technically contain personal data relating to employees, all sufficient measures should be taken to ensure that the data that is held within them is kept up to date (Article 5). Appropriate organisational security measures should be put in place to ensure Users are protected against malicious access (e.g. scheduled password resets, multi-factor authentication etc.). Teams\nTeams provide a mechanism for grouping together multiple users under a clearly defined label. For example, you could have a Team called Sales Team that has the account manager Users Bob, Alice and Steve as members. There are two types of Teams that can be setup in the application - Owner Teams, which operate much the same way as a Users (e.g. records can be assigned to them) and Access Teams, which provide specific permissions/access to records. More information about both types can be found on this useful MSDN article.\nKey GDPR Takeaways\nStructuring Teams correctly in conjunction with Security Roles can provide a more streamlined means of managing appropriate levels of access for teams, departments or other groups within an organisation. This is due to the fact that Security Roles can be assigned to Owner Team records, similar to Users. Access Teams require a much higher degree of ongoing management, as you will need to constantly review their membership to verify that only approved Users are members. Reports can be quickly generated for records that are owned by a Team and/or which Users are part of a particular Access Team record via the applications Advanced Find feature. This can assist greatly in satisfying any ongoing documentation requirements. Business Units\nGetting to grips with how Business Units operate can be one of the major challenges when first learning about CRM/D365E. They provide a means of segregating data within your instance so that only Users that are part of a particular \u0026ldquo;unit\u0026rdquo; can interact with the records that most directly concern them. 
Business Units can be best understood and utilised when thinking about your organisation in the following terms:\nBusiness Departments Subsidiaries/Parent Companies Regions Taking the third of these examples, you could, therefore, look at having a \u0026ldquo;root\u0026rdquo; Business Unit, with \u0026ldquo;child\u0026rdquo; Units for each region that your organisation operates within. Users can then be moved into the appropriate Business Unit for their locality and, as a consequence, only have access to Account records that are situated within their location. Business Units are too big a subject to cover exhaustively here, so I would strongly recommend reading up on the topic separately to gain a fuller understanding of what they are.\nKey GDPR Takeaways\nBusiness Units provide an effective means of satisfying Article 5\u0026rsquo;s requirements for data protection \u0026ldquo;by design and by default\u0026rdquo;. Remember that Users may still be able to see records that do not exist in their current Business Unit if they have been assigned a security role that gives them Parent:Child or Organization privilege on the entity in question (more on this in the next section). Each Business Unit will also have a corresponding Team created for it. These can be utilised to segregate out security permissions in a more centralised manner, as discussed above. Security Roles\nThe most important cornerstone of security within your CRM/D365E instance and the \u0026ldquo;glue\u0026rdquo; that holds all other components together, Security Roles define the permissions for every feature and entity within the application, giving you the opportunity to fine-tune access privileges on a granular basis. For example, you can grant a user permission to read all records within their current Business Unit, but only allow them to modify records that they directly own. Privileges are structured very much in line with how Business Units operate, with each individual permission (Read, Create etc.) having the following \u0026ldquo;levels\u0026rdquo; of access:\nNo Access User Level - Can only perform the specified action on records owned by the User. Business Unit Level - Can only perform the specified action on records within the same Business Unit as the current User. Parent:Child Business Unit Level - Can only perform the specified action on records within the same or all child Business Units as the current User. Organization Level - Action can be performed against any record on the system. The potential is limitless with Security Roles and, if mastered correctly, they can solve a lot of the problems that GDPR may bring to the table.\nKey GDPR Takeaways\nMicrosoft provides a number of default Security Roles out of the box with the application and it may be tempting to utilise these directly instead of modifying or creating new ones specific to your needs. I would caution against this, particularly given that the roles may end up having excessive privilege levels on certain record types and could, by implication, fall foul of several articles within GDPR. Similar to how Teams can be used to represent teams or departments within an organisation, Security Roles are best utilised when they are broadly structured to provide the minimum level of privileges needed for several Users or more. This can also reduce the headache when it comes to documenting these roles.
New versions of the application (which come out twice each year) generally introduce new functionality and - as a result - new permissions required to successfully utilise them. Assuming you are updating your application in line with Microsoft\u0026rsquo;s recommended approach, these opportunities can be the best time to review your existing security roles, to verify that they are current and do not contain incorrect privileges. Quickly Generating Documentation of your Security Model\nTo assist you in gaining a \u0026ldquo;bird\u0026rsquo;s eye\u0026rdquo; view of your users and their access privileges, the application provides a means of achieving this - the User Summary report:\nThis report has been tucked away inside the application for many years, a fact attested to by its rather archaic look below. Regretfully, it hasn\u0026rsquo;t received any love or attention as part of recent updates 🙁\nHaving said that, the report does have some nice features:\nIt can be configured to run on a specific Business Unit, thereby providing a more closely defined list of the Users/Security Roles. Can be exported to PDF, Excel and other common file formats. Provides full information about each User, including their job title (make sure you are populating this field in your Active Directory first to ensure this appears!). If you have never run the report before, then I would strongly recommend that you check it out to determine whether it satisfies your documentation requirements around GDPR.\nHopefully, this post has given you a good flavour of what can be achieved within the application to fully build out a suitable security model within CRM/D365E. In next week\u0026rsquo;s post, we\u0026rsquo;ll look more carefully at the implications GDPR has surrounding data retention and how the Bulk Delete feature can be configured to automate this process. In the meantime, be sure to check out the other posts in the series if you haven\u0026rsquo;t already using the links below:\nPart 1: Utilising Transparent Database Encryption (TDE)\nPart 2: Getting to Grips With Field Security Profiles\n","date":"2017-07-30T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/getting-your-dynamics-crmdynamics-365-for-enterprise-system-gdpr-ready-part-3-implementing-documenting-a-security-model/","title":"Getting your Dynamics CRM/Dynamics 365 for Enterprise System GDPR Ready - Part 3: Implementing \u0026 Documenting A Security Model"},{"content":"This is part 2 of a 5 part series, where we take a closer look at the practical implications the General Data Protection Regulation (GDPR) will have upon your organisation and some of the ways Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) can assist you as part of the transition. Last week, we took a look at the database encryption feature within the application and why you should devote some time to understanding how it works. The primary focus of this week\u0026rsquo;s post is how an organisation can ensure that highly sensitive data categories are made accessible to authorised individuals only.\nAll posts in the series will make frequent reference to the text (or \u0026ldquo;Articles\u0026rdquo;) contained within Regulation (EU) 2016/679, available online as part of the Official Journal of the European Union - a particularly onerous and long-winded document.
If you are based in the UK, you may find solace instead by reading through the ICO\u0026rsquo;s rather excellent Overview of the General Data Protection Regulation (GDPR) pages, where further clarification on key aspects of the regulation can be garnered.\nIntroduction - Sensitive Data Categories, their meaning and practical implications\nWe saw as part of last week\u0026rsquo;s post the importance encryption plays as a \u0026ldquo;reasonable\u0026rdquo; step that any well-established organisation should have implemented to safeguard themselves against the risk of a data breach. The implications of a data breach are covered more in-depth under Articles 33, 34 and 35 of the regulation. The key takeaway from this is that encryption is by no means a silver bullet, and you must instead look at a complementary range of solutions to mitigate the risk and impact of a data breach.\nAlthough not technically a form of encryption, Field Level Security can be seen as an apparatus for providing encryption-like functionality on a very granular basis within your CRM/D365E deployment. Whilst implementing them does broadly conform to the specifications as set out in Article 32 of GDPR, they do also provide a means of satisfying some of the requirements set out in Article 9, which states clearly:\nProcessing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person\u0026rsquo;s sex life or sexual orientation shall be prohibited.\nUnless one of the following conditions apply:\nThe data subject has provided consent to record the data or has placed the details into the public domain. The data needs to be processed as part of a specific line of legitimate business (employment, social security, social protection law, not-for-profit foundation/association, medical care, public health purposes or as part of scientific/historical research). The recording of such personal details is required to protect the vital interests of the person concerned. Many of the organisations listed above may already be using CRM/D365E as their primary business system and, as a consequence, will be storing the types of information referenced above. Whilst this is surely a legitimate case of data processing, issues may arise, for example, when it comes to which persons within the organisation can see and access this data; a medical doctor/nurse accessing a patient\u0026rsquo;s health information is appropriate, but surely a receptionist or IT support personnel viewing a patient record has no fair interest in viewing this information. 
Having appropriate controls in place to protect against these types of scenarios become a primary concern under GDPR, and Article 30 enshrines this further by requiring organisations to clearly document and implement processes that define individuals access to personal data:\nEach controller and, where applicable, the controller\u0026rsquo;s representative, shall maintain a record of processing activities under its responsibility\u0026hellip;[including] the categories of recipients to whom the personal data have been or will be disclosed including recipients in third countries or international organisations;\nTo summarise, therefore, by piggybacking upon the very robust security model contained within CRM/D365E, Field Level Security can very quickly be implemented to ensure that users of the system only see the information that is relevant to them as part of their role, without disrupting the entire end-user experience in the process.\nWith this in mind, let\u0026rsquo;s take a look at how straightforward it is to begin working with Field Security, by following the steps outlined below:\nIdentify the field(s) that need to be secured from being accessed by a specific group of users. Navigate to the field(s) properties and verify that the Field Security option has been set to Enable. For this example, we are going to use the Primary Contact field on the Account entity: Within the Customizations area of the application, select the Field Security Profiles option on the left-hand bar and then click on New to create a Field Security Profile: On the New Field Security Profile window, specify a name and an (optional) description value for the new profile and press the Save button: Once saved, you can then begin to configure the two most important aspects of the profile - the permissions that are granted to secured fields and the Users/Teams in the application that they apply to. In this example, we are going to restrict the Primary Contact field from step 1) so that users who are part of our Account Executive team role cannot view, update or create a record with a value in this field. To begin with, click on the Teams button and then click on Add to find and select the Account Executive team role: Next, click on the Field Permissions icon and double-click the Primary Contact field on the list. Verify that the Allow Read, Allow Update and Allow Create options are set to No: Now, when we log into the application as a user who is a part of the Account Executive role and navigate to a sample record on the system, we can see that the field in question has been obfuscated. We have no way of seeing, changing or otherwise interacting with the value contained within this field:\nFields that are impacted in some way as a result of Field Security can always be clearly distinguished by the key icon on the top left of the field name. This can prove useful in helping users to understand their current levels of access and in troubleshooting why a user cannot read or modify a particular field.\nSo what have we learned about Field Security Profiles and how they conform to GDPR? 
Here\u0026rsquo;s a quick summary of the key points:\nDemonstrates that sensitive data is stored with \u0026ldquo;appropriate security\u0026rdquo; in place (Article 5) They can be used as a tool for storing and controlling access to sensitive data types (Article 9) Provides a mechanism to demonstrate compliance with the relevant articles of GDPR, should the organisation be subject to an Audit as a Data Processor (Article 28) Can be seen as an appropriate technical safeguard in the protection of both non-sensitive and sensitive data types (Article 32) Could be used as documentary evidence (or the basis thereof) that covers the documentation requirements for data processing (Article 30) Thanks for reading! As part of next week\u0026rsquo;s post, we will take a deeper dive into CRM/D365E\u0026rsquo;s wider security model and the importance of documentation in the context of GDPR.\n","date":"2017-07-23T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/getting-your-dynamics-crmdynamics-365-for-enterprise-system-gdpr-ready-part-2-getting-to-grips-with-field-security-profiles/","title":"Getting your Dynamics CRM/Dynamics 365 for Enterprise System GDPR Ready - Part 2: Getting to Grips With Field Security Profiles"},{"content":"Monday may not have been my day of choice for attending an all-day session on the General Data Protection Regulation (GDPR), but it was something that I walked away from feeling more well-informed on:\nInteresting and informative day learning about GDPR. Fun fact: Organisations can no longer charge for SAR\u0026#39;s May 25th 2018 onwards.\n\u0026mdash; Joe Griffin | #ProCodeNoCodeUnite (@joejgriffin) July 10, 2017 If you currently work within the IT industry, then I would be very surprised if you have not yet come across GDPR or are not already in the process of assessing what your organisation needs to do to prepare for it. In a nutshell, GDPR replaces existing data protection legislation within EU countries on May 25th 2018 (for the UK, this will be the Data Protection Act 1998). GDPR brings data protection guidelines firmly into the 21st century and provides a framework for organisations to apply the appropriate steps to protect individuals\u0026rsquo; data. Whilst there is much within the updated guidelines that remains unchanged, there is additional emphasis towards organisations implementing the appropriate levels of security (both physical and technical), applying regular auditing processes and documenting their processes to protect against a possible data breach. For an IT professional, one of the overriding questions you should be starting to ask yourself is \u0026ldquo;What can I do to make the systems I support/implement compatible with GDPR?\u0026rdquo;\nDynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) is one system that is likely to be in place within businesses/organisations across the EU, and one that is arguably best placed to help meet the challenges that GDPR brings to the table. The wide breadth of functionality within the application can be picked up and adapted to suit the following requirements:\nProvide backend database encryption, to protect your key customer data in the event of a data breach. Ensure that highly sensitive data categories are only accessed by relevant personnel within your organisation. Enable you to implement a clear and comprehensive security model within your system, which can then be clearly documented. Help you to implement a data retention policy that is in line with contractual and statutory requirements.
Allow you to quickly and effectively respond to subject access requests, via the use of easy-to-generate document templates. All of the above can be achieved using out of the box functionality within the application and, in some cases, more straightforwardly than you may assume.\nAs part of this and the next couple of weeks\u0026rsquo; blog posts, I will take a look at each of the bullet points above, step by step. The aim of this is to highlight the specific elements within GDPR that each potential situation covers, how to go about implementing a solution within CRM/D365E to address each one and to provide other thoughts/considerations to better prepare yourself for GDPR. By doing so, I hope to make you aware of functionality within the application that hitherto you may never have looked at before and to explore specific use cases that provide a wider business relevance.\nAll posts in the series will make frequent reference to the text contained within Regulation (EU) 2016/679, available online as part of the Official Journal of the European Union - a particularly onerous and long-winded document. If you are based in the UK, you may find solace instead by reading through the ICO\u0026rsquo;s rather excellent Overview of the General Data Protection Regulation (GDPR) pages, where further clarification on key aspects of the regulation can be garnered.\nWithout further ado, let\u0026rsquo;s jump into the focus for this week\u0026rsquo;s post: Understanding and effectively utilising Transparent Database Encryption (TDE) within your CRM/D365E deployment.\nOne area within GDPR that has changed significantly is data breaches and penalties for organisations that have demonstrated a clear dereliction of their responsibilities. When assessing whether a fine is issued by your country\u0026rsquo;s appropriate authority, which could number in the millions of £\u0026rsquo;s or more, a determination is made as to whether the company has implemented sufficient technical controls to mitigate the potential impact of a data breach. Article 32 sets this out in broad terms:\nTaking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the [Data] controller and the [Data] processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate: (a) the pseudonymisation and encryption of personal data; (b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services; (c) the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident; (d) a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.\nIt is worth noting that an assessment will be made of your business\u0026rsquo;s size, turnover etc. when a judgement is made on what \u0026ldquo;appropriate\u0026rdquo; steps your organisation has taken to mitigate their risk in this regard. Smaller businesses can, therefore, breathe a sigh of relief in not having to implement large scale and costly technical solutions within their businesses.
Speaking more generally though, the importance of encryption within your organisation\u0026rsquo;s database and application systems becomes a primary concern in demonstrating GDPR compliance. It could also help you when it comes to determining whether you need to report a Data Breach, as an encrypted piece of hardware does not necessarily expose personal data; arguably meaning that no data breach has occurred.\nCRM/D365E gives us the option to utilise a well-established feature within SQL Server to implement encryption for our data - Transparent Database Encryption (or TDE). Even better, it\u0026rsquo;s enabled by default. That being said, it is prudent for you to take a copy of the default encryption key or change it entirely if you haven\u0026rsquo;t done so already.\nDoing either of the above is relatively straightforward. Navigate to Settings -\u0026gt; Data Management within the application and then click on the Data Encryption icon:\nThe Data Encryption pop-up window will appear, as indicated below:\nFrom here you have two options at your disposal:\nUse the Show Encryption Key option to copy and paste the key to your location of choice. Note that as outlined by Microsoft, the key may contain Unicode characters, leading to a potential loss of data when using applications such as Notepad. Generate a new key that meets the requirements set out above and then click on Change. In both cases, ensure that the encryption key is stored securely and segregated as far away as possible from your CRM/D365E deployment. Keep in mind as well that there are specific privileges that control whether a user can access the above or even modify the encryption key in the first place. These privileges can be found on the Core Records tab within a Security Role page:\nIt may be tempting, knowing that encryption is enabled by default, to put your feet up and not worry about it. Here\u0026rsquo;s why it\u0026rsquo;s important to securely hold/segregate your database encryption key and also to think carefully about which users in your organisation have full Administrative privileges on the application:\nLet\u0026rsquo;s assume the following scenario: your on-premise CRM 2016 organisation has database encryption enabled and SQL Server is installed on the same machine, along with all database files. The database encryption key is saved within a .txt file on the same computer.\nA rogue member of staff with full Administrative privileges on CRM or an attacker manages to gain access to this server, in the process taking your CRM organisation\u0026rsquo;s .mdf database file. They also manage either to take a copy of the .txt file containing the encryption key or to retrieve the currently configured encryption key by accessing your CRM instance. This person now has the ability to both mount and access the database file without issue. Under GDPR, this would constitute a data breach, requiring your business to do the following as immediate steps:\nNotify the supervisory body within your country within 72 hours of the breach occurring (Article 33) Notify every person whose personal data was stored in the database that a breach has occurred (Article 34) Record the nature of the breach, the actual effect caused by it and all remedial steps taken to prevent the occurrence of a breach again in the future. All of this may be required by the supervisory body at any time (Article 35) The fun does not stop there: depending on what processes your business had in place and, given the specific nature of the scenario, a fine is more than likely.
This is due to the clear steps that could have been taken to prevent the database from being so easily accessible. Having to explain this in front of senior executives of a business is not a prospect that any of us would particularly relish and could have been avoided had the following steps been implemented:\nThe rogue member of staff had been given a much more restrictive security role that did not grant the Manage Data Encryption key - Read privilege. The SQL Server instance had been installed on a different server. The database encryption key had been saved on a different server. The database encryption key had been saved in a password-protected/encrypted file. This list is by no means exhaustive, and there is ultimately no silver bullet when it comes to situations like this; however, you can manage your risk much more effectively and demonstrate to authorities like the ICO that you have taken reasonable steps by implementing some of the appropriate measures highlighted above.\nIn next week\u0026rsquo;s post, we will take a look at the importance of Field Security Profiles and how they can be utilised to satisfy several of the key requirements of GDPR in a pinch!\n","date":"2017-07-16T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/getting-your-dynamics-crmdynamics-365-for-enterprise-system-gdpr-ready-part-1-utilising-transparent-database-encryption-tde/","title":"Getting your Dynamics CRM/Dynamics 365 for Enterprise System GDPR Ready - Part 1: Utilising Transparent Database Encryption (TDE)"},{"content":"I was recently involved in deploying my first ever Office 365 Group. I already had a good theoretical understanding of them, thanks to the curriculum for the Business Applications MCSA, but I had not yet seen how they perform in action. The best way of summing them up is that they are, in effect, a distribution group on steroids. As well as getting a shared mailbox that can be used for all communications relating to the group\u0026rsquo;s purpose, they also support the following features:\nShared Calendar SharePoint Document Site Shared OneNote document Shared Planner In a nutshell, they can be seen as an excellent vehicle for bringing together the diverse range of features available as part of your Office 365 subscription. What helps further is that they are tightly integrated as part of the tools that you likely already use each day - for example, they can be accessed and worked with from the Outlook desktop client and the Outlook Web Access (OWA) portal.\nGiven that this feature is a very Office 365-centric component, the natural question emerges as to why an exam for Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) would want to test your knowledge of them. Since the release of Dynamics CRM 2016 Update 1, you now have the option of integrating Office 365 Groups with the application, to provide a mechanism for easily working with groups from within the CRM/D365E web interface, effectively providing a \u0026ldquo;bridge\u0026rdquo; for non-CRM/D365E users who are using Office 365.\nYou may be pleased to hear that the steps involved in getting set up with Office 365 Groups in CRM/D365E are remarkably straightforward. Here\u0026rsquo;s a step-by-step guide on how to get up and running with this feature within your business:\nMicrosoft provides a managed solution that contains everything you need to get going with Office 365 Groups, and this is made available as a Preferred Solution.
These are installed from the Dynamics 365 Administration Center by navigating to your instance, selecting the little pen icon next to Solutions and clicking on the Office 365 Groups record on the list that is displayed:\nClick on the Install button and then accept the Terms of Service - as Office 365 Groups creates an intrinsic link between your CRM/D365E and Office 365 tenant, it is only natural that data will need to be shared between both, so there are no major concerns in accepting this:\nThe solution will take a couple of minutes to install, and you can safely refresh the window to monitor progress. Once installed, the Settings sitemap area will be updated with a new button - Office 365 Groups:\nClicking into this will navigate you to the Office 365 Groups Integration Settings page, which allows you start configuring the entities you wish to use to utilise with Office 365 Groups:\nFor reference purposes, the default out of the box entities that can be used with this feature are as follows:\nAccount Competitor Contact Contract Case Invoice Lead Opportunity Product Quote Sales Literature You may be wondering if it is possible to enable additional entities for use with Office 365 Groups. At the time of writing, only the system entities recorded above and custom entities can be used with Office 365 Groups.\nNow that we know how to get CRM/D365E setup for Office 365 Groups, let\u0026rsquo;s look at how it works when set up for the Account entity:\nGoing back to the Office 365 Groups Integration Settings (if you have closed it down), click on the Add entity button to enable a drop-down control, containing a list of the entities referenced above. Select Account and, when you are ready to proceed, click Publish all to enable this entity for Office 365 Groups functionality:\nFor this example, the Auto Create button is left blank. I would recommend that this setting is always used, so as to prevent the creation of unnecessary Office 365 Groups, that may get named incorrectly as a consequence (you\u0026rsquo;ll see why this has the potential to occur in a few moments).\nOnce enabled, when you navigate to an existing Account record, you will see a new icon on the Related Records sitemap area:\nAfter clicking on this, you are then asked to either Create a new group - with the ability to specify its name - or to Search for an existing group. The second option is particularly handy if you have already been using Office 365 Groups and wish to retroactively tie these back to CRM/D365E:\nFor this example, we are going to create a new group. The process can take a while (as indicated below) so now may be a good opportunity to go make a brew 🙂\nLeaving the screen open will eventually force a refresh, at which point your new group will appear, with all the different options at your disposal:\nWith your group now up and running, you can start uploading documents, configure the shared calendar and fine-tune the group\u0026rsquo;s settings to suit your purposes. Here are some handy tips to bear in mind when using the group with CRM/D365E:\nJust because the group is linked up with CRM/D365E doesn\u0026rsquo;t mean that you have to be a user from this application to access the group. This is one of the great things about utilising Office 365 Groups with CRM/D365E, as standard Office 365 users can join and work with the group without issue. 
The only thing you have to remember is that the Office 365 user has to have the appropriate license on Office 365 - as indicated by Microsoft, any subscription that gives a user an Exchange Online mailbox and SharePoint Online access will suffice. Remember that the Conversations, Notebook and Documents features are not in any way linked with the equivalent CRM/D365E feature. For example, any Conversation threads will not appear within the Social Pane as an activity; you will need to navigate to the Office 365 Group page to view these. Utilising Office 365 Groups as an end-user requires that you have the appropriate security role access. If you do not, then you may be greeted with the following when attempting to open an Office 365 Group within the application: That\u0026rsquo;s right - a whole heap of nothing! 😁 To fix this, you will need to go into the user\u0026rsquo;s Security Role and ensure that they have the Organization-level ISV Extensions privilege, as indicated below:\nConclusions or Wot I Think\nOffice 365 Groups present a natural choice when working as part of large-scale teams or projects - especially when they are internally based. They can also be a good fit for when you wish to liaise with 3rd party organisations, thanks to the ability to grant Guest access to external accounts. Having the ability to then tie these groups back within CRM/D365E is useful, but I do wonder whether they are a good match for all of the record types that Microsoft suggests in the list above. Certainly, Account records are a justifiable fit if you are working with an organisation to deliver continuous services or multiple projects. I doubt highly, however, that you want to go to the trouble of creating a shared document repository for a new Lead record right off the bat - particularly if your CRM/D365E deployment is more focused towards B2C selling. You may be tempted to over-excitedly roll out Office 365 Groups carte blanche across your CRM/D365E deployment, but I would caution against this. Don\u0026rsquo;t forget that the creation of a new Office 365 Group will result in additional overhead when managing your Exchange Online mailbox lists and SharePoint sites, as well as having long-term storage implications for the latter. Acting prudently, you can identify a good business case for enabling specific entities for use with Office 365 Groups and ensure that you manage your entire Office 365 deployment in the most effective manner possible.\n","date":"2017-07-09T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/getting-started-with-office-365-groups-dynamics-crmdynamics-365-for-enterprise/","title":"Getting Started with Office 365 Groups (Dynamics CRM/Dynamics 365 for Enterprise)"},{"content":"The ability to incorporate document management functionality within Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) is one of the ways that the application integrates neatly with other products in the Microsoft \u0026ldquo;stack\u0026rdquo;, giving you the ability to drive further benefit from your existing CRM/D365E deployment. Documents that specifically concern a particular record can be stored within SharePoint and accessed via a few clicks within the application, allowing for quicker collaboration and visibility behind a specific contact, business or sales opportunity.
By leveraging the full functionality of SharePoint alongside this, businesses can begin to take advantage of features such as document history, check in/check out capability and the ability to access SharePoint content via the OneDrive desktop client, negating the need to work solely within a web browser to access your documents.\nWhen you are first getting to grips with Document Management, you may come across an oddity with no apparent way of resolving it: GUID names are appended onto the names of your SharePoint folders:\nThere is an understandable reason why this is done (which will be discussed later on in this post), but the folder names can appear jarring and nonsensical to end users of the application. Fortunately, there is a way in which you can change the global setting for this within the application, but it requires making some modifications to your CRM/D365E Organisation\u0026rsquo;s global settings - one of those changes that are simple to do if you know how to, but more complex if you don\u0026rsquo;t 🙂\nCRM\u0026rsquo;s/D365E\u0026rsquo;s Organisation settings are not exposed within an accessible format natively within the application. To make changes to these settings within the application, the best tool available for both Online and On-Premise versions of the application is the Organization Settings Editor. I have previously discussed how to install and use this tool on the blog, and it is a really straightforward way of updating some of the CRM\u0026rsquo;s/D365E\u0026rsquo;s more obscure settings to suit your preferences. The setting that controls all of this is the aptly named CreateSPFoldersUsingNameandGuid, which needs to be set to false. Once this is configured, all newly created SharePoint folders will no longer have the GUID appended to them.\nHow to Fix Existing Folders It may be the case that you have lots of existing SharePoint folders that are configured in the default manner, but you want to look at \u0026ldquo;tidying\u0026rdquo; them up. Simply renaming the SharePoint folder will not work, as CRM/D365E stores \u0026ldquo;pointers\u0026rdquo; to each individual folder set up for Document Management, containing a partial URL link; a link that will become broken if anything is modified within SharePoint. To correctly fix your folders and not break anything, you will need to do the following:\nLocate the Document Location record within CRM/D365E and modify the Relative URL value to remove the GUID value (e.g. the Document Location record for Test Co5_B3D5C8DFDB77E51181023863BB357C38 should be updated to have a Relative URL value of Test Co5). You may receive a warning that the URL does not exist, which can be safely disregarded. Rename the folder in SharePoint to match the updated Relative URL value from above. This could potentially be a laborious task, depending on the number of Document Locations involved. To ease you in your journey, I would recommend digging out the CRM/D365E SDK \u0026amp; the .NET Client API for SharePoint to iteratively update all Document Locations and ditto for all folders within SharePoint Online, based on a .csv/spreadsheet input. A rough sketch of the CRM/D365E side of this approach is included below.
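To give a flavour of what that bulk tidy-up could look like, below is a purely illustrative console application sketch. It is not a production-ready tool: the connection string is a placeholder, and the assumption that every Relative URL ends with an underscore followed by a 32 character GUID is mine, rather than anything prescribed by the SDK.

using System;
using System.Text.RegularExpressions;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Tooling.Connector;

class DocumentLocationTidyUp
{
    static void Main()
    {
        // Hypothetical connection details - substitute your own organisation URL and credentials.
        IOrganizationService service = new CrmServiceClient("AuthType=Office365;Url=https://yourorg.crm11.dynamics.com;Username=admin@yourorg.onmicrosoft.com;Password=yourpassword");

        // Retrieve every Document Location record, along with its Relative URL (paging omitted for brevity).
        var query = new QueryExpression("sharepointdocumentlocation")
        {
            ColumnSet = new ColumnSet("relativeurl")
        };

        foreach (Entity location in service.RetrieveMultiple(query).Entities)
        {
            string relativeUrl = location.GetAttributeValue<string>("relativeurl");
            if (string.IsNullOrEmpty(relativeUrl)) { continue; }

            // Strip a trailing underscore followed by a 32 character GUID,
            // e.g. "Test Co5_B3D5C8DFDB77E51181023863BB357C38" becomes "Test Co5".
            string tidiedUrl = Regex.Replace(relativeUrl, "_[0-9A-Fa-f]{32}$", string.Empty);
            if (tidiedUrl == relativeUrl) { continue; }

            var update = new Entity("sharepointdocumentlocation", location.Id);
            update["relativeurl"] = tidiedUrl;
            service.Update(update);

            Console.WriteLine("Updated '" + relativeUrl + "' to '" + tidiedUrl + "'");
        }
    }
}

The corresponding folder renames on the SharePoint side would still need to be scripted separately (for example, via the SharePoint .NET Client API), and a change like this should always be trialled against a Sandbox instance with a full backup to hand before going anywhere near production.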
Having the ability to make your SharePoint folders look more visually appealing is nice, but keep in mind the following\u0026hellip;\nBy storing a GUID within each folder name, the application can always ensure that the correct folder can be linked back to a CRM record. By implication, this also facilitates situations where duplicate record names are allowed within your environment (for example, you could have two Account records both called Test Company Ltd). By overriding the above setting within the application, you lose the ability to support duplicate folder names; instead, the application will resolve back to any existing folder that matches the Name of the record, essentially becoming a document repository for 2 records as opposed to 1.\nAn example can better demonstrate this. Below is a test Account record that has been setup for Document Management - with a few test documents saved within and a Document folder successfully configured in SharePoint called My Test Account:\nWhen we then go and create a brand new Account record with the same name - My Test Account - and go to configure Document Management, we are asked to confirm the creation of a Folder with the same name as the one already setup:\nOnce confirmed, instead of seeing a blank folder, all of the test documents set up on the similarly named record are visible - because the application is pointing to the folder above:\nIt is highly unlikely that you would allow duplicate record names in the first place within your CRM/D365E instance (particularly for entity types such as Account), so this may be a concern that requires little notice. However, be sure to perform a full analysis of all the record types that you are hoping to use with Document Management, as there is no way to fine tune the global setting to accommodate specific entities - it is all or nothing.\nConclusions or Wot I Think Keeping things looking tidy and non-technical are endeavours that need to drive any successful software deployment. End users who are exposed to strange-looking, \u0026ldquo;techie\u0026rdquo; stuff as part of working with a system day in day out may soon start to lose faith in a system, hampering user adoption and faith that it is fit for purpose. Removing GUID\u0026rsquo;s from a SharePoint folder can obviously present some serious technical problems, depending on the nature of your deployment; but it is arguably something that you should consider if you want to ensure that end users feel that their SharePoint folders are \u0026ldquo;correctly\u0026rdquo; named (i.e. do not contain superfluous information). As with anything in IT, proper testing must be carried out for all possible scenarios before rolling out a feature change like this and, assuming your business ticks all of the boxes for safely removing GUID\u0026rsquo;s from your SharePoint folders, it is definitely something to be considered seriously.\n","date":"2017-07-02T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/preventing-folder-names-with-guids-in-sharepoint-dynamics-crmdynamics-365-for-enterprise/","title":"Preventing Folder Names With GUID's in SharePoint (Dynamics CRM/Dynamics 365 for Enterprise)"},{"content":"If you are British and a keen Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) fan, then May 4th, 2017 was a proud day to be both. This is because Microsoft announced the general availability of UK hosted D365E instances. UK-based Office 365 customers who configure a new D365E subscription will have their instance(s) hosted within the UK, using the brand new crm11 URL identifier.
This is in line with the launch of 2 Azure Data Centre regions within the UK last year and represents an important development in the evolution of Microsoft\u0026rsquo;s cloud services within the UK.\nFor the \u0026ldquo;old fogies\u0026rdquo; (like me!), previously all CRM/D365E tenants in the UK were hosted by default within the Europe, Middle East and Africa (EMEA) region. The data centres for these locations are hosted within Amsterdam and Dublin, areas that are within the European Union and somewhat local to the UK. Those with tenants within this region may now be asking whether it is possible to move their existing instances to the UK. The reasons for this may be straightforward, complex or oddly patriotic in nature - ranging from data residency requirements through to the desire to be a proud Brit with a wholly British D365E instance 🙂 In this week\u0026rsquo;s post, we\u0026rsquo;ll take a look at whether it is possible to currently migrate an existing non-UK based tenant into the UK via the various options at our disposal.\nFor those who do not wish to read ahead, here\u0026rsquo;s the TL;DR version\u0026hellip;\nNo, it is not currently possible to migrate your CRM/D365E instance to the UK – either via a support request to Microsoft or via the Backup/Restore feature within the Administration Centre.\nIf you have decided to stick around, then let\u0026rsquo;s take a closer look at the two avenues potentially at our disposal for migrating a CRM/D365E instance: A Microsoft Support Request and the Backup/Restore Feature.\nAsking Microsoft Directly: What did they say? Microsoft supports the ability for businesses to move their CRM/D365E instance to an entirely different region. This is all handily summarised as part of a TechNet article:\nThe Geo Migration feature for Dynamics 365 (online) will allow customers to move their instances in a single tenant from one region to another.\nI was recently involved as part of a support request with Microsoft, in which we asked whether it is possible to migrate an EMEA D365E instance to the UK. We were told that it is not currently possible to do this, with the following response given:\nThe data residency option, and the availability to move customer data into the new region, is not a default for every new region we launch. As we expand into new regions in the future, we\u0026rsquo;ll evaluate the availability and the conditions of data moves on a region by region basis.\nNo specific timeframes were given on when (if ever) this type of Geo Migration would be made available when this was queried with Microsoft. However, I would posit that it is likely in the near future, given that you can currently request the very same for your Office 365 tenant. At the time of writing (June 2017), however, the Geo Migration option is not viable for migrating your CRM/D365E instance across to the UK.\nBackup and Restore: The Saviour of the Hour? The ability to back up and restore your CRM/D365E Online instance(s) was a very welcome feature when it was introduced last year. Microsoft frequently performs backups of your instance(s), to provide sufficient disaster recovery potential on their end. These backups are exposed to Administrators via the Administration Centre, which also provides the ability to generate your own backups at any time. These can then be restored into a spare Sandbox instance for testing/development. The key thing to remember with this feature is that it is limited to Online backup/restore only.
You cannot, for example, download a copy of the SQL Server backup file and then use this as part of an On-Premise install.\nYou may be thinking at this stage this feature would allow us to get around the above issue by taking a backup of your non-crm11 existing organisation and restoring to your crm11 instance. Unfortunately, this is not possible, either via the Administration Centre or through logging a ticket with Microsoft. This is very likely due to the fact that different regions have different Administration Centres, with all of the functionality within that (backup/restore, administrative settings and update management) contained for instances within that specific region.\nThis can be confirmed by taking a look at the URL. When you are within the EMEA region, for example, the URL looks like this:\nhttps://port.crm4.dynamics.com/G/Instances/InstancePicker.aspx\nWhereas for the Great Britain (GBR) region its:\nhttps://port.crm11.dynamics.com/G/Instances/InstancePicker.aspx\nBusinesses that have multiple instances across different regions are classed as \u0026ldquo;multi-tenanted\u0026rdquo; by Microsoft, and you have the ability to switch your region and access desired functionality from within the Administration Centre:\nGoing back to the above point re. functionality being \u0026ldquo;containerised\u0026rdquo;, this can be confirmed by accessing the properties of a crm11 instance whilst within the crm4 Administration Centre:\nOur options to Edit, Reset, Copy etc. the instance are non-existent; to make them appear, you have to open the Administration Centre for the GBR region.\nSo why is all of this so important? When speaking with customers regarding new IT Projects, the question of data location is one that is almost always raised. Whereas in the past, organisations would choose to host their entire IT infrastructure on-premise, the rise of cloud computing has meant it is generally more cost-effective to migrate infrastructure into a data centre or a SaaS provider, such as Microsoft. Whilst these solutions generally tick all of the boxes from a resiliency, performance etc. standpoint, the awkward tick box - the actual, physical location of the data itself - is more difficult to extrapolate. An organisation such as Microsoft, for example, has data centres all over the globe, leading to the potential of data leaving particular jurisdictions and some potentially controversial realities, such as the implication of the US Patriot Act on data that Microsoft holds. As a business, you may be required by local law to ensure that data is stored within the current jurisdiction, within the European Economic Area (EEA) and even be required to literally identify the server rack on which a particular database/system is hosted on. With this in mind, one barrier for adoption of CRM/D365E in the UK could be the question over data residency. There are also other performance considerations that arise from the location of your CRM/D365E instance. It is entirely feasible that connections to a UK hosted tenant will be faster compared to connecting to a tenant based in Europe, the US or anywhere else in the world. Administrators who may be attempting to identify ways in which their instance can run optimally may, therefore, benefit from having their CRM/D365E instance hosted in the UK.\nFinding an Interim Workaround For businesses who have strict, data residency requirements that could have serious legal implications, the situation is currently not ideal if they currently use CRM/D365E. 
In terms of finding a workaround until Microsoft support geo-to-geo migrations for UK regions, customisation components can be straightforwardly migrated via solutions, but other instance-specific settings/data could take a significant amount of time to move across. There are tools out there that can help in this:\nUse KingswaySoft\u0026rsquo;s Dynamics 365 SSIS Integration Toolkit to create a .dtsx package to migrate all of the required data to a newly provisioned instance. Utilise Scribe Online to configure a D365E to D365E one-time replication job. Migrate the data using the applications built-in tools and Configuration Migration Tool, available within the SDK. There is, fortunately, a way forward for the desperate, but the steps involved need to be carefully planned out in advance to ensure that no issues are encountered. For example, if you have recurring Workflows in your old environment, how do you migrate these across and ensure they are restarted correctly? I would envision that Microsoft will very soon offer the ability to migrate into the UK region, so it may end up being more prudent to hold off until this is announced and generally available.\n","date":"2017-06-25T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/migrating-existing-dynamics-crm-onlinedynamics-365-for-enterprises-instances-to-the-uk-is-this-possible/","title":"Migrating Existing Dynamics CRM Online/Dynamics 365 for Enterprises Instances to the UK: Is This Possible?"},{"content":"The rise in virtualisation within IT has profoundly altered the landscape of how servers and applications are deployed across the globe. Whereas in the past you would need to rely on multiple, physical hardware servers, you can now achieve similar levels of performance via the use of a single hardware server with virtualised versions of your operating systems deployed onto it. This is, in a nutshell, how \u0026ldquo;the cloud\u0026rdquo; effectively operates and even organisations who are not yet using a cloud computing environment may have virtualisation occurring somewhere within their infrastructure. The number of vendors who now offer solutions within this space grows every year, and more longstanding software providers have significantly adjusted their traditional offerings to take virtualisation into account.\nMicrosoft is one of these organisations, having moved more with the tide in recent years - to the extent that you can virtualise additional Windows operating systems on top of a physical hardware server at no additional cost. For those who are a bit shaky on terminology, like me, these are referred to as \u0026ldquo;Hyper-V Guest Operating Systems\u0026rdquo; or just \u0026ldquo;guests\u0026rdquo;. The situation as it stands currently means that the two editions of Windows Server 2016 (which can be upgraded to rather straightforwardly, you may be pleased to hear) offer very clear guest usage rights - up to 2 for Standard and unlimited for the Datacenter edition. This can offer numerous benefits for your business:\nFacilitates the quick deployment of additional virtual machine instances for additional production workloads, testing or development. Enables you to drive maximum benefit from your hardware investment by utilising as much virtual compute power as required. Greatly simplifies the process of activating your copies of Windows, as guest OS\u0026rsquo;s will be activated based on your host machine license key. 
As with anything, there is a small learning curve involved in deploying a solution of this nature, but those who are reasonably comfortable working with Windows Server should find the journey relatively smooth.\nI had a strange issue recently when setting up a lab environment, where I could not activate any guest Windows Server Datacenter Edition servers in the standard way (i.e. through the Settings page). As a result, I was getting the following errors/messages displayed constantly when working on the machine:\nClicking on the Change product key option does nothing, and you have no apparent way of forcing Windows to activate or to modify the Product Key supplied at installation. So how do you go about activating your copy of Windows?\nIt turns out this error is due to the Product Key not being entered correctly on initial setup. I (incorrectly) assumed that Hyper-V guest OS\u0026rsquo;s on a Windows Server 2016 Datacenter host did not require a Product Key and would automatically activate upon installation. This assumption was made on the basis that the Windows 2016 installation wizard now lets you skip adding a Product Key. So, instead of entering a key, I simply pressed Next and let the install proceed as normal. The correct course of action should have been to supply the appropriate Automatic Virtual Machine Activation (AVMA) key, which essentially \u0026ldquo;tells\u0026rdquo; Windows to look at the host operating system to confirm whether it is licensed accordingly. A full list of the AVMA codes for Windows Server 2012 R2 and Windows Server 2016 can be found on TechNet, with the appropriate license keys for Server 2016 reproduced below:\nDatacenter: TMJ3Y-NTRTM-FJYXT-T22BY-CWG3J Standard: C3RCX-M6NRP-6CXC9-TW2F2-4RHYD Essentials: B4YNW-62DX9-W8V6M-82649-MHBKQ So, to avoid having to completely re-install Windows, we need to find some way in which we can update the OS to use the correct license key. Fortunately, there is a way, if you are prepared to \u0026ldquo;get dirty\u0026rdquo; with the command prompt 🙂\nFirst of all, you will need to ensure that your Hyper-V Manager is configured for Guest services. This can be done straightforwardly enough by going into Settings\u0026hellip; on your virtual machine and verifying the appropriate box is ticked on the Integration Services tab. This will need to be done for each virtual machine that requires activation:\nNext, because we are prevented from modifying the Product Key via the GUI interface, we must resort to using the slmgr command to remove the current key and add the appropriate AVMA key. As an interesting aside, this is actually a Visual Basic script file and ranks for me as one of the most surprising instances I\u0026rsquo;ve found of Visual Basic being used within a Microsoft product!\nLog onto the virtual machine in question and open an elevated command prompt, executing the following command to remove the current license key:\nslmgr -upk After a few seconds, a pop-up message box will confirm that this has completed successfully:\nThen, run the following command to install your AVMA key. In the example below, we are using the Datacenter key to activate a Datacenter edition guest:\nslmgr -ipk TMJ3Y-NTRTM-FJYXT-T22BY-CWG3J As above, we will get confirmation back that the script has completed successfully:\nStraight away, you should notice that the Activate Windows message vanishes from the screen and this is confirmed by going into settings - happy days!
🙂\nAlthough it is a little bit frustrating that there is no way in which the product key can be modified from within the Settings page (I am assuming that this is a bug; let me know in the comments below if you have encountered the same problem), it is good that we have an alternative mechanism in place for ensuring that Windows can be activated correctly. The lesson is well learned: when deploying out a new guest Hyper-V Server instance, be sure to enter the correct AVMA key at setup to avoid any of the rigmarole illustrated above.\n","date":"2017-06-18T00:00:00Z","image":"/images/WindowsServer-FI.png","permalink":"/resolving-we-cant-activate-windows-on-this-device-right-now-error-on-windows-server-2016-hyper-v-guest/","title":"Resolving \"We can't activate Windows on this device right now\" Error on Windows Server 2016 Hyper-V Guest"},{"content":"When working with applications day in, day out, you sometimes overlook something that is sitting there, staring at you in the face. It may be an important feature or an inconsequential piece of functionality, but you never really take the time to fully understand either way just what it is and whether it can offer any distinct benefits or assistance. I realised a great example of this when recently deploying some new Plug-ins into Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E). When you are setting up a new Step for your Plug-in, you are given the option of specifying an Unsecure Configuration and Secure Configuration via a multi-line text box to the right of the window:\nI was curious about just what these are and why it is not something that you ever really come across when you are first learning about Plug-in development with the application. I took a closer look at what these text boxes do and, as part of this week\u0026rsquo;s blog post, I wanted to share my findings and provide a demonstration of how they work in practice.\nThe Theoretical Bit: Unsecure/Secure Configuration Overview\nTypically, when we want to get some juicy information relating to a piece of CRM/D365E functionality, we would turn to our good friends TechNet or MSDN. In this instance, however, there is no dedicated page that covers this topic in-depth. We must instead navigate to the Write a Plug-in Constructor page to find dedicated information about how these work:\nThe Microsoft Dynamics 365 platform supports an optional plug-in constructor that accepts either one or two string parameters. If you write a constructor like this, you can pass any strings of information to the plug-in at run time.\nThese \u0026ldquo;one or two\u0026rdquo; parameters are the multi-line text boxes indicated above. Information is exposed as string objects within you C# code and you enable this feature within your code by specifying the following, SDK adapted constructor within your Plug-in class:\npublic MyPlugin(string unsecureString, string secureString) { if (String.IsNullOrWhiteSpace(unsecureString) || String.IsNullOrWhiteSpace(secureString)) { throw new InvalidPluginExecutionException(\u0026#34;Unsecure and secure strings are required for this plugin to execute.\u0026#34;); } _unsecureString = unsecureString; _secureString = secureString; } As with anything, there are a number of important caveats to bear in mind with this feature. These can be gleaned via additional online sources:\nSecure configuration parameters can only be viewed by CRM/D365E Administrators, whereas unsecure can be viewed by any user of the application. 
As highlighted on the above MSDN page, Secure Configuration text is not passed to any Plug-in executing offline via the CRM/D365E for Outlook client. Secure configuration parameters are not transportable between environments via a solution export/import. In terms of use cases, the above articles highlight some potential scenarios that they are best utilised within. Perhaps the best example is for an ISV solution that requires integration with external web services to retrieve data that is then consumed by CRM/D365E. Credentials for these web services can be stored securely when the Plug-in is deployed via the use of Secure configuration parameters. Other than that, if you are developing a Plug-in for internal use, that is unlikely to be deployed/managed across multiple environments, then it is probably not worthwhile to look at utilising configuration parameters when you can just as easily specify these within your code.\nPractice Makes (for) Perfect (Understanding)!\nThe best way to see how something works is by getting hands-on and seeing how it works in action. Let\u0026rsquo;s assume you wish to deploy a plugin that executes whenever a record is opened/viewed by any user across the platform. The plugin should update the First Name (firstname) and Last Name (lastname) fields to match the value(s) in the Unsecure and Secure Configuration properties accordingly. The below plugin code will achieve these requirements:\nusing System; using Microsoft.Xrm.Sdk; namespace D365.BlogDemoAssets.Plugins { public class PostContactRetrieve_PluginConfigurationTest : IPlugin { private readonly string _unsecureString; private readonly string _secureString; public PostContactRetrieve_PluginConfigurationTest(string unsecureString, string secureString) { if (String.IsNullOrWhiteSpace(unsecureString) || String.IsNullOrWhiteSpace(secureString)) { throw new InvalidPluginExecutionException(\u0026#34;Unsecure and secure strings are required for this plugin to execute.\u0026#34;); } _unsecureString = unsecureString; _secureString = secureString; } public void Execute(IServiceProvider serviceProvider) { // Obtain the execution context from the service provider. IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); // Obtain the organization service reference. IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)); IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId); // The InputParameters collection contains all the data passed in the message request. if (context.InputParameters.Contains(\u0026#34;Target\u0026#34;) \u0026amp;\u0026amp; context.InputParameters[\u0026#34;Target\u0026#34;] is EntityReference) { Entity contact = new Entity(\u0026#34;contact\u0026#34;, ((EntityReference)context.InputParameters[\u0026#34;Target\u0026#34;]).Id); contact[\u0026#34;firstname\u0026#34;] = _unsecureString; contact[\u0026#34;lastname\u0026#34;] = _secureString; service.Update(contact); } } } } When deploying the plug-in using the Plugin Registration Tool, we specify the step to execute on the Retrieve message and to execute in the Pre-Operation Stage (otherwise the form will need to be refreshed to see the updated values!). We also need to specify our desired values for the First Name and Last Name fields in the appropriate Configuration fields. 
The Register New Step window should look similar to the below if configured correctly:\nWhen we navigate into the Jim Glynn (sample) Contact record within CRM/D365E, we can see that the Plug-in has triggered successfully and updated the fields to match against the values specified on Step above:\nWe can also confirm that the appropriate error is thrown when one of the configuration properties is missing a value, by modifying our Plug-in step and attempting to reload our sample Contact record:\nBy clicking Download Log File, we can view the error message specified as part of the InvalidPluginExecutionException call. Below is a modified excerpt of the ErrorDetails XML that is generated:\n\u0026lt;InnerFault\u0026gt; \u0026lt;ActivityId\u0026gt;ed4a2021-9c87-4f06-a493-6d804676bf96\u0026lt;/ActivityId\u0026gt; \u0026lt;ErrorCode\u0026gt;-2147220891\u0026lt;/ErrorCode\u0026gt; \u0026lt;ErrorDetails xmlns:d3p1=\u0026#34;http://schemas.datacontract.org/2004/07/System.Collections.Generic\u0026#34; /\u0026gt; \u0026lt;Message\u0026gt;Unsecure and secure strings are required for this plugin to execute.\u0026lt;/Message\u0026gt; \u0026lt;ExceptionSource i:nil=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;InnerFault i:nil=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;OriginalException i:nil=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;TraceText i:nil=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;/InnerFault\u0026gt; Conclusions or Wot I Think\nIt is impossible to become what I would like to term a \u0026ldquo;pub quiz champion\u0026rdquo; in CRM/D365E; what I mean by this is that I would defy anyone to rattle off every little detail and fact about the entire platform. As with any pub-quiz, those that do may more than likely end up cheating by having their phone out. With this metaphor in mind, I think Plug-in configuration properties would be an excellent topic for a quiz of this nature. As mentioned previously, it is not something that I was ever made aware of when starting to learn about Plug-in development and is not a feature touted regularly within the online community. Perhaps this is because of its very specific and limited application - although it is handy to have at our disposal, I think its usage is really only targeted towards those who are developing solutions that are deployed across multiple environments AND need to store configuration properties for external URL\u0026rsquo;s/web services in a compact and secure manner. Therefore, if you are currently having to use a custom entity within the application to store this type of information, it would make sense to reduce the footprint of your solution within the application itself and make the appropriate changes to use Secure configuration parameters instead. Using a bit of ingenuity (such as XML configuration parameters), you can achieve the same requirements without the need to customise the application unnecessarily.
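To make that last point a little more concrete, here is a purely hypothetical sketch (it is not taken from the SDK samples) of how a Secure Configuration value containing a small XML document - with, say, url and apikey elements - could be consumed within a plug-in constructor:

using System;
using System.Xml.Linq;
using Microsoft.Xrm.Sdk;

public class ExternalServicePlugin : IPlugin
{
    private readonly string _serviceUrl;
    private readonly string _apiKey;

    public ExternalServicePlugin(string unsecureString, string secureString)
    {
        if (String.IsNullOrWhiteSpace(secureString))
        {
            throw new InvalidPluginExecutionException("A secure configuration value is required for this plugin to execute.");
        }

        // Hypothetical secure configuration format:
        // <settings><url>https://example.com/api</url><apikey>some-secret-value</apikey></settings>
        XElement settings = XElement.Parse(secureString);
        _serviceUrl = (string)settings.Element("url");
        _apiKey = (string)settings.Element("apikey");
    }

    public void Execute(IServiceProvider serviceProvider)
    {
        // _serviceUrl and _apiKey would be consumed here when calling out to the external web service.
    }
}

The same parsing approach would work for the Unsecure Configuration too, with the usual caveat that anything placed there can be viewed by any user of the application, so credentials and other secrets belong firmly on the Secure side.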
\n","date":"2017-06-11T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/what-is-unsecuresecure-configuration-on-a-dynamics-crm365-for-enterprise-plugin/","title":"What is Unsecure/Secure Configuration on a Dynamics CRM/365 for Enterprise Plugin?"},{"content":"Generally, when you are looking at adopting Dynamics CRM/Dynamics 365 for Enterprise (D365E) within your business, you can be reasonably satisfied that the majority of what is already configured within the system can be very quickly adapted to suit your business needs. Whether it\u0026rsquo;s the Lead to Opportunity sales process or the entire Case management module, the functionality at your disposal is suitable for many organisations across the globe. The great thing as well is that, should you wish to fine-tune things further, you have a broad range of options at your disposal that can help you achieve your objectives - sometimes in very specific and highly unique ways. I have previously looked at a good example of this on the blog - namely, how to override the system\u0026rsquo;s built-in pricing engine in favour of your own - and, assuming you have a good understanding of C# and how to deploy plugins to the application, you can spin an important aspect of the system\u0026rsquo;s functionality on its head to match how your business operates.\nHaving this ability is, undoubtedly, a real boon, but can present some odd behaviours. For example, you may start to notice that suddenly the Extended Amount field is no longer being populated with data after implementing your custom pricing engine. The example pictures below demonstrate a before and after example of adding a Product line item to the Quote entity, using the exact same sample Product:\nThe odd thing about this is that, as soon as you click into the record, you will suddenly see a value appear in this field. Very strange!\nIt is difficult to pinpoint exactly what is causing the problem, but I can take a \u0026ldquo;stab in the dark\u0026rdquo;. CRM/D365E uses the CalculatePrice message to determine the points when either a) the default price engine or b) a custom one is triggered to perform all necessary calculations. Although there is no official documentation to back this up, I suspect that this message is only triggered when you Update or Retrieve an existing Product line item record (regardless of whether it is an Opportunity Product, Quote Product etc.). This is proven by the fact that, as soon as we click into our Product record, the Extended Amount field is suddenly populated - the platform has triggered the Retrieve message as a result of you opening the record and then, as a next step, forces the CalculatePrice message to also fire. The important thing to clarify with this point is that you must have a custom pricing engine implemented successfully within the application for this to work. Otherwise, don\u0026rsquo;t be too surprised if the Extended Amount value remains at 0.\nWhilst the workaround for this is somewhat tolerable if you are working with a small subset of records and do not rely on the Extended Amount as part of any existing reporting within the application, this could really start to cause problems for your end users in the long term and give an impression that the application does not \u0026ldquo;work\u0026rdquo; as it should do. Fortunately, there is a solution that we can look at implementing that will hopefully lead to some happy fingers from not needing to click into records anymore 🙂 Be sure to have the CRM/D365E SDK handy before you begin the steps below!
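Incidentally, if you are not sure what a custom pricing engine plug-in actually looks like under the hood, the following is a heavily simplified, purely hypothetical sketch of one registered against the CalculatePrice message for the Quote entity - it simply sets Base Amount and Extended Amount on each Quote Product to Price Per Unit multiplied by Quantity. It is illustrative only (discounts, taxes and header totals are ignored) and is not the implementation referred to earlier on the blog:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

namespace D365.BlogDemoAssets.Plugins
{
    // Heavily simplified, hypothetical custom pricing engine - registered on the
    // CalculatePrice message as a Synchronous, Post-Operation step.
    public class QuoteCustomPricingExample : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            IOrganizationServiceFactory factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = factory.CreateOrganizationService(context.UserId);

            // Guard against the plug-in re-triggering itself when the line items are updated below.
            if (context.Depth > 1) { return; }

            // The record being priced is assumed to arrive as an EntityReference in the Target input parameter.
            if (!context.InputParameters.Contains("Target") || !(context.InputParameters["Target"] is EntityReference)) { return; }

            EntityReference target = (EntityReference)context.InputParameters["Target"];
            if (target.LogicalName != "quote") { return; }

            // Retrieve every Quote Product line belonging to the Quote (paging omitted for brevity).
            QueryExpression query = new QueryExpression("quotedetail")
            {
                ColumnSet = new ColumnSet("quantity", "priceperunit")
            };
            query.Criteria.AddCondition("quoteid", ConditionOperator.Equal, target.Id);

            foreach (Entity line in service.RetrieveMultiple(query).Entities)
            {
                decimal quantity = line.GetAttributeValue<decimal>("quantity");
                Money pricePerUnit = line.GetAttributeValue<Money>("priceperunit") ?? new Money(0m);

                Entity update = new Entity("quotedetail", line.Id);
                update["baseamount"] = new Money(quantity * pricePerUnit.Value);
                update["extendedamount"] = new Money(quantity * pricePerUnit.Value);
                service.Update(update);
            }
        }
    }
}

The registration steps that follow assume that something along these lines (or your own, rather more sophisticated, equivalent) is already deployed and working within your instance.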
Open up the Plugin Registration Tool from within the SDK, and log into your CRM/D365E instance. Scroll down to your Assembly and Plugin that contains your custom pricing engine. If already configured correctly, it should have a step configured for the CalculatePrice message on any entity, as a Synchronous, Post-Operation step. Right click your plugin and click on Register New Step to open the window that lets you specify the required settings for your step. Populate the form as follows: Message: Create Primary Entity: Select one of the Product line item entities that your custom pricing engine uses. The list of accepted entities is invoicedetail, opportunityproduct, salesorderdetail or quotedetail. Event Pipeline Stage of Execution: Post-Operation Execution Mode: Synchronous All other settings can be left as default. Your window should look similar to the below if configured correctly for the quotedetail entity:\nClick on Register New Step to add the step to the application. Repeat steps 3-4 for any additional Product line item entities that you are using with your custom pricing engine. Now, when you go back into CRM/D365E, the Extended Amount values will start to be populated automatically as soon as you add a new Product onto the Product line item subgrid.\nConclusions or Wot I Think\nWhilst the ability to override an important piece of CRM\u0026rsquo;s/D365E\u0026rsquo;s functionality is welcome, you do need to bear in mind the additional overhead and responsibility this leaves your organisation in ensuring that your custom pricing engine is correct and that you have adequately tested the solution to properly identify actions which are out of the ordinary, such as the one discussed in this post. What is slightly frustrating about this quirk, in particular, is the lack of clear documentation regarding the CalculatePrice message from Microsoft. Granted, the message is only exposed for minimal interaction from an SDK point of view and is, for all intents and purposes, an internal application message that we shouldn\u0026rsquo;t really mess with or care about. Having said this, even just a brief summary of when the message is triggered on the platform would have made it instantly more understandable why any custom pricing calculation engine will fail to provide you with an instant amount within your Extended Amount field. In the end, however, I am pleased that there is a straightforward workaround that can be put into place to ensure that things work as expected; hopefully to the extent that it becomes virtually impossible to determine easily whether your organisation is using the default or a custom pricing engine in the first place.\n","date":"2017-06-04T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/automatically-populate-extended-amount-field-when-using-custom-pricing-dynamics-crm365-for-enterprise/","title":"Automatically Populate Extended Amount Field When Using Custom Pricing (Dynamics CRM/365 for Enterprise)"},{"content":"The biggest headache when managing any database system is enforcing data quality and consistency across the entire dataset. This can range from ensuring that field values are entered correctly through to enforcing rules to prevent duplicate records from even touching the periphery of your database. If you are using an application like CRM Online/Dynamics 365 for Enterprise (D365E), then the built-in functionality within these systems can assist with this, via the use of Option Sets for field values and handy features such as Duplicate Detection Rules. If you have developed your own system, then all due thought should be directed towards ensuring that your system can adequately enforce data quality at the client side, wherever possible.
Doing this early on can save you a lot of work down the line.\nIf the backend database system you are using is SQL Server, then, fortunately, the SQL standard (and, specifically, Transact-SQL or T-SQL, Microsoft\u0026rsquo;s implementation of the standard) has some tricks up its sleeve that are worth noting. CONSTRAINT\u0026rsquo;s are database objects that can be set up to enforce\u0026hellip;well, constraints on the values that are allowed within a particular field. There are a number of different CONSTRAINT\u0026rsquo;s that are at our disposal, but the most commonly used ones are:\nPRIMARY KEY CONSTRAINT: Typically used to indicate a column that will always contain a unique value in the database, to facilitate working with a specific row of data in a table. Any field type can be set up as a PRIMARY KEY, but it is generally recommended to use an Integer (INT) or Globally Unique Identifier field (UNIQUEIDENTIFIER), more generally referred to as a GUID. FOREIGN KEY CONSTRAINT: FOREIGN KEY\u0026rsquo;s are used to indicate a field within a database that is a reference to another table within the database. Values entered into this field must match against a record within the related table and you can configure a wide range of options for how the record containing the FOREIGN KEY behaves when the related record it references is modified or removed in the database. Those coming from a CRM/D365E background can grasp this better by realising that these are essentially lookup fields as part of a one-to-many (1:N) relationship. DEFAULT CONSTRAINT: On some occasions, when a value is not entered into a column, you may need to ensure that something is put into the field. This is particularly the case when you are working with NOT NULL fields, which always require a value. A DEFAULT CONSTRAINT gets around this issue by allowing you to specify an initial value for the column, should the database operation against the column result in a value not being specified as part of an INSERT statement. W3 Schools have a handy list that covers all possible CONSTRAINT\u0026rsquo;s within SQL, but be sure to cross reference this with the relevant Microsoft documentation! As an implementation of the standard, T-SQL can have a few nice - but sometimes surprising - differences that you should be aware of. 🙂 The upside of all of this is that, if you have the need to ensure that your database column values are protected against erroneous values, then a CHECK CONSTRAINT is your first port of call. What\u0026rsquo;s even better is that these are something that can be set up rather straightforwardly to ensure, for example, a field is only allowed to conform to a specific set of values.\nA practical illustration is the best way to demonstrate the ease - and potential pitfall - you may hit when working with CHECK CONSTRAINT\u0026rsquo;s containing a large list of potential values. Let\u0026rsquo;s say you want to create a table with a field - TestField - that should only ever accept the values A, B or C. Your CREATE TABLE script would look something like this:\nCREATE TABLE [dbo].[Test] ( [TestField] CHAR(1) NULL, CONSTRAINT CHK_TestField CHECK ([TestField] IN (\u0026#39;A\u0026#39;, \u0026#39;B\u0026#39;, \u0026#39;C\u0026#39;)) ) This works well within the following situations:\nYou are working with a handful of values that need to be checked - ideally no more than a dozen. You can guarantee that the list of values will not be subject to frequent changes.
If your situation fits within the opposite end of the parameters specified above, you may make the assumption that the best way to build a sustainable solution is via a dedicated lookup table within your database. The idea being with this is the list of values required for the CONSTRAINT can be managed in bulk, updated/removed via common T-SQL statements and also prevents you from managing particularly long-winded table scripts within your database. The following script will create a lookup table that records the fields you require CONSTRAINT\u0026rsquo;s for (assuming this is the case; this can be removed at your discretion) and the values that need checking:\nCREATE TABLE [dbo].[lkp_Test] ( [lkpTestID] INT IDENTITY(1,1) NOT NULL, CONSTRAINT PK_lkp_Test_lkpTestID PRIMARY KEY CLUSTERED ([lkpTestID]), [FieldName] VARCHAR(100) NOT NULL, [Value] VARCHAR(100) NOT NULL ) Those who have good, but not extensive, experience with T-SQL may make the next step assumption that we can then modify our CHECK constraint to directly query the table, similar to the below:\n--To recreate an existing Constraint, it has to be dropped and recreated ALTER TABLE [dbo].[Test] DROP CONSTRAINT CHK_TestField; GO ALTER TABLE [dbo].[Test] ADD CONSTRAINT CHK_TestField CHECK ([TestField] IN (SELECT [Value] FROM [dbo].[lkp_Test] WHERE [FieldName] = \u0026#39;TestField\u0026#39; AND [TestField] = [Value])); GO After executing the command, your next reaction may be confusion as an error is thrown back to you:\nWhat this means is that there is no way that we can define a query within our CONSTRAINT to essentially \u0026ldquo;lookup\u0026rdquo; the values that are allowed from our table and enforce only the list of approved values. The workaround for this is that we can look at utilising a user-defined function, or UDF. The function will perform the necessary query against our lookup table but achieves the requirements of returning a single (i.e. scalar) value, which we can then filter against in our CONSTRAINT. Functions are a topic that has been covered previously on the blog, and, when used effectively, can help to encapsulate complicated functionality within simple objects that can be referenced via your T-SQL queries. Below is a function that can be setup to achieve the requisite conditions for our scenario:\nCREATE FUNCTION dbo.fnGetTestConstraintValues(@FieldName VARCHAR(100), @Value VARCHAR(100)) RETURNS VARCHAR(5) AS BEGIN IF EXISTS (SELECT [Value] FROM [dbo].[lkp_Test] WHERE [FieldName] = @FieldName AND [Value] = @Value) RETURN \u0026#39;TRUE\u0026#39; IF @Value IS NULL RETURN \u0026#39;TRUE\u0026#39; RETURN \u0026#39;FALSE\u0026#39; END GO Let\u0026rsquo;s break down what this is doing in more detail:\nWhen called, you must supply two parameters with values - the name of the field that needs to be checked against (@FieldName) and the value to check (@Value). We\u0026rsquo;ll see how this works in practice in a few moments. The function will always return a single value - either TRUE or FALSE - thereby ensuring we have a scalar value to interact with. Because our TestField has been defined as NULL, we have to add some additional logic to handle these occurrences. SQL Server will not enforce a CHECK constraint for NULL values when entered/updated into a database, but our function does not extend this far. Therefore, unless a record is inserted into our lkp_Test table for our field with a NULL value, these value types will force the CONSTRAINT to be enforced. 
By adding in an IF condition to check for NULL\u0026rsquo;s and return a value of \u0026lsquo;TRUE\u0026rsquo; in these instances, NULL values can be entered into the database successfully. With this function now in place, our CONSTRAINT just needs to be modified to call the function, verifying that TRUE is returned; if not, then the field value will not be allowed:\nALTER TABLE [dbo].[Test] ADD CONSTRAINT CHK_TestField CHECK (dbo.fnGetTestConstraintValues(\u0026#39;TestField\u0026#39;, [TestField]) = \u0026#39;TRUE\u0026#39;); Now we can ensure that we are able to specify a subquery within our CONSTRAINT in a supported manner - excellent! 🙂 CONSTRAINT\u0026rsquo;s are a really useful tool at the disposal of any database administrator/developer, and something that you should always have at the front of your mind as part of any design. Doing so will ensure that you are building a solution which, as much as realistically conceivable, tries to keep the data within your database as squeaky clean as possible.\n","date":"2017-05-28T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/combining-t-sql-sub-queries-with-check-constraints/","title":"Combining T-SQL Sub Queries with CHECK Constraints"},{"content":"The sheer breadth of ways that you can utilise Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) can sometimes boggle the mind. Whether it\u0026rsquo;s through a traditional web browser, mobile app, the new interactive service hub or even through your own website created via the SDK, organisations have an ever-increasing array of routes they can go down when deploying the application into their environment. Despite this, more often than not, you would expect a \u0026ldquo;standard\u0026rdquo; deployment to involve using the application via a web browser, either on a local machine or potentially via a Remote Desktop Session (RDS) instance. Whilst Microsoft\u0026rsquo;s support articles provide fairly definitive software requirements when working on a Windows desktop machine, it is difficult to determine if, for example, Google Chrome on a Windows Server 2012 RDS session is supported. This is an important omission that requires clarification and is worth discussing further to determine if a definitive conclusion can be reached, based on the available evidence.\nIn this week\u0026rsquo;s post, I will attempt to sleuth through the various pieces of evidence I can find on this subject, sprinkling this with some experience that I have had with CRM/D365E and RDS, to see if any definitive conclusion can be established.\nBefore we get into the heart of the matter\u0026hellip; \u0026hellip;it may be useful to provide a brief overview of what RDS is. RDS is a fancy way of describing connecting to a remote computer via the Remote Desktop Connection client on your Windows or OS of choice. Often referred to as Terminal Services, it is a de facto requirement when accessing remote servers for a variety of reasons. Most commonly, you will witness it deployed as part of an internal corporate network, as a mechanism for users to \u0026ldquo;remote on\u0026rdquo; when working outside the office.
Due to the familiarity of Windows Server compared with each versions corresponding Desktop OS, the look and feel of working on a normal computer can be achieved with minimal effort, and you can often guarantee that the same types of programmes will also work without issue.\nWhilst RDS is still frequently used, it could be argued to have taken a back seat in recent years with the rise in virtualisation technologies, from the likes of Citrix and VMware. These solutions tend to offer the same benefits an RDS server can, but places more emphasis on utilising a local desktop environment to essentially stream desktops/applications to end users. As a result of the rise of these technologies, RDS is perhaps entering a period of uncertainty; whilst it will continue to be essential for remote server management, there are arguably much better technologies available that provide an enriched end-user experience, but offer the same benefits of having a centralised server within a backed up/cloud environment.\nNow that you (hopefully!) have a good overview of what RDS is, let\u0026rsquo;s take a look at the evidence available in relation to CRM/D365E and RDS\nEvidence #1: Technet Articles The following TechNet articles provide, when collated together, a consolidated view of supported internet browsers and operating systems for CRM/D365:\nSupported web browsers and mobile devices: https://technet.microsoft.com/en-us/library/dn531055.aspx Web application requirements for Microsoft Dynamics 365: https://technet.microsoft.com/en-us/library/hh699710.aspx From this, we can distill the following:\nWindows 10, 8.1, 8 and 7 are supported, so long as they are using a \u0026ldquo;supported\u0026rdquo; browser: Internet Explorer 10 is supported for Windows 7 and 8 only. Internet Explorer 11 is supported for all Windows OS\u0026rsquo;s, with the exception of 8. Edge is supported for Windows 10 only. Firefox and Chrome are supported on all OS\u0026rsquo;s, so long as they are running the latest version. OS X 10.8 (Mountain Lion), 10.9 (Mavericks) and 10.10 Yosemite are supported for Safari only, running the latest version Android 10 is supported for the latest version of Chrome only iPad is supported for the latest version of Safari only (i.e. the latest version of iOS) The implication from this should be clear - although the following Windows Server devices (that are currently in mainstream support) can be running a supported web browser, they are not covered as part of the above operating server list:\nWindows Server 2016 Windows Server 2012 R2 Windows Server 2012 Evidence #2: Notes from the Field I have had extensive experience both deploying into and supporting CRM/D365E environments running RDS. These would typically involve servers with significant user load (20-30 per RDS server) and, the general experience and feedback from end users has always been\u0026hellip;underwhelming. All issues generally came down to the speed of the application which, when compared to running on a standard, local machine, was at a snail\u0026rsquo;s pace by comparison. Things like loading a form, an entity view or Dialog became tortuous affairs and led to serious issues with user adoption across the deployments. 
I can only assume that the amount of local CPU/Memory required for CRM/D365E when running inside a web application was too much for the RDS server to handle; this was confirmed by frequent CPU spikes and high memory utilisation on the server.\nI can also attest to working with Microsoft partners who have explicitly avoided having issues concerning RDS and CRM/D365E in-scope as part of any support agreement. When this was queried, the reasoning boiled down to the perceived hassle and complexity involved in managing these types of deployment.\nTo summarise, I would argue that this factors in additional ammunition for Evidence piece #1, insomuch as that RDS compatible servers are not covered on the supported operating system lists because these issues are known about generally.\nEvidence #3: What Microsoft Actually Say I was recently involved as part of a support case with Microsoft, where we were attempting to diagnose some of the performance issues discussed above within an RDS environment. The support professional assigned to the case came back and stated the following in regards to RDS and CRM/D365E:\n\u0026hellip;using Windows remote desktop service is supported but unfortunately using Windows server 2012 R2 is not supported. You have to use Windows server 2012. Also windows server 2016 is out of our support boundaries.\nWhilst this statement is not backed up by an explicit online source (and I worry whether some confusion has been derived from the Dynamics 365 for Outlook application - see below for more info on this), it can be taken as saying that Windows Server 2012 is the only supported operating system that can be used to access CRM/D365E, with one of the supported web browsers mentioned above.\nThe Anomalous Piece of Evidence: Dynamics 365 for Outlook Application Whilst it may not be 100% clear cut in regards to supported server operating systems, we can point to a very definitive statement in respect to the Dynamics 365 for Outlook application when used in conjunction with RDS:\nDynamics 365 for Outlook is supported for running on Windows Server 2012 Remote Desktop Services\nSource: https://technet.microsoft.com/en-us/library/hh699743.aspx\nMaking assumptions here again, but can we take this to mean that the web application is supported within Windows Server 2012 RDS environments, as suggested by the Microsoft engineer above? If not, then you may start thinking to yourself \u0026ldquo;Well, why not just use this instead of a web browser on RDS to access CRM/D365E?\u0026rdquo;. Here are a few reasons why you wouldn\u0026rsquo;t really want to look at rolling out the Dynamics 365 for Outlook application any time soon within RDS:\nIf deploying the application into offline mode, then you will be required to install a SQL Express instance onto the machine in question. This is because the application needs to store copies of your synchronised entity data for whenever you go offline. The impact of this on a standard user machine will be minimal at best, but on a shared desktop environment, could lead to eventual performance issues on the RDS server in question\nWith the introduction of new ways to work within CRM/D365 data in an efficient way, such as with the Dynamics 365 App for Outlook, the traditional Outlook client is something that is becoming less of a requirement these days. 
There are plenty of rumours/commentary on the grapevine that the application may be due for deprecation in the near future, and even Microsoft have the following to say on the subject:\nDynamics 365 App for Outlook isn\u0026rsquo;t the same thing as Dynamics 365 for Outlook. As of the December 2016 update for Dynamics 365 (online and on-premises), Microsoft Dynamics 365 App for Outlook paired with server-side synchronization is the preferred way to integrate Microsoft Dynamics 365 with Outlook.\nI have observed performance issues with the add-in myself in the past - Outlook freezing, the occasional crash and also issues with the Outlook ribbon displaying incorrectly.\nAs you can probably tell, I am not a big fan of the add-in, but the writing on the wall is fairly clear - Microsoft fully supports you accessing CRM/D365E from the Outlook client on Windows Server 2012 RDS.\nAfter reviewing all the evidence, do we have enough to solve this case? Whilst there is a lot of evidence to consider, the main thing I would highlight is the lack of a \u0026ldquo;smoking gun\u0026rdquo; in what has been reviewed. What I mean by this is the lack of a clear support article that states either \u0026ldquo;X Browser is supported on Windows Server X\u0026rdquo; or \u0026ldquo;X Browser is NOT supported on Windows Server X\u0026rdquo;. Without any of these specific statements, we are left in a situation where we have to infer that RDS is not a supported option for using the CRM/D365E web application. Certainly, the experience I have had with the web client in these environment types would seem to back this up, which may go some way towards explaining why this is not explicitly supported.\nSo where does this leave you if you are planning to deploy CRM/D365E within an RDS environment? Your only option is to ensure that your RDS environment is running Windows Server 2012 and that your users are utilising the Outlook client, given that there is a very clear statement regarding its supportability. If you are hell-bent on ensuring that your end users have the very best experience with CRM/D365E, then I would urge you to reconsider how your environment is configured and, if possible, move to a supported configuration - whether that\u0026rsquo;s a local desktop or a VDI, running your browser of choice. Hopefully, the benefits of utilising the application will far outweigh any overriding concerns and business reasons for using RDS in the first place.\n","date":"2017-05-21T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/is-the-dynamics-crmdynamics-365-for-enterprise-web-application-supported-with-rds/","title":"Is the Dynamics CRM/Dynamics 365 for Enterprise Web Application Supported with RDS?"},{"content":"Exams are always something that you end up worrying about to an obsessive degree. The thought of being placed on the spot and expected to demonstrate your knowledge in a particular subject can be daunting to even the most knowledgeable individuals. Technology exams, such as Microsoft certification, can arguably be the worst of all; the level of detailed technical knowledge that you are expected to know off the top of your head can seem almost impossible, particularly for those who are heavily reliant on our friend of old, like me! The pace of technological advancement only complicates this further and, when you are working with solutions as fast-paced as Dynamics 365 for Enterprise (D365E), keeping up is almost a marathon in itself.
New features are added regularly to the application and this invariably leads to new exam content and accreditations to match. The introduction of an MCSA and MCSE for D365E is, arguably, one of the more welcome of recent changes made, and gives those looking to showcase their knowledge a more enhanced way of doing so.\nI have previously reviewed the new exams in more detail on the blog and, after having been through the process and successfully obtained my MCSA and MCSE, I can speak generally about the experience and hopefully guide those who are looking at sitting the exams in the near future. This week\u0026rsquo;s post will provide some general guidance on how you can best prepare for the exams, an overview of the new badge sharing platform, Acclaim, and my general thoughts on the whole experience.\nDisclaimer Per the terms of the Exam NDA, this post will not make reference to any specific exam content; rather, it will discuss broadly what areas you should focus on to get that passing grade. Any discussion or question relating to exam content will be deleted from the comments section.\nThe journey to Accreditation may seem somewhat upside down To achieve your MCSA, and eventually MCSE, Microsoft recommends that you follow a suggested route to attain your certification. Although you are free to pass the exams in any order you wish, it is perhaps strange that, if you follow the prescribed route, you will learn/be tested on how to customise, configure and manage Dynamics 365 before discovering what the application can offer natively. Many of the features available in D365E may very well speed along a deployment, and it is important to always remember the plethora of existing functionality available within the application and not accidentally over-customise when there is no need to.\nDon\u0026rsquo;t underestimate the need to revise\u0026hellip; As with any new exam, the Skills Measured list is updated to reflect features freshly introduced as part of the exams targeted release. If you have not yet had experience with how these work, then I would highly recommend working through the e-learning courses available on the Dynamics Learning Portal in the first instance (some of which, incidentally, are also available on Imagine Academy), targeting yourself towards a) new features and b) functionality that you have the least experience in. With regards to \u0026ldquo;What\u0026rsquo;s New\u0026rdquo; with D365E, I would recommend brushing up on the following subjects as part of your revision:\nChanges to Business Process Flows and processes generally The Dynamics 365 for Outlook App (NOT to be confused with the Dynamics 365 for Outlook application - remember, it\u0026rsquo;s still very much supported 🙂 ) Relationship Insights, and what each constituent component is and their differences: the Relationship Assistant, Email Engagement and Auto Capture The new Dynamics 365 licensing structure. Data Loader Service App Designer There\u0026rsquo;s a good chance, based on each exams specification, that questions on all the above topics could appear on multiple exams, so be sure to prepare yourself for this.\n\u0026hellip;but realise that hands-on experience is the best route towards passing an exam. Simply watching the e-learning courses or reading about new functionality is never sufficient when revising. You should ideally have a D365E tenant available to you as part of your revision (trials can be set up in a pinch) and be working through features of the application as you are learning. 
The above e-learning courses include labs, which is an excellent starting point; from there, you should very much work through setting up features from scratch, navigating around the interface and understanding what various buttons/actions do. You may surprise yourself and discover something about the application that you previously overlooked; something which happens to me all the time!\nBe sure to setup your Acclaim account after passing One of the nifty new perks of passing an exam this year is the introduction of Acclaim, which aims to provide a simplified mechanism of collating together your various accreditations across multiple vendors in a fun way. Upon passing your first exam, you will be sent an email within 24 hours of passing to let you know your badge is waiting to be claimed:\nTo accept the badge, you will need to setup an account with Acclaim. After this, all subsequent achievements will be saved to the same account, enabling you to build up your \u0026ldquo;Acclaim transcript\u0026rdquo; as you pass more exams. The social features of the application are varied and quite nice, meaning that you can quickly share the news of your exam pass with friends, colleagues and family members at the click of a button. Currently, LinkedIn, Twitter and Facebook are supported at the time of writing. Finally, you can download images of your badges and include them as part of job applications/C.V\u0026rsquo;s, helping them stand out more visually.\nIf you are interested in finding out more about Acclaim, then you can check out my profile to get a feel for what it can offer you. Suffice it to say, having a straightforward means of sending potential customers/employers a website link to my Acclaim profile as opposed to an entire Microsoft transcript would undoubtedly simplify things. Now, if only you could get physical badges that you could stick on your bag\u0026hellip; 🙂\nConclusions or Wot I Think My own personal journey towards obtaining my first MCSA and MCSE has been challenging and rewarding in equal measure. It feels really good to know that D365E has a proper set of accreditations that individuals can aspire towards obtaining and which exemplify the position of the application alongside other, well-known Microsoft solutions. That is not to say that exams are a definitive means of judging your expertise with a particular product, and an exam fail may indicate a spur of the moment, misjudged answer or a lack of revision for a particular new feature. This post may put people off from trying for an exam, due to the effort involved, but it\u0026rsquo;s important that you are not daunted in any way. I would readily encourage people who have a passion for D365E to put aside any concerns and not delay in working towards passing each of the exams. By doing so, you can proactively demonstrate your commitment towards D365E and the zeal that you have for it, giving those around you the confidence that you not just talk the talk, but can walk the walk as well.\n","date":"2017-05-14T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dynamics-365-mcsamcse-certification-preparation-tips-and-general-thoughts/","title":"Dynamics 365 MCSA/MCSE Certification: Preparation Tips and General Thoughts"},{"content":"The very recent Microsoft Data Amp event provided an excellent forum for the SQL Server 2017 announcement, which is due to be released at some point this year. 
Perhaps the most touted feature of the new version is that it will be available to be installed on Linux; an entirely inconceivable premise 10 years ago, which just goes to show how far Microsoft have changed in their approach to supporting non-Windows platforms as standard. Past the big headline announcements, there is a lot to look forward to underneath the hood with SQL Server 2017 that may act as encouragement for organisations looking to upgrade in the near future.\nIn this week\u0026rsquo;s post, I\u0026rsquo;ll be taking a closer look at 3 new features I am most looking forward to, that are present within the SQL Server Community Technical Preview (CTP) 2.0 version and which will form part of the SQL Server 2017 release later on this year.\nPower BI in SSRS: A Match Made in Heaven This is by far the feature I am most looking forward to seeing in action. I have been working more and more with Power BI this year, often diving into the deep-end in respect to what can be achieved with the product, and I have been impressed with how it can be used for addressing reporting scenarios that SSRS may struggle with natively. The announcement earlier this year that Power BI would be included as part of SSRS in the next major release of the product was, therefore, incredibly welcome and its inclusion as part of SQL Server 2017 is confirmed by the inclusion of Power BI reports in the CTP 2.0 release.\nFor those who are already familiar with Power BI, there is thankfully not much that you need to learn to get up and running with Power BI in SSRS. One thing to point out is that you will need to download a completely separate version of the Power BI Desktop App to allow you to deploy your Power BI reports to SSRS. I would hope that this is mitigated once SQL Server 2017 is released so that we are can deploy from just a single application for either Online or SSRS 2017. Users who are experienced with the existing Power BI Desktop application should have no trouble using the equivalent product for SSRS, as they are virtually identical.\nThe actual process of deploying a Power BI report is relatively straightforward. After making sure that you have installed the SSRS Power BI Desktop Application, you can then navigate to your SSRS homepage and select + New -\u0026gt; Power BI Report:\nYou will be greeted with a prompt similar to the below and the Power BI Desktop application will open automatically:\nNow it\u0026rsquo;s time to build your report. 🙂 As an example, I have used the WideWorldImporters Sample Database to build a simplistic Power BI report:\nIf you were working with Power BI online, then this would be the stage where you would click the Publish button to get it onto your online Power BI tenant. The option to deploy to your SSRS instance is currently missing from Power BI in SSRS application; instead, you will need to manually upload your .pbix file into Reporting Services via the Upload button. Once uploaded, your report will be visible on the home page and can be navigated to in the usual manner:\nSimplified CSV Importing Anyone who has at least some experience working with databases and application systems should have a good overview of the nuances of delimited flat file types - in particular, Comma Separated Value (.csv) files. This file type is generally the de-facto format when working with data exported from systems and, more often than not, will be the most common file type that you will regularly need to import into a SQL Server database. 
Previously, if you didn\u0026rsquo;t opt to use the Import Wizard/.dtsx package to straightforwardly get your .csv file imported, you would have to rely on the following example script:\nBULK INSERT dbo.TestTable FROM \u0026#39;C:\\Test.csv\u0026#39; WITH ( FIELDTERMINATOR = \u0026#39;,\u0026#39;, ROWTERMINATOR = \u0026#39;\\n\u0026#39; ) Now, with SQL Server 2017, you can simplify your query by replacing FIELDTERMINATOR and ROWTERMINATOR with a new FORMAT parameter, that specifies the file format we are concerned with:\nBULK INSERT dbo.TestTable FROM \u0026#39;C:\\Test.csv\u0026#39; WITH (FORMAT = \u0026#39;CSV\u0026#39;); Whilst the overall impact on your query length is somewhat negligible, it is nice that a much more simplified means of accomplishing a common database task has been introduced and that we now also have the option of accessing Azure Blob Storage locations for import files.\nUpdated Icons for SSMS Typically, as part of any major update to the application, the \u0026ldquo;under the hood\u0026rdquo; visual side of things are generally not changed much. A good example of this can be found within CRM Online/Dynamics 365 for Enterprise within the Customizations area of the application, which has not seen much of a facelift since CRM 2011. As a result, a lot of the icons can look inconsistent with the application as a whole. As these are generally the areas of the application that we use the most day in, day out, it can be a little discouraging not to see these areas get any love or attention as part of a major update\u0026hellip; 🙁\nWith this in mind, it is pleasing to see that the updated SSMS client for SQL Server 2017 has been given refreshed icons that bring the application more in line with how Visual Studio and other Microsoft products are looking these days. Below is a comparison screenshot, comparing SSMS 2014 with SSMS 2017:\nConclusions or Wot I Think Whilst there is a lot more to look forward to with the new release that is not covered in this post (for example, the enhancements to R server and deeper integration with AI tools), I believe that the most exciting and important announcement for those with their Business Intelligence/Reporting hats on is the introduction of Power BI into SSRS. Previously, each tool was well suited for a specific reporting purpose - SSRS was great for designing reports that require a lot of visual tailoring and widely common formats for exporting, whereas Power BI is more geared towards real-time, dashboard views that marry together disparate data sources in a straightforward way. By being able to leverage SSRS to fully utilise Power BI reports, the application suddenly becomes a lot more versatile and the potential for combining together functionality becomes a lot more recognisable. So, for example, having the ability to drill down to an SSRS report from a Power BI report would be an excellent way of providing reporting capabilities that satisfy end-user consumption in 2 different, but wildly applicable, scenarios.\nIn summary, the SQL Server 2017 release looks to be very much focused on bringing the product up to date with the new state of play at Microsoft, successfully managing to achieve cross-platform requirements alongside bringing exciting functionality (that was previously cloud-only) into the hands of organisations who still have a requirement to run their database systems on their on-premise infrastructure. I\u0026rsquo;m eagerly looking forward to the release later on this year and in seeing the product perform in action. 
🙂\n","date":"2017-05-07T00:00:00Z","image":"/images/AzureSQL-FI.png","permalink":"/whats-new-in-sql-server-2017/","title":"What's New in SQL Server 2017"},{"content":"Back in the days of Dynamics CRM 2016 and earlier, one of the major benefits of opting towards an Online subscription versus an On-Premise license was the Dual-Usage rights granted to your organisation. This meant that, so long as your On-Premise Server installation was licensed, your individual Online User CAL\u0026rsquo;s would also be licensed like for like for CRM On-Premise. So, if you had 5 CRM Online Professional licenses on Office 365, you were also covered for the equivalent 5 On-Premise Professional user license CAL\u0026rsquo;s.\nThe Dynamics 365 for Enterprise (D365E) release took this offer a step further by also including the coveted Server user license as part of this offer. The official licensing guide for D365E elaborates on this further:\nOne of the advantages of Microsoft Dynamics 365 is the option to deploy either in Microsoft\u0026rsquo;s cloud or in a private on-premises or partner-hosted cloud. In some cases, customers may want to deploy both modes simultaneously, for migrating a Microsoft Dynamics on-premises deployment to Microsoft Dynamics 365, running private Dev/Test deployments in Microsoft Azure. With Dual Use Rights, Microsoft Dynamics users licensed with the required User SL do not need to acquire CALs to access Server instances. Users or devices licensed with Dynamics 365 SLs have use rights equivalent to a CAL for the purpose of accessing equivalent on-premise workloads. With Microsoft Dynamics 365 the server license is included with the SLs.\nThanks to this change, organisations can look to fully realise their dual-usage ambitions with D365E, without having to pay a single additional penny - nice! 🙂\nWhen I first learned about the above, my first question was \u0026ldquo;Great! How do I get my license key for D365E on-premise?\u0026rdquo;. Unfortunately, the documentation available online is not 100% clear on which route you need to take to get your license key. Attempting to try and get an answer directly from Microsoft Online support can sometimes lead to you being redirected to the above licensing guide, which does not set out clearly how the offer works and a step-by-step account of how to claim your license key. I was recently involved in attempting to obtain a dual-usage license key for an organisation, so I thought I would share my experiences as part of this weeks blog post and provide a straightforward guide for others who may find themselves in the same boat.\nBefore you Begin\u0026hellip; The route you have to traverse will be dictated significantly by the method in which your organisation has obtained your Online D365E licenses. It could be that your business has:\nOrdered your licenses directly on the Office 365 portal. Purchased your online subscription through a Microsoft Partner. Obtained a redemption key for your subscription via a Volume License agreement or similar with Microsoft. So before you set out on your dual-usage license key quest, verify how your organisation originally obtained your D365E licenses. Assuming you have this information to hand, please see below for a summary that covers each license purchase route:\nIf you purchased your licenses via a Cloud Solutions Provider agreement (i.e. 
directly with a Microsoft partner)\u0026hellip; Then your license key should be viewable within your CustomerSource profile page for your organisation, underneath the Product And Service Summary Section.\nIf you purchased your licenses via a Microsoft Products and Services Agreement\u0026hellip; Your license key should be viewable within your Microsoft Business Centre page.\nIf you purchased your licenses via an Enterprise/Volume License Agreement\u0026hellip; Log into the Volume Licensing Service Centre and, underneath your list of Products, you should see your product and corresponding license key.\nIf you purchased your licenses directly via Office 365 and have a partner of record for your subscription\u0026hellip; You should reach out to them directly and they can then log a support request with Microsoft\u0026rsquo;s Pre-Sales team. The turn-around for this will generally be a couple of days, and at the end of it, you should be emailed your license key.\nIf you purchased your licenses directly via Office 365 and DO NOT have a partner of record for your subscription\u0026hellip; Then I believe you will need to log a support request directly with Microsoft Online support to obtain the license key information. I am unable to confirm whether this will work successfully or not, so I would be interested in hearing from anyone in the comments below if this works.\nGetting D365E On-Premise Installed This is essentially a two-step process. Given that the D365E release was not a major application release, there is no dedicated installer available for the product. Instead, you will need to install Dynamics CRM Server 2016 and then download and install the December 2016 update to get the application version up to 8.2. All of the usual pre-requisites for a CRM On-Premise install will apply - you will need a SQL Server instance deployed, an Active Directory to authenticate with and be running a compatible version of Windows Server. The full list of requirements can be viewed on TechNet.\nConclusions or Wot I Think The expansion of the Dual Usage offering as part of D365E is a welcome and highly beneficial development. Assuming your organisation already has existing infrastructure in place that supports an On-Premise deployment, you can begin to mitigate any extra costs required for additional sandbox instances on Office 365 by quickly setting up local D365E organisations on the On-Premise version of the application. I think there is definitely some work to be done around the whole process of obtaining the license key in the first instance - so, for example, perhaps a button/link in the Dynamics 365 Administration Centre that lets you view your On-Premise license key or fill out a request form - but the very fact that organisations are able to take advantage of this in the first place is one of the reasons why D365E is starting to move ahead of the competition within the ERP/CRM application space.\n","date":"2017-04-30T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/how-to-obtain-your-dual-usage-license-key-for-dynamics-365-for-enterprise/","title":"How to Obtain your Dual-Usage License Key for Dynamics 365 for Enterprise"},{"content":"When you have spent any length of time working with Dynamics CRM Online/Dynamics 365 for Enterprise (D365E) data programmatically, you become accustomed to how Option Set, State and Status Reason values are presented to you in code. 
To explain, the application does not store your Option Set value display names within the SQL Server Entity tables; rather, the Option Set Value that has been specified alongside your Label is what is stored as an integer value. That is why you are always mandatorily prompted to provide both values within the application:\nThe following benefits are realised as a result of how this is setup:\nOption Set Labels can be quickly updated for all records in the application if business requirements were to change. Duplicate labels are technically supported (although not recommended). As highlighted by marc_c, there is a performance benefit for the platform in using integers for this type of data. That being said, when working with these field types in code, you do always have to have the application window open or a list of all Labels/Values to hand so that you don\u0026rsquo;t get too confused\u0026hellip; 🙂\nI have previously extolled the virtues of the Data Export Service on the blog, and why you should consider it if you have basic integration requirements for your CRM/D365E deployment. One area in which it differs from other products on the market is how it handles the field types discussed above. For example, when exporting data to a SQL database via Scribe Online, new columns are created alongside that contain the \u0026ldquo;Display Name\u0026rdquo; (i.e. label value) that correspond to each Option, Status and Status Reason Label. So by running the following query against a Scribe export database:\nSELECT DISTINCT statecode, statecode_displayname FROM dbo.account We get the best of both worlds - our underlying statecode value and their display names - all in 2 lines of code:\nThis is a big help, particularly when you are then using the data as part of a report, as no additional transformation steps are required and your underlying SQL query can be kept as compact as possible.\nThe Data Export Service differs from the above in an understandable way, as display name values for Status, Status Reason and Option Set column values are instead segregated out into their own separate table objects in your Azure SQL database:\nOptionSetMetadata\nGlobalOptionSetMetadata\nStateMetadata\nStatusMetadata\nWhy understandable? If you consider how the application can support multiple languages, then you realise that this can also apply to metadata objects across the application - such as field names, view names and - wouldn\u0026rsquo;t you have guessed it - Labels too. So when we inspect the OptionSetMetadata table, we can see that the table structure accommodates the storing of labels in multiple languages via the LocalizedLabelLanguageCode field:\nUnlike the Scribe Online route above (which I assume only retrieves the Labels that correspond to the user account that authenticates with CRM/D365E), the Data Export Service becomes instantly more desirable if you are required to build multi-language reports referencing CRM/D365E application data.\nThe issue that you have to reconcile yourself with is that your SQL queries, if being expressed as natively as possible, instantly become a whole lot more complex. 
For example, to achieve the same results as the query above, it would have to be adapted as follows for the Data Export Service:\nSELECT DISTINCT statecode, LocalizedLabel FROM dbo.account LEFT JOIN dbo.StateMetadata ON \u0026#39;account\u0026#39; = EntityName AND statecode = [State] AND \u0026#39;1033\u0026#39; = LocalizedLabelLanguageCode The above is a very basic example, but if your query is complex - and involves multiple Option Set Values - then you would have to resort to using Common Table Expressions (CTE\u0026rsquo;s) to accommodate each potential JOIN required to get the information you want.\nIn these moments, we can look at some of the wider functionality provided as part of SQL Server to develop a solution that will keep things as simple as possible and, in this particular instance, a user-defined function is an excellent candidate to consider. These enable you to perform complex operations against the database platform and encapsulate them within very simply expressed objects that can also accept parameters. The good thing about functions is that they can be used to return table objects and scalar (i.e. single) objects.\nUsing a scalar function, we can, therefore, remove some of the complexity behind returning Option Set, Status and Status Reason labels by creating a function that returns the correct label, based on input parameters received by the function. You could look at creating a \u0026ldquo;master\u0026rdquo; function that, based on the input parameters, queries the correct Metadata table for the information you need; but in this example, we are going to look at creating a function for each type of field - Status, Status Reason, Option Set and Global Option Set.\nTo do this, connect up to your Data Export Service database and open up a new query window, ensuring that the context is set to the correct database. Paste the following code in the window and then hit Execute:\nSET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO --Create Function to return Global Option Set Labels CREATE FUNCTION [dbo].[fnGetGlobalOptionSetLabel] ( @GlobalOptionSetName NVARCHAR(64), --The logical name of the Global Option Set @Option INT, --The option value to retrieve @LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of support languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html ) RETURNS NVARCHAR(256) AS BEGIN DECLARE @Label NVARCHAR(256); DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.GlobalOptionSetMetadata WHERE OptionSetName = @GlobalOptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode); IF @RecordCount = 1 SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.GlobalOptionSetMetadata WHERE OptionSetName = @GlobalOptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode); ELSE SET @Label = CAST(\u0026#39;An error has occurred. Could not obtain label for Global Option Set field \u0026#39; + @GlobalOptionSetName AS INT); RETURN @Label; END GO --Create Function to return Option Set Labels CREATE FUNCTION [dbo].[fnGetOptionSetLabel] ( @EntityName NVARCHAR(64), --The Entity logical name that contains the Option Set field @OptionSetName NVARCHAR(64), --The logical name of the Option Set field @Option INT, --The option value to retrieve @LanguageCode INT --The Language of the label to retrieve. English is 1033. 
Full list of support languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html ) RETURNS NVARCHAR(256) AS BEGIN DECLARE @Label NVARCHAR(256); DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.OptionSetMetadata WHERE EntityName = @EntityName AND OptionSetName = @OptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode); IF @RecordCount = 1 SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.OptionSetMetadata WHERE EntityName = @EntityName AND OptionSetName = @OptionSetName AND [Option] = @Option AND LocalizedLabelLanguageCode = @LanguageCode); ELSE SET @Label = CAST(\u0026#39;An error has occurred. Could not obtain label for Option Set field \u0026#39; + @OptionSetName AS INT); RETURN @Label; END GO --Create Function to return Status Labels CREATE FUNCTION [dbo].[fnGetStateLabel] ( @EntityName NVARCHAR(64), --The Entity logical name that contains the Status field @State INT, --The Status option value to retrieve @LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of support languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html ) RETURNS NVARCHAR(256) AS BEGIN DECLARE @Label NVARCHAR(256); DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.StateMetadata WHERE EntityName = @EntityName AND [State] = @State AND LocalizedLabelLanguageCode = @LanguageCode); IF @RecordCount = 1 SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.StateMetadata WHERE EntityName = @EntityName AND [State] = @State AND LocalizedLabelLanguageCode = @LanguageCode); ELSE SET @Label = CAST(\u0026#39;An error has occurred. Could not obtain State label for entity \u0026#39; + @EntityName AS INT); RETURN @Label; END GO --Create Function to return Status Reason Labels CREATE FUNCTION [dbo].[fnGetStatusLabel] ( @EntityName NVARCHAR(64), --The Entity logical name that contains the Status Reason field @Status INT, --The Status Reason option value to retrieve @LanguageCode INT --The Language of the label to retrieve. English is 1033. Full list of support languages (Correct as of June 2015) can be found here: https://abedhaniyah.blogspot.co.uk/2015/06/list-of-supported-languages-by.html ) RETURNS NVARCHAR(256) AS BEGIN DECLARE @Label NVARCHAR(256); DECLARE @RecordCount INT = (SELECT COUNT(*) FROM dbo.StatusMetadata WHERE EntityName = @EntityName AND [Status] = @Status AND LocalizedLabelLanguageCode = @LanguageCode); IF @RecordCount = 1 SET @Label = (SELECT TOP 1 LocalizedLabel FROM dbo.StatusMetadata WHERE EntityName = @EntityName AND [Status] = @Status AND LocalizedLabelLanguageCode = @LanguageCode); ELSE SET @Label = CAST(\u0026#39;An error has occurred. Could not obtain Status label for entity \u0026#39; + @EntityName AS INT); RETURN @Label; END GO This will then go off and create the functions listed in code, which should then show up under the Programmability folder on your SQL database:\nFor those who are unsure at what the SQL code is doing, it first attempts to determine if only 1 Label can be found for your appropriate field type, based on the parameters provided. If it is successful, then a value is returned; otherwise, the CAST function is designed to force an error to return back to the caller to indicate that none or more than 1 Option Set value was found. 
In most cases, this would indicate a typo in the parameters you have specified.\nAs with anything, the best way to see how something works is in the practice! So if we again look at our previous examples shown in this post, we would utilise the dbo.fnGetStateLabel function as follows to return the correct label in English:\nSELECT DISTINCT statecode, dbo.fnGetStateLabel(\u0026#39;account\u0026#39;, statecode, 1033) AS statecode_displayname FROM dbo.account With our results returning as follows:\nNow we can expose this through our reports and not worry about having to do any kind of transformation/lookup table to get around the issue. 😁\nAttempting to keep things as simple as possible by encapsulating complex functionality into simply and clearly expressed functions is an excellent way of ensuring that code can be kept as streamlined as possible, and also ensures that other colleagues can accomplish complex tasks, even if they do not have in-depth knowledge of Transact-SQL.\n","date":"2017-04-23T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/using-sql-server-functions-to-simplify-option-set-label-retrieval-dynamics-365-for-enterprise-data-export-service/","title":"Using SQL Server Functions to Simplify Option Set Label Retrieval (Dynamics 365 for Enterprise Data Export Service)"},{"content":"Although CRM Online/Dynamics 365 for Enterprise (D365E) does provide a plethora of different tools aimed at satisfying reporting requirements for users of the application, you are restricted in how data can be queried within the application. For example, you cannot just connect straight up to the applications SQL database and start writing stored procedures that perform complex data transformations or joins. Traditionally, to achieve this, you would need to look at one of the several tools in the marketplace that enable you to export your data out into a format that best pleases you; or even take the plunge and get a developer to write your own application that satisfies your integration requirements.\nWith the recent D365E release and in-line with Microsoft\u0026rsquo;s longstanding approach to how they approach customer data within their applications (i.e. \u0026ldquo;It\u0026rsquo;s yours! So just do what you want with it!), the parallel introduction of the Data Export Service last year further consolidates this approach and adds an arguably game-changing tool to the products arsenal. By using the service, relatively straightforward integration requirements can be satisfied in a pinch and a lot of the headache involved in setting up a backup of your organisation\u0026rsquo;s databases/LOB reporting application can be eliminated. Perhaps the most surprising and crucial aspect of all of this is that using this tool is not going to break the bank too much either.\nIn this week\u0026rsquo;s blog post, I\u0026rsquo;m going to take a closer look at just what the Data Export Service is, the setup involved and the overall experience of using the service from end-to-end.\nWhat is the Data Export Service? The Data Export Service is a new, free*, add-on for your CRM/D365E subscription, designed to accomplish basic integration requirements. Microsoft perhaps provides the best summary of what the tool is and what it can achieve via TechNet :\nThe Data Export Service intelligently synchronizes the entire Dynamics 365 data initially and thereafter synchronizes on a continuous basis as changes occur (delta changes) in the Microsoft Dynamics 365 (online) system. 
This helps enable several analytics and reporting scenarios on top of Dynamics 365 data with Azure data and analytics services and opens up new possibilities for customers and partners to build custom solutions.\nThe tool is compatible with versions 8.0, 8.1 and 8.2 of the application, which corresponds the following releases of CRM Online/D365E:\nDynamics CRM Online 2016 Dynamics CRM Online 2016 Update 1 Dynamics 365 December Update *You will still need to pay for all required services in Azure, but the add-on itself is free to download.\nThe Installation Process Getting everything configured for the Data Export Service can prove to be the most challenging - and potentially alienating - part of the entire process. For this, you will need the following at your disposal:\nAn active Azure Subscription. An Azure SQL Server configured with a single database or an Azure VM running SQL Server. Microsoft recommends a Premium P1 database or better if you are using an Azure SQL database, but I have been able to get the service working without any issue on S0 tier databases. This is an important point to make, given the cost difference per month can amount to hundreds of £\u0026rsquo;s. An Azure Key Vault. This is what will securely store the credentials for your DB. PowerShell and access to the Azure Resource Manager (AzureRM) Cmdlets. Powershell can be installed as an OS feature on Windows based platforms, and can now be downloaded onto OS X/Linux as well. PowerShell is required to create an Azure Key Vault, although you can also use it to create your Azure SQL Server instance/Windows VM with SQL Server. It is therefore recommended that you have at least some experience in how to use Azure - such as creating Resource Groups, deploying individual resources, how the interface works etc. - before you start setting up the Data Export Service. Failing this, you will have to kindly ask your nearest Azure whizz for assistance 🙂 Fortunately, if you know what you\u0026rsquo;re doing, you can get all of the above setup very quickly; in some cases, less than 10 minutes if you opt to script out the entire deployment via PowerShell.\nFor your setup with D365E, all is required is the installation of the approved solution via the Dynamics 365 Administration Centre. Highlight the instance that you wish to deploy to and click on the pen icon next to Solutions:\nThen click on the Solution with the name Data Export Service for Dynamics 365 and click the Install button. The installation process will take a couple of minutes, so keep refreshing the screen until the Status is updated to Installed. Then, within the Settings area of the application, you can access the service via the Data Export icon:\nBecause the Data Export Service is required to automatically sign into an external provider, you may also need to verify that your Web Browser pop-up settings/firewall is configured to allow the https://discovery.crmreplication.azure.net/ URL. Otherwise, you are likely to encounter a blank screen when attempting to access the Data Export Service for the first time. You will know everything is working correctly when you are greeted with a screen similar to the below:\nSetting up an Export Profile After accepting the disclaimer and clicking on the New icon, you will be greeted with a wizard-like form, enabling you to specify the following:\nMandatory settings required, such as the Export Profile Name and the URL to your Key Vault credentials. 
Optional settings, such as which database schema to use, any object prefix that you would like to use, retry settings and whether you want to log when records are deleted. The Entities you wish to use with the Export Service. Note that, although most system entities will be pre-enabled to use this service, you will likely need to go into Customizations and enable any additional entities you wish to utilise with the service via the Change Tracking option: Any Relationships that you want to include as part of the sync: To clarify, this is basically asking if you wish to include any default many-to-many (N:N) intersect tables as part of your export profile. The list of available options for this will depend on which entities you have chosen to sync. For example, if you select the Account, Lead and Product entities, then the following intersect tables will be available for synchronisation: Once you have configured your profile and saved it, the service will then attempt to start the import process.\nThe Syncing Experience A.K.A Why Delta Syncing is Awesome When the service first starts to sync, one thing to point out is that it may initially return a result of Partial Success and show that it has failed for multiple entities. In most cases, this will be due to the fact that certain entities dependent records have not been synced across (for example, any Opportunity record that references the Account name Test Company ABC Ltd. will not sync until this Account record has been exported successfully). So rather than attempting to interrogate the error logs straightaway, I would suggest holding off a while. As you may also expect, the first sync will take some time to complete, depending on the number of records involved. My experience, however, suggests it is somewhat quick - for example, just under 1 million records takes around 3 hours to sync. I anticipate that the fact that the service is essentially an Azure to Azure export no doubt helps in ensuring a smooth data transit.\nFollowing on from the above, syncs will then take place as and when entity data is modified within the application. The delay between this appears to be very small indeed - often tens of minutes, if not minutes itself. This, therefore, makes the Data Export Service an excellent candidate for a backup/primary reporting database to satisfy any requirements that cannot be achieved via FetchXML alone.\nOne small bug I have observed is with how the application deals with the listmember intersect entity. You may get an errors thrown back that indicate records failed to sync across successfully, which is not the case upon closer inspection. Hopefully, this is something that may get ironed out and is due to the rather strange way that the listmember entity appears to behave when interacting with it via the SDK.\nConclusions or Wot I Think For a free add-on service, I have been incredibly impressed by the Data Export Service and what it can do. For those who have previously had to fork out big bucks for services such as Scribe Online or KingswaySoft in the past to achieve very basic replication/reporting requirements within CRM/D365E, the Data Export Service offers an inexpensive way of replacing these services. That\u0026rsquo;s not to say that the service should be your first destination if your integration requirements are complex - for example, integrating Dynamics 365 with SAP/Oracle ERP systems. 
In these cases, the names mentioned above will no doubt be the best services to look at to achieve your requirements in a simplistic way. I also have a few concerns that the setup involved as part of the Data Export Service could be a barrier towards its adoption. As mentioned above, experience with Azure is a mandatory requirement to even begin contemplating getting setup with the tool. And your organisation may also need to reconcile itself with utilising Azure SQL databases or SQL Server instances on Azure VM\u0026rsquo;s. Hopefully, as time goes on, we may start to see the setup process simplified - so, for example, seeing the Export Profile Wizard actually go off and create all the required resources in Azure by simply entering your Azure login credentials.\nThe D365E release has brought a lot of great new functionality and features to the table, that has been oft requested and adds real benefit to organisations who already or plan to use the application in the future. The Data Export Service is perhaps one of the great underdog features that D365E brings to the table, and is one that you should definitely consider using if you want a relatively smooth sailing data export experience.\n","date":"2017-04-16T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/microsoft-dynamics-365-data-export-service-review/","title":"Microsoft Dynamics 365 Data Export Service: Review"},{"content":"As I tweeted a couple of days ago, my head has been spinning with Dynamics 365 for Enterprise (D365E) recently 🙂 :\nDefinitely a very #MSDYN365 packed day - passed MB2-716 and successfully upgraded our organisations 2016 instance to 8.2! Phew! 😅\n\u0026mdash; Joe Griffin | #ProCodeNoCodeUnite (@joejgriffin) April 7, 2017 I took a detailed look at the upgrade process involved as part CRM Online organisations last year, and thankfully the process has not changed much. Indeed, the whole upgrade seemed to complete a lot quicker - 40-50 minutes as opposed to over an hour, which was nice to see.\nAs part of any planned upgrade, you should always endeavour to perform a thorough test of your existing application customisations with an environment running the latest version - either with a spare sandbox instance on your subscription or by spinning up a 30 day trial of D365E. We were quite thorough as part of our upgrade process with respect to testing, and fortunately, the upgrade completed with only some minor issues left to deal with. For those who are contemplating or have their upgrade scheduled in over the next few weeks/months, then there will be a few things that you may need to be aware of ahead of time to avoid you having to deal with any potential problems with your new D365E deployment. With this in mind, here are 3 things from our upgrade process that bear in mind before you make the jump to 8.2:\nSwitch Off Learning Path in System Settings To Prevent Annoying Popups\nAttempting to keep up with the number of new features that Dynamics 365 brings to the table is a colossal task. It was for this reason that I only became aware of the new Learning Path feature. Boasting functionality that is not too dissimilar to products such as WalkMe and available across the whole spectrum of the D365E experience (Web Client, ISH and Mobile/Tablet app), the feature is designed to provide a guided means of training new application users on how Dynamics 365 works within the specific context of your business. 
Induction and new user training can be one of the major hurdles that can affect the success of a system deployment, so having a very contextualised, built-in and guided process of training and reminding users how to complete tasks within the application can surely be an important tool for any organisation to have at their disposal.\nUnfortunately, the feature looks to be a little bit too intrusive from an end user experience viewpoint, as leaving the feature enabled post-upgrade will result in the application attempting to open pop-ups through your browser of choice:\nTo disable the feature until you are ready to start rolling it out across your business, you have two options at your disposal:\nDirect your end-users to select the Opt Out of Learning Path option from the gear icon on the D365E sitemap area: Go to System Settings and then select No under the Enable Learning Path option. This is the recommended option, as it will disable the feature across the board for all users: Modify Your Error Notification Preferences for all users\nError messages can occur occasionally within the web application. Generally, these will take the form of the Send Error Report To Microsoft variety, and can result from either a problem within the application itself or an error that has been caused by a developer customisation (e.g. JScript function, Sitemap amend etc.). The default setting for this is that users will be prompted before an error report is sent to Microsoft on these occasions. Having the default setting enabled can prove useful when diagnosing issues with the application, but could cause problems and distress for your end users if the application is throwing them regularly.\nWhether due to customisations involved as part of the above upgrade or a fault with D365E itself, these error messages seem to be thrown a lot more often in the latest version of the application; in fact, pretty much every time a user leaves a record. The error messages are sufficiently non-descript, and lack any reference to customisations made to the system (such as a JScript function name), that it is difficult to tell whether the problem lies with the customisations themselves. By selecting the option to send the error reports to Microsoft, you can ensure that these errors will be looked into and hopefully addressed as part of a security update in the future. But I would recommend, if you are upgrading to D365E, ensuring that you have selected the Automatically send an error report to Microsoft without asking the user for permission option on the Privacy Settings page, so that your end-users are not bombarded with constant error messages:\nDon\u0026rsquo;t Upgrade Just Yet If You Are Using Scribe Online\nScribe Online is currently one of the de-facto tools of choice if you are looking to accomplish very basic integration requirements around CRM. The tool enables you to straightforwardly export your application data into external sources - whether they are SQL-based data sources or even completely different applications altogether. I have not had much direct experience with the tool myself, but I can attest to its relative ease-of-use; I do take issue, however, with how the tool operates within a CRM environment. For example, it creates a custom entity directly within your CRM instance within the default solution, using the default solution prefix (new_).
Most ISV solutions instead deploy any required customisations out to the application using the much better supported and best practice route of Managed Solutions, allowing application administrators to better determine which components are deployed out as part of a 3rd party solution and to expedite any potential removal of the solution in future. Having said all that, Scribe Online should be your first port of call if you have a requirement to integrate with external systems as part of your CRM solution.\nNow, I deliberately avoided mentioning D365E in the above paragraph, as it looks as if the Scribe Online tool has issues either as a direct result of the upgrade process involved with D365E or due to Scribe Online itself. Shortly after upgrading, the application/Scribe Online will modify the properties of your entity records to set the modifiedon field to the same value for every single entity record. If the number of records in your entity exceeds the default amount of records that can be returned programmatically (thereby requiring the use of a paging cookie), then Scribe will return an error message similar to the below when it next attempts to run your RS solution:\nUnable to get the next page of data. Dynamics CRM has not advanced the page cookie for Entity:new_mycustomentity, PagingCookie: \u0026lt;new_mycustomentityid last=\u0026quot;{A56661B7-C969-E611-80EF-5065F38A8A01}\u0026quot; first=\u0026quot;{797EAA25-1645-E611-80E1-5065F38A4AD1}\u0026quot; /\u0026gt;\nThis issue looks to be occurring for other organisations who have upgraded as well, and Scribe have published an online support article with a suggested workaround for this situation provided by Felix Chan:\nTo work around the issue, we used JavaScript with the Microsoft Dynamics 365 Web API to update all of the account records by changing the value of a field we don\u0026rsquo;t use (e.g. telephone3) from null to \u0026quot;\u0026quot; (which translated back to null). Needless to say, this effectively updated the modifiedon datetime stamp. It also resulted in the change to Telephone 3 to show up in the Audit History of each account record.\nThe above Workaround is all very well and good if you dealing with a small number of records and have the appropriate knowledge on how to implement some form-level JScript functions. But my concern will be for organisations who lack this knowledge and are instead left with a solution that does not work. Despite not having firm proof of this either, I suspect that the issue is a fault with Scribe itself and not as a result of the upgrade. This is based solely on the value of the modifiedon field being well after the upgrade has taken place and during the time when our RS Solution was running. Scribe need to ideally acknowledge the existence of this issue and confirm what is causing the error to take place; but, in the meantime, if you are reliant on Scribe Online for business-critical integrations, I would strongly recommend to hold off on upgrading until this issue is acknowledged or until you can identify a replacement service that does not suffer from this problem. 
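If you do need to apply the quoted workaround in the meantime and are comfortable with a little client-side code, the sketch below illustrates roughly what it could look like. To be clear, this is a hypothetical illustration rather than the exact script referenced above: the Account GUIDs would need to come from your own data, telephone3 is simply the unused field suggested in the quote, and the Web API version in the URL (v8.2 here) should be checked against your own instance before use.

// Hypothetical sketch only: PATCH a single Account via the Dynamics 365 Web API,
// touching an unused field (telephone3) so that the modifiedon value is advanced.
// Assumes it is run from within the application (e.g. from a form or the browser
// console on an open record), so that Xrm.Page.context is available.
function touchAccount(accountId) {
    var req = new XMLHttpRequest();
    req.open("PATCH", Xrm.Page.context.getClientUrl() + "/api/data/v8.2/accounts(" + accountId + ")", true);
    req.setRequestHeader("OData-MaxVersion", "4.0");
    req.setRequestHeader("OData-Version", "4.0");
    req.setRequestHeader("Accept", "application/json");
    req.setRequestHeader("Content-Type", "application/json; charset=utf-8");
    req.onreadystatechange = function () {
        // A successful PATCH returns 204 No Content
        if (this.readyState === 4 && this.status !== 204) {
            console.log("Update failed for " + accountId + ": " + this.statusText);
        }
    };
    // An empty string is saved back as null, but the update still bumps modifiedon
    req.send(JSON.stringify({ telephone3: "" }));
}

You would then loop through the Account GUIDs returned from a query (or an Advanced Find export) and call the function for each one - bearing in mind the Audit History side effect mentioned in the quote.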
In our case, we were only using Scribe Online to backup our application data to an Azure SQL database and were instead able to get up and running quickly with the rather excellent Dynamics 365 Data Export Service.\n","date":"2017-04-09T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/3-things-to-watch-out-for-when-upgrading-to-dynamics-365-for-enterprise/","title":"3 Things To Watch Out For When Upgrading to Dynamics 365 for Enterprise"},{"content":"When working with form-level JScript functionality on Dynamics CRM/Dynamics 365 for Enterprise (D365E), you often uncover some interesting pieces of exposed functionality that can be utilised neatly for a specific business scenario. I did a blog post last year on arguably one of the best of these functions when working with Lookup field controls - the Xrm.Page.getControl().addPreSearch method. Similar to other methods exposed via the SDK, its prudent and effective implementation can greatly reduce the amount of steps/clicks that are involved when populating Entity forms.\nI\u0026rsquo;ve already covered as part of last years post just what this method does, its sister method, addCustomFilter, and also some of the interesting problems that are encountered when working with the Customer lookup field type; a special, recently introduced field type that allows you to create a multi-entity lookup/relationship onto the Account and Contact entities on one field. I was doing some work again recently using these method(s) in the exact same conditions, and again came across some interesting quirks that are useful to know when determining whether the utilisation of these SDK methods is a journey worth starting in the first place. Without much further ado, here are two additional scenarios that involve utilising these methods and the \u0026ldquo;lessons learned\u0026rdquo; from each:\nPre-Filtering the Customer Lookup to return Account or Contact Records Only Now, your first assumption with this may be that, if you wanted your lookup control to only return one of the above entity types, then surely it would be more straightforward to just setup a dedicated 1:N relationship between your corresponding entity types to achieve this? The benefits of this seem to be pretty clear - this is a no-code solution that, with a bit of ingenious use of Business Rules/Workflows, could be implemented in a way that the user never even suspects what is taking place (e.g. Business Rule to hide the corresponding Account/Contact lookup field if the other one contains a value). However, assume one (or all) of the following:\nYou are working with an existing System entity (e.g. Quote, Opportunity) that already has the Customer lookup field defined. This would, therefore, mean you would have to implement duplicate schema changes to your Entity to accommodate your scenario, a potential no-no from a best practice point of view. Your entity in question already has a significant amount of custom fields, totalling more than 200-300 in total. Additional performance overheads may occur if you were to then choose to create two separate lookup fields as opposed to one. The entity you are customising already has a Customer lookup field built in, which is populated with data across hundreds, maybe thousands, of records within the application. Attempting to implement two separate lookups and then going through the exercise of updating every record to populate the correct lookup field could take many hours to complete and also have unexpected knock-on effects across the application. 
In these instances, it may make more practical sense to implement a small JScript function to conditionally alter how the Customer Lookup field allows the user to populate records when working on the form. The benefit of this being is that you can take advantage of the multi-entity capabilities that this field type was designed for, and also enforce the integrity of your business logic/requirements on the applications form layer.\nTo that end, what you can look at doing is applying a custom FetchXML snippet that prevents either Account or Contact records from returning when a user clicks on the control. Paradoxically, this is not done by, as I first assumed, using the following snippet:\nvar filter = \u0026#34;\u0026lt;filter type=\u0026#39;and\u0026#39;\u0026gt;\u0026lt;condition attribute=\u0026#39;accountid\u0026#39; operator=\u0026#39;not-null\u0026#39; /\u0026gt;\u0026lt;/filter\u0026gt;\u0026#34;; Xrm.Page.getControl(\u0026#34;mycustomerlookupfield\u0026#34;).addCustomFilter(filter, \u0026#34;account\u0026#34;); This will lead to no records returning on your lookup control. Rather, you will need to filter the opposite way - only return Contact records where the contactid equals Null i.e. the record does not exist:\nvar filter = \u0026#34;\u0026lt;filter type=\u0026#39;and\u0026#39;\u0026gt;\u0026lt;condition attribute=\u0026#39;contactid\u0026#39; operator=\u0026#39;null\u0026#39; /\u0026gt;\u0026lt;/filter\u0026gt;\u0026#34;; Xrm.Page.getControl(\u0026#34;mycustomerlookupfield\u0026#34;).addCustomFilter(filter, \u0026#34;contact\u0026#34;); Don\u0026rsquo;t Try and Pass Parameters to your addCustomFilter Function (CRM 2016 Update 1) If your organisation is currently on Dynamics CRM 2016 Update 1, then you may encounter a strange - and from what I can gather, unresolvable - issue if you are working with multiple, parameterised functions in this scenario. To explain further, let\u0026rsquo;s assume you have a Customer Lookup and a Contact Lookup field on your form. You want to filter the Contact Lookup field to only return Contacts that are associated with the Account populated on the Customer Lookup. Assume that there is already a mechanism in place to ensure that the Customer lookup will always have an Account record populated within it, and your functions to use in this specific scenario may look something like this:\nfunction main() { //Filter Contact lookup field if Customer lookup contains a value var customerID = Xrm.Page.getAttribute(\u0026#39;mycustomerlookupfield\u0026#39;).getValue(); if (customerID != null) { Xrm.Page.getControl(\u0026#34;mycontactfield\u0026#34;).addPreSearch(filterContactNameLookup(customerID[0].id)); } } function filterContactNameLookup(customerID) { var filter = \u0026#34;\u0026lt;condition attribute=\u0026#39;parentcustomerid\u0026#39; operator=\u0026#39;eq\u0026#39; value=\u0026#39;\u0026#34; + customerID + \u0026#34;\u0026#39; /\u0026gt;\u0026#34;; Xrm.Page.getControl(\u0026#34;mycontactfield\u0026#34;).addCustomFilter(filter, \u0026#34;account\u0026#34;); } The above example is a perfectly sensible means of implementing this. Because, surely, it make more practical sense to only obtain the ID of our Customer Lookup field in one place and then pass this along to any subsequent functions? 
The problem is that CRM 2016 Update 1 throws some rather cryptic errors in the developer console when attempting to execute the code, and does nothing on the form itself:\nYet, when we re-write our functions as follows, explicitly obtaining our Customer ID on two occasions, this runs as we\u0026rsquo;d expect with no error:\nfunction main() { //Filter Contact lookup field if Customer lookup contains a value var customerID = Xrm.Page.getAttribute(\u0026#39;mycustomerlookupfield\u0026#39;).getValue(); if (customerID != null) { Xrm.Page.getControl(\u0026#34;mycontactfield\u0026#34;).addPreSearch(filterContactNameLookup); } } function filterContactNameLookup() { var customerID = Xrm.Page.getAttribute(\u0026#39;mycustomerlookupfield\u0026#39;).getValue()[0].id; var filter = \u0026#34;\u0026lt;condition attribute=\u0026#39;parentcustomerid\u0026#39; operator=\u0026#39;eq\u0026#39; value=\u0026#39;\u0026#34; + customerID + \u0026#34;\u0026#39; /\u0026gt;\u0026#34;; Xrm.Page.getControl(\u0026#34;mycontactfield\u0026#34;).addCustomFilter(filter, \u0026#34;account\u0026#34;); } I\u0026rsquo;ve been scratching my head at why this doesn\u0026rsquo;t work, and the only thing I can think of is that the first function - main - would be executed as part of the forms OnLoad event, whereas the filterContactNameLookup is only triggered at the point in which the lookup control is selected. It\u0026rsquo;s therefore highly possible that the first instance of the customerID is unobtainable by the platform at this stage, meaning that you have to get the value again each time the lookup control is interacted with. If anyone else can figure out what\u0026rsquo;s going on here or confirm whether this is a bug or not with Dynamics CRM 2016 Update 1, then do please let me know in the comments below.\nConclusions or Wot I Think It could be argued quite strongly that the examples shown here in this article have little or no use practical use if you are approaching your CRM/D365E implementation from a purely functional point of view. Going back to my earlier example, it is surely a lot less hassle and error-prone to implement a solution using a mix of out of the box functionality within the application. The problem that you eventually may find with this is that the solution becomes so cumbersome and, frankly, undecipherable when someone is coming into your system cold. With anything, there always a balance should be striven for on all occasions and, with a bit of practical knowledge of how to write JScript functionality (something that any would-be CRM expert should have stored in their arsenal), you can put together a solution that is relatively clean from a coding point of view, but also benefits from utilising some great functionality built-in to the application.\n","date":"2017-04-02T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/more-adventures-in-pre-filtering-customer-lookup-fields-dynamics-crmdynamics-365-for-enterprise/","title":"More Adventures in Pre-Filtering Customer Lookup Fields (Dynamics CRM/Dynamics 365 for Enterprise)"},{"content":"I did a blog post a few weeks ago discussing the new Dynamics 365 for Enterprise (D365E) exams that have been steadily coming out since the start of the year. 
At the time, I mused that there may be more to expect in the future exam-wise:\nI would hope that these exams are a stop-gap for a completely new range of exams that will be released in the future, that place the D365E application front and centre with the other much-loved favourites in the Microsoft \u0026ldquo;family\u0026rdquo;.\nAt the time of writing this, I had very much intended \u0026ldquo;future\u0026rdquo; to be a long time away - certainly 2018 at the earliest. With this being said, it was therefore incredibly surprising/pleasing to find out earlier this week of the introduction of two new certification types for D365E: The Microsoft Dynamics 365 MCSA and the Business Applications MCSE:\nMCSA and MCSE Certifications for Dynamics 365 https://t.co/GXwIIWtkfm\n\u0026mdash; Julian Sharp | MCT | MVP (@juliandynamics) March 21, 2017 Big thanks to CRM trainer extraordinaire, Julian Sharp, for posting about the release and bringing it to everyone\u0026rsquo;s attention. AX fans can also rejoice, as an MCSA has also been released that covers the new version of this application, rebranded as Dynamics 365 for Operations. Previously, AX was in the same boat as Dynamics CRM in being restricted to Specialist-level exams only.\nHow To Get the New Certifications Both the MCSA and MCSE are designed to fit around the current exam lists that have been released for Dynamics 365, specifically all of the ones that I have looked at previously on the blog. This means that, in practice, there is nothing additional or special that you need to prepare for; and, to be honest with you, CRM exam veterans should find little difficulty in obtaining the certifications. Here\u0026rsquo;s why, as I summarise what you need to do get each respective certification:\nMCSA: Pass exam MB2-715 (Microsoft Dynamics 365 customer engagement Online Deployment) and MB2-716 (Microsoft Dynamics 365 Customization and Configuration) MCSE: Obtain either the Dynamics 365 or Dynamics 365 for Operations MCSA and then pass an additional exam relating to a specific application area - for D365E, this can either be MB2-717 (Microsoft Dynamics 365 for Sales) or MB2-718 (Microsoft Dynamics 365 for Customer Service). Like with other MCSE\u0026rsquo;s, you will need to \u0026ldquo;keep up with the Joneses\u0026rdquo; each year by passing a fresh exam to maintain your certifications. At this stage, it is not clear what exam this will need to be, although I suspect it will be the exam that you didn\u0026rsquo;t pass the first time i.e. if you pass MB2-717, then you will need to pass MB2-718 the next year. So it might very well be the case that, if you have passed any or all of the above exams already, then there will be something new and shiny on your exam transcript from Microsoft. 🙂\nHow the certifications stack up against other MCSA\u0026rsquo;s/MCSE\u0026rsquo;s Looking at how the MCSA compares with some of the other ones out there, there is definitely less exam content that you need to thoroughly learn before sitting the exams. That being said, the Dynamics 365 MCSA is very much geared to the types of skills that are measured as part of other MCSA\u0026rsquo;s - namely, how to setup the application in question and how to manage it. It is only when you start to get into the upper echelons of the MCSE that you start to see specialisation in specific application areas. This is similar to how SQL Server Reporting Services is treated, as there is very little exam material covering this at SQL Server MCSA level. 
For D365E, this is a very good approach to take, as a lot of the information that you will need to learn as part of customizing CRM/D365E in the past is instantly applicable to the Sales/Case Management exams, and even gives you a head-start in making assumptions about how these system entities operate.\nWhen I look at the requirements needed for the MCSE, I can\u0026rsquo;t help but feel that the exam requirements are somewhat simple (without wanting to be too glib, given that I have yet to sit either exam!). Having said that, it does seem that the requirements for MCSE\u0026rsquo;s have been relaxed across the board and the re-certification requirements have also been overhauled to take into account the increased frequency of releases across the Microsoft range of products.\nWhy their introduction is so important The success of D365E and, indeed, the entire rebranding of the range of Dynamics applications depends solely on how the range of applications are perceived within the range of other products in the \u0026ldquo;Microsoft family\u0026rdquo;. If there is an imbalance at all anywhere in the chain, then customers who are evaluating the product are not going to take a second look at it. Because, let\u0026rsquo;s face it - if the organisation that is selling the product does not seem to care about it, why should you?\nI think back to a recent evaluation that I did of an anti-virus and device endpoint encryption product from one of the largest computer technology companies in the world. The product in question was acquired a few years back by the company and, when evaluating it, it was indeed an excellent and perhaps greatly innovative solution to have in place within an organisation. Our interest was killed quickly by the following factors:\nIt took weeks to arrange a demo. As we found out after the fact, the demo request went through the corporate maze, as no one could figure out who was responsible for carrying out pre-sales demos. After the demo was scheduled and completed, attempting to obtain pricing information was nigh on impossible; again, it went through the corporate maze and we gave up in the end due to the delay. The product itself was not mentioned in great detail on the organisation\u0026rsquo;s website, only a token page or 2 outlining what it is and what it does (in a very poor manner) All of the above stems from the fact the organisation was not actively behind the product at every single opportunity; even a weakness in one of these areas could make or break the success of a product when presented to potential customers.\nEducation and certification is an important element of this, as organisations can take comfort that a product has a range of effective and recognisable certifications that demonstrate an individuals or organisations competency in delivering solutions utilising it. From a Microsoft standpoint, the MCSA and MCSE are the gold standard of accreditations. If a Microsoft product does not have a corresponding MCSA at the very least, you can bet that it is not a great product or has not been given the love and attention needed to bring it to the forefront as part of potential sales opportunities. Now that we finally have MCSA and MCSE qualifications for D365E, we can now start to say definitively that this is the time for Dynamics 365. 
No longer is the product just the black horse contender for CRM/ERP king amongst the likes of SalesForce and Oracle; it is an established presence and very much here to stay.\n","date":"2017-03-26T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/evaluating-the-new-dynamics-365-for-enterprise-mcsa-and-mcse/","title":"Evaluating the new Dynamics 365 for Enterprise MCSA and MCSE"},{"content":"The ability to modify the values within Option Set fields on a Dynamics CRM/Dynamics 365 for Enterprise (D365E) form is, depending on your business scenario, one of the best reasons to learn how to write basic JScript form functions. Microsoft makes available 3 methods via the Xrm.Page.ui control (which can be handily shortcutted to Xrm.Page.getControl):\naddOption - Lets you add a new Option Set value, with the ability to optionally specify its position within the index. clearOptions - Removes all options from an Option Set field. removeOption - Allows you to remove a specific Option Set field value. With these options at your disposal, you can begin to leverage functionality to accommodate the following business logic:\nOnly allow an Option Set to be populated with values (either all or just specific ones) once information has been entered into other fields. Dynamically change the options available for selection, based on information entered within the primary entity form, or even a related one. Now the key takeaway with all of this is that, ultimately, your code cannot do anything that would make the \u0026ldquo;base\u0026rdquo; Option Set collection invalid. For example - let\u0026rsquo;s assume you have a field called Fruit with the Option Set values of Apple, Pear and Apricot. If you then tried to introduce the value Banana on the form level, your code will more than likely error when saving the record.\nFrom a CRM/D365E point of view, there are two additional field types which are also technically classed as Option Sets, thereby allowing you to utilise the above methods with them - the Status and Status Reason fields. If you look closely at the Status Reason field within the Customizations area of the application, you can see why: just like option sets, you specify a label and underlying value for it, that is dictated by your solution publisher:\nThe only difference worth noting is that, unlike normal Option Set fields, you have no choice when it comes to which underlying value is used. If your organisation is prone to changing Status Reason values often for an entity, you may begin to notice large gaps in Option Set values over time; annoying, but I can understand why it\u0026rsquo;s in place.\nAll of the above code examples should be sufficient for common scenarios, such as when you want to remove single option set values or are interacting with the Status Reason field on the main part of the form. What I wanted to do as part of today\u0026rsquo;s blog post is highlight two non-standard scenarios for the above and illustrate a solution for each - which, when brought together, demonstrate an example I was recently involved in developing a solution for.\nThe Problem The business in question was using an entity across multiple parts of their business. On the entity, there was then a flag field to indicate which side of the business the entity record belonged to. 
This would then assist when running reports and, when coupled with Business Unit segregation for the records, ensured that colleagues in the business only saw records that they needed to as part of their day-to-day usage of the application.\nBecause the entity was shared, fields within the entities - such as Status Reason - were also shared and, by implication, contained values that were only relevant to a specific part of the business. We were therefore tasked with finding a solution to ensure that the Status Reason value list was modified for each person to reflect the record type they were working with. Colleagues primarily worked within the Web Application and the Status Reason field was on each form, within the Header area.\nSolution #1: Efficiently Removing Multiple Option Set Values When we take a look at the code examples, we are given pretty much everything we need to start implementing a solution. So if we assume that our Status Reason field contains 15 values, 10 of which we want to remove, we may be tricked into writing the following code and doing the good ol\u0026rsquo; copy \u0026amp; paste:\nXrm.Page.getControl(\u0026#39;statuscode\u0026#39;).removeOption(100000000); Xrm.Page.getControl(\u0026#39;statuscode\u0026#39;).removeOption(100000001); Xrm.Page.getControl(\u0026#39;statuscode\u0026#39;).removeOption(100000002); Xrm.Page.getControl(\u0026#39;statuscode\u0026#39;).removeOption(100000003); Xrm.Page.getControl(\u0026#39;statuscode\u0026#39;).removeOption(100000004); Xrm.Page.getControl(\u0026#39;statuscode\u0026#39;).removeOption(100000005); Xrm.Page.getControl(\u0026#39;statuscode\u0026#39;).removeOption(100000006); Xrm.Page.getControl(\u0026#39;statuscode\u0026#39;).removeOption(100000007); Xrm.Page.getControl(\u0026#39;statuscode\u0026#39;).removeOption(100000008); Xrm.Page.getControl(\u0026#39;statuscode\u0026#39;).removeOption(100000009); Now, this code will work fine and can be pretty clearly deciphered. The problem lies in the number of lines it is expressed in (thereby increasing load times/processing time when CRM is processing your function). We can get round this by thinking back to Programming 101 and making the correct assumption that JScript has the ability to loop through similar commands - in this case, via the for Loop. By introducing an array into the mix as well, we can then express our code much more simply:\nvar statusCodes = [100000000, 100000001, 100000002, 100000003, 100000004, 100000005, 100000006, 100000007, 100000008, 100000009]; for (index = 0; index \u0026lt; statusCodes.length; ++index) { Xrm.Page.getControl(\u0026#39;statuscode\u0026#39;).removeOption(statusCodes[index]); } Solution #2: Working with Header Controls A.K.A. Why I Hate \u0026lsquo;Object reference not set to an instance of an object\u0026rsquo; Errors I perhaps should have prefaced the above when I said \u0026lsquo;Now this code will work fine\u0026rsquo;. 🙂 Going back to the task at hand, when I attempted to deploy a version of the above code into the target environment, I immediately got the dreaded error message referenced above. This is generally the hallmark error to steer you towards checking your code for any typos, as the error message is basically telling you it can\u0026rsquo;t find the thing you are asking it to - in this case, the field statuscode.\nAfter triple-checking the exact spelling of statuscode and making sure the field was on the form, I did some further diving into Developer Tools with the form loaded to figure out just what the hell was going wrong. 
When performing a search for the statuscode field on the DOM Explorer, I noticed that I got a few hits for statuscode - but that it was prefaced with the value header_. I followed this up with some further research online, which confirmed that, if fields in the Header need to be referenced, then they should be prefaced with header_. So our for Loop example would need to look as follows instead:\nfor (index = 0; index \u0026lt; statusCodes.length; ++index) { Xrm.Page.getControl(\u0026#39;header_statuscode\u0026#39;).removeOption(statusCodes[index]); } Bringing it All Together After encapsulating all of the above into a single function, removeStatusCodes(), we can then call the following snippet as part of an OnLoad function to hide our Status Reason values if the flag field is set to the correct value:\nif (Xrm.Page.getAttribute(\u0026#39;new_myflagfield\u0026#39;).getValue() == \u0026#34;100000000\u0026#34;) { removeStatusCodes(); } This will then work as intended and ensure that end-users only see the Status Reason values that directly concern them; no doubt assisting greatly in making the system as streamlined as possible to use and to avoid any potential for data-entry problems further down the line.\n","date":"2017-03-19T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/removing-multiple-status-reason-values-in-the-header-area-dynamics-crmdynamics-365-for-enterprise/","title":"Removing Multiple Status Reason Values in the Header Area (Dynamics CRM/Dynamics 365 for Enterprise)"},{"content":"Perhaps one of the best pieces of news arising from the detail behind the Dynamics 365 for Enterprise (D365E) announcement was the introduction of a number of freebies that are included as part of any subscription. Previously, under CRM Online licensing, you would have to purchase a number of Professional licenses before getting any free items. Now, regardless of the number of user licenses in your subscription, new and existing customers get all of the following at no additional charge:\n10GB of total database storage across all of your instances (Production/Sandbox) Free Sandbox instance Free Portal Add-on services, such as Microsoft Flow and Project Online. Given that the monthly cost for most of the above previously amounted to a significant figure in the hundreds of £\u0026rsquo;s, it is good to now see that arguably essential subscription elements (for example, Sandbox instances, to ensure a separate development/testing environment for customisations) are included at no additional charge. The question of whether or not this is value for money, however, is something that will likely depend on the size of your D365E deployment and the number/type of user licenses involved.\nWith this in mind, there is doubtless a high amount of impetus to encourage organisations who are currently on Dynamics CRM 2016 Update 1 or earlier and/or are still on the \u0026ldquo;old\u0026rdquo; Office 365 SKUs for CRM Online to start migrating across to one of the new plans - either Enterprise Plan 1, Dynamics 365 for Sales/Field Service etc. or Team Members. Organisations who are still on CRM Online plans can choose to either upgrade now or when their plan retires, in addition being able to take advantage of transition upgrade pricing. In some cases, this can amount to an average reduction in monthly prices of up to 20%. 
Therefore, if you are currently paying for excess storage at £7.50 per GB per month, additional sandbox instances at £113.10 per month and a Portal instance at a whopping £377 per month, then there may be a good business case for not waiting until renewal and upgrading straight away to the new D365E SKUs.\nThis was certainly the case with an organisation I was working with previously. The deployment was rather small in nature, approximately 60-70 users in total. The majority of licenses were allocated towards Basic and Essential license types. As such, the organisation was unable to benefit from the previous offer of a free sandbox instance with 25 CRM Professional licenses and additional, free database storage with more Professional users; meaning that they had to buy 2 additional sandbox instances and a large amount of additional storage to cover their requirements as part of the solution. The organisation was, therefore, an excellent candidate to transition their CRM Online Professional licenses across to the Plan 1 Enterprise license types.\nWith the above TechNet article open and during an appropriate out of hours timeslot, I was then tasked with carrying out the license migration. I was relieved that the process went largely to plan, with no major hiccups. I was able to confirm successfully that the free sandbox instance appeared within the Dynamics 365 Administration Centre after purchasing the new plans and did not vanish when cancelling the CRM Online Professional subscription. However, the same could not be said for the free storage. In the past, my colleagues and I have observed that purchasing additional storage can take some time to appear on the Dynamics 365 Administration Centre - sometimes up to an hour or more. After noting no change in the storage count after waiting this long, it definitely looked as if something had gone wrong as part of the upgrade. 😕\nAfter temporarily adding on some additional storage to cover the amount that we expected to gain as part of the transition and waiting until the next weekday, I opened a support ticket with Microsoft to clarify the situation regarding the additional 5GB of free storage we were expecting to receive and to determine whether something had in fact gone wrong. In good time, I was duly informed of the following by a Microsoft support representative:\nWe informed you that \u0026ldquo;to get 5gb free storage for their organisation, update of CRM organisation to 8.2 is recommended and you will receive this free storage whenever your organization will be updated\u0026rdquo;.\nAt this juncture, it should be noted that one crucial piece of information has been left out of the above 🙂 All of the organisation\u0026rsquo;s CRM instances were at version 8.1. According to the above, then, one (or all) of the organisation\u0026rsquo;s Dynamics CRM 2016 Update 1 instances need to be upgraded to D365E to take advantage of the free storage offer.\nSince the above incident, we have scheduled all of the organisation\u0026rsquo;s instances to be upgraded to 8.2, i.e. to D365E. I will report back after this upgrade has been completed to confirm the presence (or lack thereof) of the additional 5GB storage. Why the scepticism? I find it rather strange that you have to upgrade all of your instances to the latest version to take advantage of the new storage offer. I was always under the impression that the storage \u0026ldquo;layer\u0026rdquo; of your CRM/D365E instances is separate from the instances themselves. 
This being the case, I had therefore assumed that the SKU change would have been the flag to tell Administration Centre to add on an additional 5GB storage. This would appear to be how the additional Sandbox instance worked because, as demonstrated above, the free instance updated into Administration Centre without issue. The world of CRM/D365E can always throw up interesting and bizarre behaviours, so I won\u0026rsquo;t rule anything out at this stage. Stay tuned\u0026hellip;\nUPDATE (28/03/2017): Well, I can confirm that the support engineer is correct and at least one of your instances as part of your subscription needs to be running 8.2 for the free additional storage to appear on the portal. So if you have a spare instance and are itching to get your hands on additional database space, then I would recommend that you reset one of your spare sandbox instances to the latest version via the Administration Centre.\n","date":"2017-03-12T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/claiming-your-free-additional-storage-dynamics-365-for-enterprise/","title":"Claiming Your Free Additional Storage (Dynamics 365 for Enterprise)"},{"content":"In the world of Dynamics CRM/Dynamics 365 for Enterprise (D365E), the start of the year generally means the introduction of new exams in line with the latest version of the product. 2017 is no exception to this rule and, at the time of writing, there are 4 new D365E exams that you can start to get your teeth into:\nMB2-715: Microsoft Dynamics 365 customer engagement Online Deployment MB2-716: Microsoft Dynamics 365 Customization and Configuration MB2-717: Microsoft Dynamics 365 for Sales MB2-718: Microsoft Dynamics 365 for Customer Service Microsoft appears to be drip feeding the current wave of exams this time around: the first exam to popup was MB2-716 at the start of February, with the remaining exams cropping up over the last week or so. What\u0026rsquo;s also worth noting is that the current exam list for D365E is only viewable via the US Microsoft Learning site; if you are UK based like me, then the current Dynamics certification page makes no mention of the new exams\u0026hellip;yet. I seem to remember this being a problem last year as well and, like back then, you can still book your exam and sit it in your country of choice by simply going through the US Microsoft Learning Website.\nExams present a good opportunity to re-familiarise yourself with areas of a particular product that you have not had much exposure to previously, as well as introducing you to anything new that has been introduced over the past year. In this week\u0026rsquo;s blog post, I will take a closer look at the new exams and the differences that new and previous candidates should make note of before preparing to revise.\nCustomer Organizational Structure: What It Is and Why You Shouldn\u0026rsquo;t Worry Both the MB20715 and MB2-717 dedicate a significant percentage of exam performance on a candidate\u0026rsquo;s ability to \u0026lsquo;Create a Customer Organizational Structure\u0026rsquo;. Exam veterans may initially be put off by this terminology, as it is not something that has ever been referenced previously. Upon closer inspection of both exams, the skills measured differs, compounding any potential confusion. Fortunately, the top-level terminology is more confusing than what is underneath. 
To simplify things for those who may be still scratching their heads, here is a breakdown for each exam of what you will need to focus on:\nMB2-715 Support the Microsoft Dynamics 365 client environment: This covers things such as knowing which browsers are compatible with D365E, which mobile devices/operating systems that the mobile app support and also minimum software/hardware requirements for the D365E App for Outlook (Note: this is NOT the same as the Dynamics 365 for Outlook). Deploy Microsoft Dynamics: This will likely cover what license types are available, what permissions they grant across the application and also what features you get as part of a subscription. For example, remember that subscriptions now include a free sandbox and 10GB database storage. Import Data into the Microsoft Dynamics 365 Environment: This will cover the Data Import Wizard and all its subtle nuances, as well as the new Data Loader service (surprising, given that it is still in preview apparently). Manage the Microsoft Dynamics 365 Environment: This is likely to cover all of the Office 365 \u0026ldquo;soft skills\u0026rdquo; that are required as part of managing D365E Online and, rather interestingly, Single Sign-On (SSO) via Active Directory Federation Services (ADFS) - something that has only ever come up as part of On-Premise exams previously. MB2-717 Manage Customer Records: This will include topics covering your \u0026ldquo;basic\u0026rdquo; record types and how they behave (Accounts, Contacts etc.), as well as having to demonstrate knowledge of Business Units and how to structure the application to match a business hierarchy. Manage the Sales Process: This will cover your full sales qualification process - Lead to Opportunity to Quote to Order to Invoice - and how these record types interact, the unique behaviours of each and potentially some stuff covering Business Process Flows. Manage Customer Communication: Same as the above, this will be focused towards knowledge of Opportunity and Lead records. There may also be a sneaky question or 2 about Social Engagement chucked in, based on the terminology used. Manage sales literature and competitors: This will include working with document templates, the Competitor record type and potentially some questions regarding Connections and Connection Roles. So on balance, nothing too scary as part of the above for those who have sat previous exams. That\u0026rsquo;s why it\u0026rsquo;s always important to dig deeper behind a headline to get the true story underneath!\nAnd It\u0026rsquo;s Goodnight From Me: Saying Farewell to the On-Premise Exam One notable absentee from the list of new exams is the On-Premise Installation exam. The previous exam for 2016, MB2-711, demonstrates a candidate\u0026rsquo;s proficiency in installing and administrating the On-Premise version of Dynamics 2016; something which, based solely on my own experience managing an on-premise lab environment, is no small feat. Now it very much looks if this exam has gone the way of the Dodo. As highlighted by legendary CRM/D365E MVP Mark Smith, there is currently no content on the Dynamics Learning Portal/Imagine Academy that covers On-Premise installation of D365E.\nAlthough the retirement of this exam type (if true) does come with some drawbacks for those who may be tasked with supporting on-premise versions of the application in the near future, it is perhaps not surprising. 
The key thing that Microsoft have been trying to highlight as part of the D365E release is the clear benefit of the cloud version of the product over its companion, self-hosted versions. This is why Microsoft have been offering incentivised upgrade pathways, sprinkled with a generous helping of price reductions, to motivate organisations to move to the Online version of the product. Whilst On-Premise D365E will continue to have a role to play in the months and years ahead - which is why Microsoft offer Dual Use Rights with online subscriptions (see below) - its role will be relegated to merely providing organisations with an offline mechanism for deploying development/test environments within their own infrastructure.\nWith regards to some of the topic areas covered by the former On-Premise Installation exam - such as Server-Side-Synchronisation and CRM for Outlook - you can be assured/annoyed at the fact that these topics are picked up within the new MB2-715 exam instead. So don\u0026rsquo;t take these subjects too lightly when revising. 🙂\nMissed Opportunities As we welcome the new exams and what they can offer, they also present an opportunity to evaluate what is missing and what could be improved upon in the future. With this in mind, here are a few things that are a shame to be have been missed as part of this wave of exam releases:\nWith the retirement of MB2-701: Extending Microsoft Dynamics CRM 2013 at the end of the last year, the death knell was signalled for Developer CRM/D365E certifications. With no current exam on the horizon to replace MB2-701, this presents a major missed opportunity. Familiar readers of the blog will know that I have railed against this in the past, chiefly for the reason is that it creates a lack of incentivisation for existing functional CRM consultants or developers new to the product to take a dive and learn what is possible via the platform through coding. I hope that this is eventually addressed and that we see an Extending Dynamics 365 exam or similar released in future. I did a post last year discussing the possible imminence of a CRM Portal exam, based on evidence garnered from the Adxstudio website. CRM Portals is such a huge product in of itself, that presents its own unique blend of learning curves and challenges when coming from a purely CRM-focused background. Having an exam dedicated solely to this presents, in my view, the surest way pathway for those interested in implementing the product as part of future projects to get running with it. This being the case, it is a shame that a Portal Exam has not yet been included as part of the above list. I did hear some rumours last year that Microsoft was planning on \u0026ldquo;resetting\u0026rdquo; the current state of affairs regarding CRM/D365E exams and their status within the Microsoft certification hierarchy. Unlike the \u0026ldquo;big hitters\u0026rdquo; in the Microsoft range of products, such as Azure and Office 365, which have MCSA/MCSE level qualifications, CRM/D365E have continually been relegated to Microsoft Specialist level for each of the exams passed; something which, I have to admit, does not look as snazzy on your C.V. :( I was hoping that with the love and attention shown to CRM last year as part of the D365E rebranding, that we would see a brand new D365E MCSA released. Perhaps this may happen in the future, as I believe this is one of the ways that Microsoft can clearly signal the importance of D365E moving forward. 
Conclusions or Wot I Think I have yet to sit any of the new exams, although it is something that I am tentatively planning for over the next couple of months. It will be interesting to see how the experience differs compared to previous exams, if at all. Despite the rebranding, the content of these exams feels very safe on balance; i.e. the structure is largely identical to their equivalent 2016 versions, with a slight peppering of new content to cover some of the mooted new features within the product. Some new features appear to have been left out altogether - for example, there is no specific mention of some of the new Process updates or even the new built-in Sitemap editor. I would hope that these exams are a stop-gap for a completely new range of exams that will be released in the future, that place the D365E application front and centre with the other much-loved favourites in the Microsoft \u0026ldquo;family.\u0026rdquo; This would also have the added benefit of providing candidates with the opportunity to more clearly specialise within non-traditional areas of the application.\n","date":"2017-03-05T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/whats-new-in-the-dynamics-365-for-enterprise-specialist-exams/","title":"What's New in the Dynamics 365 for Enterprise Specialist Exams"},{"content":"A software upgrade/update always starts out with such good intentions and hopeful optimism. There are perhaps two opposing schools of thought that emerge when it comes to the merits and necessity of always ensuring you\u0026rsquo;re on the latest version of an application. The arguments for and against will generally look something like this:\nIn Favour\nHaving the latest version protects against potential exploits or security vulnerabilities present within an older version of the application. The latest version of the application is easier to use, supports X feature etc. The application will be end of life in the very near future, so we need to be proactive as opposed to reactive* Against\nThe upgrade will take time to complete - we will have to perform in-depth testing and schedule time outside of normal business hours to carry this out. As part of the XXX update, X feature is no longer available and we have to use Y feature instead. The current version of the application works just fine - upgrading could cause us problems as a result of a bad security patch or similar. *\u0026ldquo;very near future\u0026rdquo; is becoming more commonplace these days, particularly with Microsoft Cloud products. For example, Office 2013 is technically considered end of life for ProPlus customers at the end of this month.\nWhilst I would argue strongly that keeping your important business applications up to date should always be an ultimate priority, the reality is less straightforward and even I have fallen foul of this in the past. Case in point: I recently upgraded to the Windows 10 Anniversary Edition from Windows 10, on a personal machine that had Hyper-V installed and a number of virtual images. The update went fine, but I was informed after the update that the latest version of .NET Framework needed to be downloaded. 
I dismissed the error message, as I was in the middle of something else, and then spent many hours later on attempting to figure out why the update had removed the Hyper-V program feature from my machine; after researching, I determined it was because of the prompt I had received when first booting up Windows and that the updated version of Hyper-V required the latest .NET Framework. I was able to get the role installed and re-configure all of my virtual images accordingly, but it did take some time and was definitely an unwelcome distraction! Suffice to say, an upgrade can never go exactly to plan, which is why I would always encourage the need for dedicated testing environments within your business for your primary IT systems. This will grant that you sufficient latitude to perform all required testing of an update and to give you the confidence that it can be deployed safely into your production environment(s).\nOf course, the above does not help very much if you are upgrading your test environment and everything goes wrong there, such as what happened to me recently. The business in question was wanting to upgrade from Visual Studio 2013 to Visual Studio 2015. Their development environment was a virtualised, remote desktop server, which all of the developers logged into as their primary working environment. Development was carried out using the \u0026ldquo;out of the box\u0026rdquo; templates included in Visual Studio (C#, ASP.NET etc.) and also using SQL Server Data Tools for BIDS/SSIS development. All projects/solutions were stored in a central Git repository.\nThe process of installing Visual Studio 2015 and the current \u0026ldquo;production ready\u0026rdquo; 16.5 version of SQL Server Data Tools for Visual Studio 2015 went (rather too) swimmingly, and we began to run tests confirming that all Team Services projects opened without issue. We immediately came across an issue when attempting to open certain .rdl report files - Visual Studio would hang for about 10-15 minutes every time the report was opened, with the following prompt stuck on the screen and the application remaining in a non-responsive state:\nThe report would open fine in the end, but the issue was repeated whenever the report was re-opened.\nWe initially tried the following in an attempt to resolve the problem:\nRe-cloned the repository - no joy. Attempted to open the report from VS 2013 - the report opened fine without issue, so definitely a problem with VS 2015 Created a brand new Report Project template in VS 2015, added the report into the project (both as a copy and as a new report, with the underlying .xml definition copy + pasted) and then tried re-opening - the same issue occurred. Being officially stumped at this juncture, I then did some further research online to see whether anyone else had encountered the same issue. Fortunately, I came across the following TechNet thread which contained the exact same symptoms we were experiencing:\nhttps://social.technet.microsoft.com/Forums/sqlserver/en-US/ba55ce1b-0bac-4997-9e02-8748dfd38fae/opening-large-reports-in-ssrs-2016-takes-a-long-time-after-migratting-from-ssrs2012?forum=sqlreportingservices\u0026amp;prof=required\nThe thread seemed to attract some confused answers (in the sense that they didn\u0026rsquo;t grasp the underlying problem), before petering out with no apparent solution. 
Without holding my breath too much, I replied to the thread in the hopes of getting a definitive answer, which I received in almost record time:\nYes we did, we got a fix from Microsoft: https://go.microsoft.com/fwlink/?linkid=837939. After installing the reports where opening fine.\nNot wishing to look a gift horse in the mouth at all, I did first double check the contents of the link to verify it - and it turned out to be 17.0 RC2 of SQL Server Data Tools for Visual Studio 2015. What\u0026rsquo;s worth noting is that the first hit on Google for SQL Server Data Tools Visual Studio 2015 is the download page for version 16.5 and not the following page that contains links to both versions:\nhttps://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt\nThose who have already read through the TechNet thread will know how things panned out in the end, but just to summarise - installing this fixed the issue. So major credit to Erwin de Kreuk for doing all of the \u0026ldquo;hard work\u0026rdquo; to finding the solution in this unusual case and in responding so quickly to my forum post. This is definitely a great example of the old adage \u0026ldquo;You don\u0026rsquo;t ask, you don\u0026rsquo;t get\u0026rdquo; and how the wider community can often prove invaluable when resolving an especially troublesome IT issue.\nSo what should you do if you are planning to upgrade SSDT from Visual Studio 2013 to Visual Studio 2015?\nThe key takeaway from the above should be the fact that a release-candidate version of SSDT provided a resolution to the problem at hand. It would, therefore, be foolhardy to recommend a general upgrade from VS 2013 whilst 17.0 remains at a release-candidate version. Given that this version was released at the start of the year, it is highly likely to expect that a production-ready version of 17.0 will be released in the very near future. I would recommend holding off on your upgrade if your organisation works with a large number of SSRS reports, lest you also fall foul of this surprisingly strange bug.\n","date":"2017-02-26T00:00:00Z","image":"/images/VisualStudio-FI.jpg","permalink":"/rdl-report-loading-issues-sql-server-data-tools-in-visual-studio-2015/","title":".rdl Report Loading Issues (SQL Server Data Tools in Visual Studio 2015)"},{"content":"Those who have experience working with an RDMS system like SQL Server will become accustomed towards a certain way of going about things. These can often involve a mixture of \u0026ldquo;lazy\u0026rdquo; query writing (e.g. using SELECT *\u0026hellip; as opposed to SELECT Column1, Column2\u0026hellip;), the manner in which you write your query (ALL CAPS or lower case) and best practice approaches. One arguable example of a best practice approach is the use of Stored Procedures. An illustration of how to use a Stored Procedure can most readily demonstrate their benefits. 
Take a look at the T-SQL query below, which should execute fine against the AdventureWorks2012 sample database:\nSELECT P.[FirstName], P.[LastName], E.[JobTitle], E.[HireDate], D.[Name] FROM [HumanResources].[Employee] AS E INNER JOIN [HumanResources].[EmployeeDepartmentHistory] AS DH ON E.[BusinessEntityID] = DH.[BusinessEntityID] INNER JOIN [HumanResources].[Department] AS D ON DH.[DepartmentID] = D.[DepartmentID] INNER JOIN [Person].[Person] AS P ON E.[BusinessEntityID] = P.[BusinessEntityID] WHERE DH.[EndDate] IS NULL AND E.[JobTitle] = \u0026#39;Production Technician - WC50\u0026#39; The query returns the data we need, but not in an efficient manner. Consider the following:\nExecuting a query like the above, in-code, as part of an end-user application could expose your database to the risk of an SQL Injection attack or similar. The query compromises a lot of information regarding our underlying database structure, information which any underlying client executing the query neither cares or should have to worry about. The example is a very precise query, with a specific function - i.e. get me all the current employees who have the Job Title of Production Technician - WC50. If we wanted to modify it to instead obtain all Senior Tool Designers, we would have to write a completely separate query to accommodate this. Implementing a Stored Procedure to encapsulate our query logic immediately addresses the above concerns, by providing us with a single-line query into the database, giving us just the data we need and enables us to utilise the query for other scenarios as well. Setting one up is very straight forward via the CREATE PROCEDURE command - the rest is pretty much what we have put together already:\nCREATE PROCEDURE dbo.uspGetEmployeesByJobTitle @JobTitle NVARCHAR(50) AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; SELECT P.[FirstName], P.[LastName], E.[JobTitle], E.[HireDate], D.[Name] FROM [HumanResources].[Employee] AS E INNER JOIN [HumanResources].[EmployeeDepartmentHistory] AS DH ON E.[BusinessEntityID] = DH.[BusinessEntityID] INNER JOIN [HumanResources].[Department] AS D ON DH.[DepartmentID] = D.[DepartmentID] INNER JOIN [Person].[Person] AS P ON E.[BusinessEntityID] = P.[BusinessEntityID] WHERE DH.[EndDate] IS NULL AND E.[JobTitle] = @JobTitle END GO By utilising a parameter for our WHERE clause filter on the Job Title, we can pass any valid value back to our stored procedure, immediately making our initial query more versatile across our reporting/business application. And, as a primary bonus, we can now safely take a 10 line query down to 1:\nEXECUTE dbo.uspGetEmployeesByJobTitle @JobTitle = \u0026#39;Senior Tool Designer\u0026#39; So we have established that Stored Procedures are wicked cool awesome - but what does this have to do with Power BI?!? Having worked with SQL Server Reporting Services (SSRS) extensively in the past, I have become accustomed to using Stored Procedures as a mechanism for storing underlying query logic within the database and having a straightforward means of referencing this from my .rdl file. I can only assume from this that this is the \u0026ldquo;norm\u0026rdquo; and preferred method of querying SQL data, as opposed to a direct SELECT statement.\nWhen recently doing some work within Power BI involving Azure SQL Databases, I was, therefore, surprised that there was no option to return data via a stored procedure as default. 
Instead, Power BI would prefer me to directly query the underlying table/view objects:\nThankfully, when inspecting the underlying Power Query used to return an example table from the above, it doesn\u0026rsquo;t use any kind of SELECT query to get the data:\nlet Source = Sql.Databases(\u0026#34;mydatabaseinstance\u0026#34;), AdventureWorks2012 = Source{[Name=\u0026#34;AdventureWorks2012\u0026#34;]}[Data], Production_ProductModel = AdventureWorks2012{[Schema=\u0026#34;Production\u0026#34;,Item=\u0026#34;ProductModel\u0026#34;]}[Data] in Production_ProductModel Unfortunately, the same cannot be said for if you select the Advanced options area and input your own SQL query directly:\nlet Source = Sql.Database(\u0026#34;mydatabaseinstance\u0026#34;, \u0026#34;AdventureWorks2012\u0026#34;, [Query=\u0026#34;SELECT P.[FirstName], P.[LastName], E.[JobTitle], E.[HireDate], D.[Name]#(lf)FROM [HumanResources].[Employee] AS E#(lf) INNER JOIN [HumanResources].[EmployeeDepartmentHistory] AS DH#(lf) ON E.[BusinessEntityID] = DH.[BusinessEntityID]#(lf) INNER JOIN [HumanResources].[Department] AS D#(lf) ON DH.[DepartmentID] = D.[DepartmentID]#(lf) INNER JOIN [Person].[Person] AS P#(lf) ON E.[BusinessEntityID] = P.[BusinessEntityID]#(lf)WHERE DH.[EndDate] IS NULL#(lf)AND E.[JobTitle] = \u0026#39;Senior Tool Designer\u0026#39;\u0026#34;]) in Source I do NOT recommend you use the above method to query your SQL Server data!\nI have spoken previously on the blog in respect to conventions around working with datasets i.e. only get what you need, and nothing else. As I work more and more with Power BI, the tool very much seems to be geared towards flipping this mentality on its head. Power BI has a number of built-in tools that seem to scream out \u0026ldquo;Just get ALL your data in here, we\u0026rsquo;ll worry about the rest!\u0026rdquo;. I realise that the difference between MB and GB these days, from a storage/cost point of view, is minimal; nevertheless, I still believe it is prudent not to put all your eggs in one basket and ensure that your business data is not being stored cavalier-esque within a multitude of different cloud services.\nWith this in mind, it is good to know that you can utilise stored procedures in Power BI. You basically have two ways in which this can be achieved:\nGoing back to the Advanced options screen above on the SQL Server database wizard, you can EXECUTE your stored procedure directly using the following SQL Statement: DECLARE @SP VARCHAR(100) = \u0026#39;dbo.uspGetEmployeesByJobTitle @JobTitle = \u0026#39;\u0026#39;Senior Tool Designer\u0026#39;\u0026#39;\u0026#39; EXEC (@SP) Be sure to specify your database and don\u0026rsquo;t forget the double quotes!\nIf you prefer to use Power Query as opposed to the wizard above, then the following code will also work: let Source = Sql.Database(\u0026#34;mydatabaseinstance\u0026#34;, \u0026#34;AdventureWorks2012\u0026#34;, [Query=\u0026#34;DECLARE @SP VARCHAR(100) = \u0026#39;dbo.uspGetEmployeesByJobTitle @JobTitle = \u0026#39;\u0026#39;Senior Tool Designer\u0026#39;\u0026#39;\u0026#39;#(lf)EXEC (@SP)\u0026#34;]) in Source In both cases, you will be required to authenticate with the database and your result set should return as follows if using the AdventureWorks2012 example database/code:\nFinally, as a best-practice security step, you should ensure that the account connecting to your SQL Server instance is restricted to only EXECUTE the procedures you have specified. 
This can be achieved via the following T-SQL snippet, executed against your database instance:\nGRANT EXECUTE ON OBJECT::dbo.uspMyStoredProcedure TO MySQLServerLogin Conclusions or Wot I Think\nPower BI is increasingly becoming a more relevant tool for traditional Business Intelligence/Reporting Services experienced professionals. The bellwether for this can surely be seen in the current Technical Preview for SQL Server Reporting Services, which includes Power BI reports built-in to the application. Although we have no timescales at this stage at when the next major version of SQL Server will be released, it is reasonable to assume by the end of this year at the earliest, bringing Power BI reports as a new feature. I am really excited about the introduction of Power BI into SSRS, as it would appear to be a match made in heaven - giving an opportunity for those with experience in both products the flexibility to develop a unified, best of breed solution, using traditional .rdl reporting capability and/or Power Query/DAX functionality.\nWith the above on the horizon, the importance of being able to integrate seamlessly with SQL Server and having support for traditional/well-proven practices become crucial indicators of whether this match will be over before the honeymoon. And so, I would hope to see the option to access SQL Server data via Stored Procedures become standard when using the built-in data connector within Power BI. Based on the feedback I have seen online, I\u0026rsquo;d warrant towards how welcome this feature could be and an excellent way of reducing the need for direct coding to achieve a common requirement within Power BI.\n","date":"2017-02-19T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/utilising-sql-server-stored-procedures-with-powerbi/","title":"Utilising SQL Server Stored Procedures with Power BI"},{"content":"For those who are well versed in rolling out solution updates within Dynamics CRM/365 for Enterprise (CRM/D365E), the process will always have a certain familiarity to it, with a few surprises rolled in now and again. Often, the update will proceed as anticipated; sometimes, you may encounter bizarre issues. I can remember a particularly strange incident I had last year where a solution import would get to about 90-95% completion\u0026hellip;and then the green progress bar would suddenly start rolling back to nothing. The import progress window would then hang with no further guidance or error message. To try and determine the root cause, we had to interrogate the importjob entity within the system, which ended up showing the import job progress stuck at 0.15846281% 😕 In the end, we had to escalate the issue to Microsoft for further investigation, but rest assured that if you have not yet witnessed your own curious solution import experience, it\u0026rsquo;s definitely in the post 🙂\nThankfully, if you are new to the whole \u0026ldquo;rolling out solution update\u0026rdquo; thing, you can be assured that the process is relatively straightforward, and mostly without issue. If you have been handed a set of solution import instructions for the first time, though, you may be wondering why something similar to the following step is included:\nGo into the Data Management -\u0026gt; Duplicate Detection Rules page and click Publish on all Duplicate Detection Rules that have a Status Reason of Unpublished\nUnfortunately, after importing a solution update, CRM/D365E will automatically unpublish all of your Duplicate Detection Rules automatically. 
You are therefore required to explicitly publish them again, lest you start to encounter a sudden increase in duplicate records and database storage within your system. The reason why this happens is both understandable and frustrating in equal measure. As outlined in the following MSDN article on the subject:\nA duplicate rule condition specifies the name of a base attribute and the name of a matching attribute. For example, specify an account as a base entity and a contact as a matching entity to compare last names and addresses\nAs part of the above, explicit matchcodes are created for every record that the Duplicate Detection Rule is targeting, based on the current metadata of your CRM/D365E entities and attributes. Because your solution update can potentially alter significant aspects of this metadata, the system automatically unpublishes all Duplicate Detection Rules as a precaution.\nThe above is perhaps trivial in nature, as the actual process of re-publishing all Duplicate Detection Rules is somewhat negligible in effort terms. Where difficulties can arise is if someone innocently overlooks this part of the process or if your system has many different Duplicate Detection Rules, in a mixture of Unpublished/Published state. You would have to specifically make a note of which rules were Published before beginning your solution import so that you can ensure that the correct rules are published after the fact. I would have thought that after so many versions of the product, something would be added to address this - for example, perhaps a checkbox at the start of the Solution Import Wizard that lets you specify whether all currently published rules should be reactivated after the import completes successfully.\nIf you find that the above is an annoyance you could do without, like with many things on the platform, there is a solution that can be deployed in code. The SDK exposes the PublishDuplicateRuleRequest class, which does exactly what it says on the tin - meaning that you can write a plugin that applies this functionality accordingly. The tricky bit comes in determining which Message (i.e. action) on the platform you wish to run this against. CRM/D365E does not expose a SolutionUpdate or SolutionImport message that we can piggy-back onto, so we have to look at the PublishAll message instead - the action that is triggered when you press Publish All Customizations in the system. This is because this is generally the action you will always need to take when importing an (unmanaged) solution. As a result, we can write a plugin class that is triggered on the Post-Operation event of this message to automatically publish all Unpublished Duplicate Detection Rules in the system!\nThe snippet below is adapted from the sample code provided by Microsoft, but has been tweaked as follows:\nA QueryExpression is used as opposed to QueryByAttribute, since we need to query on two separate attributes and their values - statecode and statuscode. You also cannot return an easily accessible count on all results returned with QueryByAttribute. We will see why this is useful in a few moments. The code explicitly checks whether there are any Unpublished rules first before attempting to proceed further - no point in running code unnecessarily! Instead of activating each rule one-by-one using an Execute request, all of the requests are collected together as part of an ExecuteMultipleRequest, given that we now know the performance benefits that this can have.
Tracing has been implemented in liberal amounts, to provide remote debugging from within CRM/D365E. Here\u0026rsquo;s the code - just copy into an empty class file on your plugin project, modify the namespace to reflect the name of your project and you will be good to go!\nusing System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using Microsoft.Xrm.Sdk; using Microsoft.Xrm.Sdk.Query; using Microsoft.Xrm.Sdk.Messages; using Microsoft.Crm.Sdk.Messages; namespace MyPlugin.Plugins { public class PostPublishAll_PublishDuplicateDetectionRules : IPlugin { public void Execute(IServiceProvider serviceProvider) { //Obtain the execution context from the service provider. IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext)); //Get a reference to the Organization service. IOrganizationServiceFactory factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory)); IOrganizationService service = factory.CreateOrganizationService(context.UserId); //Extract the tracing service for use in debugging sandboxed plug-ins ITracingService tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); tracing.Trace(\u0026#34;Tracing implemented successfully!\u0026#34;); if (context.MessageName == \u0026#34;PublishAll\u0026#34;) { PublishRules(service, tracing); } } private void PublishRules(IOrganizationService service, ITracingService tracing) { EntityCollection rules = GetDuplicateDetectionRules(service); tracing.Trace(\u0026#34;Obtained \u0026#34; + rules.TotalRecordCount.ToString() + \u0026#34; duplicate detection rules.\u0026#34;); if (rules.TotalRecordCount \u0026gt;= 1) { // Create an ExecuteMultipleRequest object. ExecuteMultipleRequest request = new ExecuteMultipleRequest() { // Assign settings that define execution behavior: don\u0026#39;t continue on error, don\u0026#39;t return responses. Settings = new ExecuteMultipleSettings() { ContinueOnError = false, ReturnResponses = false }, // Create an empty organization request collection. 
Requests = new OrganizationRequestCollection() }; //Create a collection of PublishDuplicateRuleRequests, and execute them in one batch foreach(Entity entity in rules.Entities) { PublishDuplicateRuleRequest publishReq = new PublishDuplicateRuleRequest { DuplicateRuleId = entity.Id }; request.Requests.Add(publishReq); } service.Execute(request); } else { tracing.Trace(\u0026#34;Plugin execution cancelled, as there are no duplicate detection rules to publish.\u0026#34;); return; } } private EntityCollection GetDuplicateDetectionRules(IOrganizationService service) { QueryExpression qe = new QueryExpression(\u0026#34;duplicaterule\u0026#34;); qe.ColumnSet = new ColumnSet(\u0026#34;duplicateruleid\u0026#34;); ConditionExpression condition1 = new ConditionExpression(); condition1.AttributeName = \u0026#34;statecode\u0026#34;; condition1.Operator = ConditionOperator.Equal; condition1.Values.Add(0); ConditionExpression condition2 = new ConditionExpression(); condition2.AttributeName = \u0026#34;statuscode\u0026#34;; condition2.Operator = ConditionOperator.Equal; condition2.Values.Add(0); FilterExpression filter = new FilterExpression(); filter.FilterOperator = LogicalOperator.And; filter.Conditions.Add(condition1); filter.Conditions.Add(condition2); qe.Criteria.AddFilter(filter); //Have to add this, otherwise the record count won\u0026#39;t be returned correctly qe.PageInfo.ReturnTotalRecordCount = true; return service.RetrieveMultiple(qe); } } } The only caveat with the above is that it is arguably only useful for if you are regularly importing Unmanaged, as opposed to Managed solutions, as the Publish All Customizations option is not displayed on the import wizard for unmanaged solutions. Nevertheless, by rolling out the above into your environment, you no longer need to scrabble around for the mental note you have to make when performing a solution update.\n","date":"2017-02-12T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/automatically-publish-duplicate-detection-rules-dynamics-crmdynamics-365-for-enterprise/","title":"Automatically Publish Duplicate Detection Rules (Dynamics CRM/Dynamics 365 for Enterprise)"},{"content":"It is often the case, as part of any application or database system, that certain record types will be well-suited towards duplication. Whilst this is generally a big no-no for individual customer records or invoice details, for example, there are other situations where the ability to duplicate and slightly modify an existing record becomes incredibly desirable. This is then expanded further to the point where end-users are given the ability to perform such duplication themselves.\nA good example of this can be found within Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E). Email Templates are, in essence, a record type that is duplicated whenever a user selects the Template and creates a new Email record from within the application. Whilst there will always be details that need to be modified once the duplication is performed, having the ability to essentially \u0026ldquo;copy + paste\u0026rdquo; an existing record can generate the following benefits for a business:\nStreamlining and adherence to business processes Efficiency savings Brand consistency CRM/D365E does a pretty good job of making available a number of record types designed solely for this purpose, but a recent real-life example demonstrated a potential gap. A business I was working with was implementing the full sales process within the application - Lead to Opportunity to Quote etc. 
At the Quote stage, the business\u0026rsquo;s existing process would generally involve having a number of predefined \u0026ldquo;templates\u0026rdquo; for Quotes. This was due to the fact that the business would very regularly quote for the same kind of work, often with little or no variation. Out of the box with CRM/D365E, the sales team would have to create a new Quote record and add on every Quote Product line item each time a Quote was required - leading to little or no efficiency benefit from using the application.\nTo get around the issue, I was tasked with creating a means of setting up a number of \u0026ldquo;template\u0026rdquo; Quote records and then having the ability to quickly copy these template records, along with all of their associated Quote Product records, with some minor details changed in the process (for example, the Name value of the Quote). A workflow is immediately the best candidate for addressing the second requirement of this task but would require some additional development work to bring to fruition. I decided then to look, rather nervously, at creating a custom workflow assembly.\nWhy nervously? To be frank, although I have had plenty of experience to date with writing plugins for CRM/D365E, I had not previously developed a custom workflow assembly. So I was a little concerned that the learning curve involved would be steep and take much longer than first anticipated. Fortunately, my fears were unfounded, and I was able to grasp the differences between a plugin and a custom workflow assembly very quickly:\nInstead of implementing the IPlugin interface, your class needs to inherit from the CodeActivity base class. As with plugins, and depending on your Visual Studio version, you can then use CTRL + . to implement your Execute method. Context (i.e. the information regarding the who, what and why of the execution; the User, the Entity and the action) is derived from the IWorkflowContext as opposed to the IPluginExecutionContext. Input/Output Parameters are specified as properties on your class and can be given a label, a target entity and then information regarding the data type that will be passed in/out. For example, to specify an Input Parameter for a Quote EntityReference, with the label Quote Record to Copy, you can use the following snippet: [Input(\u0026#34;Quote Record to Copy\u0026#34;)] [ReferenceTarget(\u0026#34;quote\u0026#34;)] public InArgument\u0026lt;EntityReference\u0026gt; QuoteReference { get; set; } The rest is as you would expect when writing a C# plugin.
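To see how these pieces fit together, below is a minimal sketch of what the shell of a custom workflow assembly class could look like for this scenario - the namespace and class names are purely illustrative, and the actual copying logic would sit inside the Execute method:
using System.Activities;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Workflow;

namespace MyWorkflows.Activities
{
    public class CopyQuoteActivity : CodeActivity
    {
        //Input Parameter pointing at the Quote record that should be copied
        [Input(\u0026#34;Quote Record to Copy\u0026#34;)]
        [ReferenceTarget(\u0026#34;quote\u0026#34;)]
        public InArgument\u0026lt;EntityReference\u0026gt; QuoteReference { get; set; }

        protected override void Execute(CodeActivityContext executionContext)
        {
            //Obtain the workflow context and an Organization Service, in a similar fashion to a plugin
            IWorkflowContext context = executionContext.GetExtension\u0026lt;IWorkflowContext\u0026gt;();
            IOrganizationServiceFactory factory = executionContext.GetExtension\u0026lt;IOrganizationServiceFactory\u0026gt;();
            IOrganizationService service = factory.CreateOrganizationService(context.UserId);

            //Read the Input Parameter value passed in from the Workflow designer
            EntityReference quoteRef = QuoteReference.Get(executionContext);

            //The Retrieve/Create logic to copy the Quote and its Quote Products would then go here
        }
    }
}
Once compiled and signed, the assembly is registered via the Plugin Registration Tool in the same way as a plugin assembly, after which the activity becomes available as a step within the Workflow designer.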
It is good to know that the jump across from plugins to custom workflow assemblies is not too large, so I would encourage anyone to try writing one if they haven\u0026rsquo;t done so already.\nBack to the task at hand\u0026hellip;\nI implemented the appropriate logic within the custom workflow assembly to first create the Quote, using a Retrieve request to populate the quote variable with the Entity details and fields to copy over:\nEntity newQuote = quote; newQuote.Id = Guid.Empty; newQuote.Attributes.Remove(\u0026#34;quoteid\u0026#34;); newQuote.Attributes[\u0026#34;name\u0026#34;] = \u0026#34;Copy of \u0026#34; + newQuote.GetAttributeValue\u0026lt;string\u0026gt;(\u0026#34;quotenumber\u0026#34;); Guid newQuoteID = service.Create(newQuote); The important thing to remember with this is that you must set the ID of the record to blank and then remove it from the newQuote - otherwise, your code will attempt to create the new record with the existing GUID of the copied record, resulting in an error.\nNext, I performed a RetrieveMultiple request based off a QueryExpression to return all Quote Product records related to the existing records. Once I had my results in my qp EntityCollection, I then implemented my logic as follows:\nforeach (Entity product in qp.Entities) { Entity newProduct = product; newProduct.Id = Guid.Empty; newProduct.Attributes.Remove(\u0026#34;quotedetailid\u0026#34;); newProduct.Attributes[\u0026#34;quoteid\u0026#34;] = new EntityReference(\u0026#34;quote\u0026#34;, newQuoteID); service.Create(newProduct); } After deploying to CRM and setting up the corresponding Workflow that referenced the assembly, I began testing. I noticed that the Workflow would occasionally fail on certain Quote records, with the following error message:\nThe plug-in execution failed because the operation has timed-out at the Sandbox Client.System.TimeoutException\nThe word Sandbox immediately made me think back to some of the key differences between CRM/D365E Online and On-Premise version, precisely the following detail pertaining to custom code deployed to Online versions of the application - it must always be deployed in Sandbox mode which, by default, only allows your code to process for 2 minutes maximum. If it exceeds this, the plugin/workflow will immediately fail and throw back the error message above. Upon closer investigation, the error was only being thrown for Quote records that had a lot of Quote Products assigned to them. I made the assumption that the reason why the workflow was taking longer than 2 minutes is because my code was performing a Create request into CRM for every Quote Product record and, as part of this, only proceeding to the next record once a success/failure response was returned from the application.\nThe challenge was therefore to find an alternative means of creating the Quote Product records without leading the Workflow to fail. After doing some research, I came across a useful MSDN article and code example that utilised the ExecuteMultipleRequest message:\nYou can use the ExecuteMultipleRequest message to support higher throughput bulk message passing scenarios in Microsoft Dynamics 365 (online \u0026amp; on-premises), particularly in the case of Microsoft Dynamics 365 (online) where Internet latency can be the largest limiting factor. 
ExecuteMultipleRequest accepts an input collection of message Requests, executes each of the message requests in the order they appear in the input collection, and optionally returns a collection of Responses containing each message\u0026rsquo;s response or the error that occurred.\nSource: https://msdn.microsoft.com/en-us/library/jj863631.aspx\nThrowing caution to the wind, I repurposed my code as follows, in this instance choosing not to return a response for each request:\n// Create an ExecuteMultipleRequest object. ExecuteMultipleRequest request = new ExecuteMultipleRequest() { // Assign settings that define execution behavior: continue on error, return responses. Settings = new ExecuteMultipleSettings() { ContinueOnError = false, ReturnResponses = false }, // Create an empty organization request collection. Requests = new OrganizationRequestCollection() }; foreach (Entity product in qp.Entities) { Entity newProduct = product; newProduct.Id = Guid.Empty; newProduct.Attributes.Remove(\u0026#34;quotedetailid\u0026#34;); newProduct.Attributes[\u0026#34;quoteid\u0026#34;] = new EntityReference(\u0026#34;quote\u0026#34;, newQuoteID); CreateRequest cr = new CreateRequest { Target = newProduct }; request.Requests.Add(cr); } service.Execute(request); Thankfully, after re-testing, we no longer encountered the same errors on our particularly large Quote test records.\nAs a learning experience, the above has been very useful in showcasing how straightforward custom workflow assemblies are when coming from a primarily plugin development background. In addition, the above has also presented an alternative method for creating batch records within CRM/D365E, in a way that will not cause severe performance detriment. I was surprised, however, that there is no out of the box means of quickly copying existing records, thereby requiring an approach using code to resolve. Quotes are an excellent example of an Entity that could benefit from Template-isation in the near future, in order to expedite common order scenarios and help prevent carpel tunnel syndrome from CRM users the world over. 🙂\n","date":"2017-02-05T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/bulk-creating-dynamics-crm365-for-enterprise-records-in-c-create-request-vs-executemultiplerequest/","title":"Bulk Creating Dynamics CRM/365 for Enterprise Records in C#: Create Request vs. ExecuteMultipleRequest"},{"content":"Organisations that deploy Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E) can immediately take advantage of a number of inbuilt functionality, processes and data models that can be re-purposed with minimal effort. Whilst this approach can often lead to more streamlined deployment of your CRM/D365E solution, individuals customising the system should take care not to make the system fit around a business too much; rather, the opposite must be achieved where ever possible and careful analysis should be carried out in the outset to ensure that this balance is maintained. 
Sacrificing sensible business processes to accommodate for the quirks of a particular business system is a major pitfall that should be avoided as part of any major IT system deployment.\nA good example of this can be found in the pricing calculation engine within CRM/D365E, which is utilised by the following entities within the system:\nOpportunity Opportunity Product Quote Quote Product Order Order Product Invoice Invoice Product Rather than having to implement your own logic to generate prices for these entities, businesses can choose to utilise the pricing engine to automatically generate the net total for your Products, calculate appropriate Discounts and then figure out the final total at the end.\nFor those who are dissatisfied with how CRM performs this calculation, you will be pleased to hear that you have the option to override the default pricing engine and specify your own via a C# plugin. More information, and a very handy code example, can be found on our good friend MSDN:\nThe pricing engine in Microsoft Dynamics 365 supports a standard set of pricing and discounting methods, which might be limiting to your business depending on your specific requirements for applying taxation, discounts, and other pricing rules for your products. If you want to define custom pricing for your products in opportunities, quotes, orders and invoices, you can use the CalculatePrice message.\nTo use the custom pricing for your opportunities, quotes, orders, and invoices:\nSet the value of the Organization.OOBPriceCalculationEnabled attribute to 0 (false). You can also use the Sales tab in the system settings area in Microsoft Dynamics 365 or Microsoft Dynamics 365 for Outlook to disable system pricing. More information: Configure product catalog information Create a plug-in that contains your custom pricing code for calculating the price for your opportunity, quote, order, or invoice. Register the plug-in on the CalculatePrice message. Source: https://msdn.microsoft.com/en-us/library/dn817885.aspx\nI think the most key thing as part of the above is not to overlook the simplest step - namely, modifying the setting within CRM/D365E that lets you specify your custom pricing engine in the first place. If this is not set, then you may spend many hours trying to figure out why your beautifully developed plugin is not working! It can be found very straightforwardly in Administration area of the application, on the System Settings page:\nWhilst the code example provided by Microsoft gives you a good flavour of what you can potentially achieve with your own custom logic, I thought I would share two further examples that I recently was involved in developing, which may also prove useful when putting together your own custom pricing engine.\nCalculating Custom Fields/Attributes Arguably one of the biggest benefits of implementing your own custom pricing engine is being able to incorporate additional fields as part of the calculation. A recent real life example best demonstrates this. I was implementing a quoting solution for a business within Dynamics CRM 2015. The organisation was, fortunately, able to utilise much of the out of the box functionality within CRM as part of their existing processes. The only caveat was that they wanted the ability to add a Margin value at the Order level, in a similar vein to the Discount fields currently on the Quote entity - a Discount value and a Discount percentage value. The organisation wanted the option to do both, either or neither i.e. 
have the ability to specify a Margin value AND an additional percentage on top of that.\nAfter configuring the appropriate fields within CRM to store both a currency value for the Margin and a decimal value for the Margin (%), we then proceeded to write some custom code that would achieve this aim. A snippet of this can be found below, which takes an existing total value of all Products on a Quote and then applies the correct calculation. It is worth explaining that the system returns NULL values if there is no data in the field when using the GetAttributeValue method (a fact I was already well aware of), which is why we have to specifically set the variables with a default value of 0 and perform the NULL check:\ndecimal margin = 0; decimal marginPercent = 0; if (quote.GetAttributeValue\u0026lt;Money\u0026gt;(\u0026#34;new_mycustommarginamountfield\u0026#34;) != null) margin = quote.GetAttributeValue\u0026lt;Money\u0026gt;(\u0026#34;new_mycustommarginamountfield\u0026#34;).Value; if (quote.GetAttributeValue\u0026lt;decimal?\u0026gt;(\u0026#34;new_mycustommarginpercentagefield\u0026#34;) != null) marginPercent = quote.GetAttributeValue\u0026lt;decimal\u0026gt;(\u0026#34;new_mycustommarginpercentagefield\u0026#34;); //Calculate margin amount based on the total amount total = total + margin; quote[\u0026#34;totalamountlessfreight\u0026#34;] = new Money(total); service.Update(quote); //Calculate margin percentage based on the total amount decimal marginPercentVal = marginPercent / 100 * total; total = total + marginPercentVal; quote[\u0026#34;totalamountlessfreight\u0026#34;] = new Money(total); service.Update(quote); Calculating Sales Tax CRM/D365E makes the assumption that tax will always be calculated on the Product Detail level. That\u0026rsquo;s why the Quote Product, Opportunity Product, Order Product and Invoice Product entities have a Tax field, demonstrated below on the Quote Product form:\nThere are a few problems with this, however:\nYou cannot set a default Tax for each Product in the system. What this means is that you have to drill down to every Product details entity and populate the Tax manually. Whilst you could look at a Business Rule, Workflow or some custom code to get around this issue, this seems like a rather complicated solution to something that you would expect to be easy to configure. My experience indicates that most companies in the UK calculate tax on the gross amount of an order, and not at an individual Product level. Attempting to change a common practice to fit around a business system is a good example of what I spoke about in the introduction to this blog post. Generally, most organisations will work with a flat rate of Tax for all Products (unless they are dealing with other countries). With this in mind, it seems a little crazy having to set this on an individual basis. By using our own custom calculation logic, we can get around the above and implement a solution that best meets our need.
For example, here is a code snippet that will take the total value of all Products on the Order entity and then calculate the VAT tax amount at 20%, saving the tax-only amount and the Net Total back to the system:\ndecimal vat = 0.20m; decimal total = 0; decimal tax = 0; for (int i = 0; i \u0026lt; ec.Entities.Count; i++) { total = total + ((decimal)ec.Entities[i][\u0026#34;quantity\u0026#34;] * ((Money)ec.Entities[i][\u0026#34;priceperunit\u0026#34;]).Value); (ec.Entities[i])[\u0026#34;extendedamount\u0026#34;] = new Money(((decimal)ec.Entities[i][\u0026#34;quantity\u0026#34;] * ((Money)ec.Entities[i][\u0026#34;priceperunit\u0026#34;]).Value)); service.Update(ec.Entities[i]); } //Calculate tax from the total tax = total * vat; total = total + tax; order[\u0026#34;totaltax\u0026#34;] = new Money(tax); order[\u0026#34;totalamount\u0026#34;] = new Money(total); service.Update(order); It would be disingenuous of me not to point out that the above solution has its own faults - the biggest one being that your code would require manual updating if the tax rate ever changes in the future. You can perhaps get around this issue by instead storing the current tax rate within a CRM entity that can be updated in line with any future changes. Your plugin could then query this entity/attribute each time the plugin is executed.\nConclusions or Wot I Think Whilst the ability to override a key feature within CRM/D365E is incredibly welcome and a great example of how you can leverage the system without compromising on your existing business processes, it is arguable that this process is hardly straightforward. A passing knowledge of C# is mandatory to even begin implementing your own custom pricing engine, as well as good awareness of how CRM/D365E plugins work. This is all stuff that an administrator of the application may struggle to grasp, thereby requiring dedicated resource or knowledge within the business to implement the desired solution. It would be nice to perhaps see, as part of a future version of D365E, the ability to specify a custom pricing engine within the application itself - similar to how Business Rules were introduced and reduced the need for form-level JScript functions to achieve common tasks. Nevertheless, it has been good to discover that the CalculatePrice message is exposed within the application and that the application has the flexibility for end users to modify (and perhaps improve upon 🙂) one of its key features.\n","date":"2017-01-29T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/implementing-custom-calculations-for-sales-entities-dynamics-crmdynamics-365-for-enterprise/","title":"Implementing Custom Calculations for Sales Entities (Dynamics CRM/Dynamics 365 for Enterprise)"},{"content":"The single biggest challenge when developing a reporting solution is data. Invariably, you won\u0026rsquo;t always have one database that contains all the information you need; often, you will need to bring across disparate and unrelated data sources into a common model. This problem can be exacerbated if your organisation has a number of application or database systems from different vendors. Finding a tool that can overcome some of these hurdles is a real challenge. For example, whilst I am a huge fan of SQL Server Reporting Services, the out of the box data connection options are generally limited to common vendor products or ODBC/OLE DB data sources.
Case in point: Finding or even developing a Data Source that can support a JSON stream can be troublesome and complicated to configure. With the landscape as such, the reporting tool that can offer the most streamlined method of overcoming these challenges is the tool that is going to win the day.\nFollowing on from my deep-dive with the product last year, I have been working more and more with Power BI in recent weeks. What I like most about the tool is that a lot of the hassle is taken out of configuring your data sources. Power BI does this by leveraging the existing Power Query language and equipping itself with a large number of Data Source Connectors. The most surprising aspect of this? Microsoft products form only a subset of the options available, with connectors in place for many competitor products from the likes of SAP, Salesforce and Oracle. In my limited experience with the product to date, I have yet to find a data source that it does not support, either as part of a data connector or a manual Power Query.\nA recent work example can best illustrate the above, as well as showcasing some of the built-in functionality (and learning curves!) that come with working with data via Power Query and writing Data Analysis Expressions (DAX). There was a requirement to generate an internal department dashboard for an IT service desk. The dashboard had to meet the following key requirements:\nDisplay a summary of each team member\u0026rsquo;s movements for the current week, including the location of each person on that current day. Each member of the team was already recording their weekly movements within their Exchange calendar as all day appointments, configuring the Subject field for each appointment accordingly. For example, In Office, Working at Home etc. No other all day appointments were booked in the calendars. Query Dynamics CRM and return data relating to Active/Inactive Case records. To be displayable on a TV/Screen, refresh automatically and be exportable as a .pdf document or similar. A CRM Dashboard can achieve about 50-60% of the above, but the key requirements of querying Exchange and exporting the dashboard are much more tricky; whilst it is certainly possible to perform web service requests from within CRM to Exchange, the process would be so convoluted to implement that it is arguably not worth the effort. Likewise, CRM is not particularly friendly when it comes to printing out Dashboards, as you are often left to the mercy of the web browser in question. With all this in mind, we decided that Power BI was the best option and proceeded to bring all the data together using Power Query.\nWe first used the out of the box Exchange connector to query each person\u0026rsquo;s mailbox for all Calendar items, performing two transformations on the data. First, we filtered the result to return Calendar items from the current week and, second, we added a column to identify which Calendar the record derives from (as there is no field on each record to determine this).
We\u0026rsquo;ll see why this is required in a few moments:\nlet Source = Exchange.Contents(\u0026#34;john.smith@domain.com\u0026#34;), Calendar1 = Source{[Name=\u0026#34;Calendar\u0026#34;]}[Data], #\u0026#34;Filtered Rows\u0026#34; = Table.SelectRows(Calendar1, each Date.IsInCurrentWeek([Start])), #\u0026#34;Added Custom\u0026#34; = Table.AddColumn(#\u0026#34;Filtered Rows\u0026#34;, \u0026#34;Owner\u0026#34;, each \u0026#34;John Smith\u0026#34;) in #\u0026#34;Added Custom\u0026#34; Next, we combined all Calendars together into one table - again using Power Query. Table.Combine is a comma-separated list of all tables you want to merge together, so you can add/remove accordingly to suit your requirements. We also take this opportunity to remove unnecessary fields and convert our Start and End field values to their correct type:\nlet Source = Table.Combine({Calendar1, Calendar2, Calendar3}), #\u0026#34;Removed Columns\u0026#34; = Table.RemoveColumns(Source,{\u0026#34;Folder Path\u0026#34;, \u0026#34;Location\u0026#34;, \u0026#34;DisplayTo\u0026#34;, \u0026#34;DisplayCc\u0026#34;, \u0026#34;RequiredAttendees\u0026#34;, \u0026#34;OptionalAttendees\u0026#34;, \u0026#34;LegacyFreeBusyStatus\u0026#34;, \u0026#34;IsReminderSet\u0026#34;, \u0026#34;ReminderMinutesBeforeStart\u0026#34;, \u0026#34;Importance\u0026#34;, \u0026#34;Categories\u0026#34;, \u0026#34;HasAttachments\u0026#34;, \u0026#34;Attachments\u0026#34;, \u0026#34;Preview\u0026#34;, \u0026#34;Attributes\u0026#34;, \u0026#34;Body\u0026#34;}), #\u0026#34;Changed Type\u0026#34; = Table.TransformColumnTypes(#\u0026#34;Removed Columns\u0026#34;,{{\u0026#34;Start\u0026#34;, type date}, {\u0026#34;End\u0026#34;, type date}}) in #\u0026#34;Changed Type\u0026#34; Our CRM data is returned via an adapted version of the query used previously on the blog, taking into account the benefits of using a Saved Query as opposed to FetchXML. No further work is required to manipulate the data once in Power BI, so this won\u0026rsquo;t be covered any further. Our issue is now with the Exchange Calendars. Because the appointments in the Calendars to indicate each person\u0026rsquo;s movement are set as All Day appointments spanning multiple days, we have no way of extrapolating the days in between to determine whether it is the current day. So for example, if the all-day Appointment starts on Monday and ends on Wednesday, we have Monday and Wednesday\u0026rsquo;s date, but not Tuesday\u0026rsquo;s. We, therefore, need to find a solution that determines whether the appointment falls on a specific day of the week - Monday, Tuesday, Wednesday, Thursday or Friday.\nOur first step is to generate a date table covering the entire period we are concerned with. Using this very handy query, we can set up a Power BI function that will allow us to generate just that - in this case, for the whole of 2017:\nWhy do we need this? Because we need to determine for each date in 2017 what day it falls on. For this reason, we now take off our Power Query hat and jam on our DAX one instead 🙂 Close \u0026amp; Apply your queries in Power BI and then navigate to your new date table. Add a new Column, using the following DAX formula to populate it:\nDay of Week (Number) = WEEKDAY(\u0026#39;2017\u0026#39;[Date], 2) The WEEKDAY function is an incredibly handy function in this regard, enabling us to determine the day of the week for any date value. Nice! 
We can now go back to our \u0026ldquo;unified\u0026rdquo; calendar, and perform the following modifications to it:\nAdd on a column that returns a TRUE/FALSE value for each row on our calendar, which tells us if the Start, End or any date between these values falls on a specific day. So, for our IsMondayAllDay field, our DAX formula is below. This will need to be modified accordingly for each subsequent column, by incrementing 1 on the \u0026lsquo;2017\u0026rsquo;[Day of Week (Number)], 1 bit by 1 for Tuesday, 2 for Wednesday etc.: IsMondayAllDay = IF(AND(CONTAINS(CALENDAR([Start], IF([End] = [Start], [End], [End] - 1)), [Date], DATEVALUE(LOOKUPVALUE(\u0026#39;2017\u0026#39;[Date], \u0026#39;2017\u0026#39;[Week Number], FORMAT(WEEKNUM(AllCalendars[Start], 2), \u0026#34;General Number\u0026#34;), \u0026#39;2017\u0026#39;[Day of Week (Number)], 1))), AllCalendars[IsAllDayEvent] = TRUE()), \u0026#34;TRUE\u0026#34;, \u0026#34;FALSE\u0026#34;) A calculated column that tells us whether the current row is today, by referencing each of our fields created in the subsequent step. Similar to above, a TRUE/FALSE is returned for this: IsToday = IF(([IsMondayAllDay] = \u0026#34;TRUE\u0026#34; \u0026amp;\u0026amp; WEEKDAY(TODAY(), 2) = 1) || ([IsTuesdayAllDay] = \u0026#34;TRUE\u0026#34; \u0026amp;\u0026amp; WEEKDAY(TODAY(), 2) = 2) || ([IsWednesdayAllDay] = \u0026#34;TRUE\u0026#34; \u0026amp;\u0026amp; WEEKDAY(TODAY(), 2) = 3) || ([IsThursdayAllDay] = \u0026#34;TRUE\u0026#34; \u0026amp;\u0026amp; WEEKDAY(TODAY(), 2) = 4) || ([IsFridayAllDay] = \u0026#34;True\u0026#34; \u0026amp;\u0026amp; WEEKDAY(TODAY(), 2) = 5), \u0026#34;TRUE\u0026#34;, \u0026#34;FALSE\u0026#34;) We now have everything we need to configure a Measure that can be used on our Dashboard - the Subject of the calendar appointment and a way of indicating that the appointment is today. So our final DAX formula would be as follows for John Smith:\nJohn\u0026#39;s Location (Today) = LOOKUPVALUE(AllCalendars[Subject], AllCalendars[IsToday], \u0026#34;TRUE\u0026#34;, AllCalendars[Owner], \u0026#34;John Smith\u0026#34;) Now, it is worth noting, that the above solution is not fool-proof. For example, if a person has multiple All Day Appointments configured in their Calendar, then it is likely that the Measure used above will fall over. Giving that this is unlikely to happen in the above scenario, no proactive steps have been taken to mitigate this, although you can certainly implement a solution to address this (e.g. use the MAX function, only return records which contains \u0026ldquo;Working\u0026rdquo;, \u0026ldquo;Office\u0026rdquo; or \u0026ldquo;Home\u0026rdquo; in the Subject etc.). Nevertheless, I feel the above solution provided an effective \u0026ldquo;crash course\u0026rdquo; in a number of fundamental Power BI concepts, including:\nPower Query data retrieval and manipulation Power BI Functions DAX Formulas and the difference between Measures and Calculated Columns As a colleague recently said to me, \u0026ldquo;I think we will be using PowerBI a lot more in the future\u0026rdquo;. This is something that I would certainly agree with based on my experience with it so far. 
😁\n","date":"2017-01-22T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/overcoming-date-range-hurdles-powerbidax/","title":"Overcoming Date Range Hurdles (Power BI/DAX)"},{"content":"One of the dangers when working as part of a specific role within a technology-focused occupation is that a full 360-degree knowledge of an application, and some its more subtle nuances, can often be lacking. This is particularly true for those who work with Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E). For example, most people can say with confidence how an entity is created but may be less familiar with the process of removing an entity and the list of things you need to watch out for as part of this process. This can be exacerbated if your role is involved primarily with \u0026ldquo;build a solution\u0026rdquo; projects, where often you will build out components and hand it over to a customer; it is unlikely that, after this point, you will have much involvement on how the solution shapes, evolves and also (arguably) some of the challenges that can be faced when working with the application on a daily basis. This is, I would argue, essential experience and an area that should be addressed if you feel it is lacking in some way.\nI encountered a strange issue recently when performing a tidy-up of an existing Dynamics CRM 2016 solution. The work involved \u0026ldquo;breaking up\u0026rdquo; the solution into more logical groupings, based on business functions. After getting these solution components archived into an unmanaged solution, we then proceeded to delete the components from the primary solution file, one by one.\nDeleting solution components can often be a laborious process, thanks to the applications Dependent Components feature, which prevents you from deleting in-use components from the application. Whilst no doubt a highly beneficial feature to have in place, it can prove to be difficult to unwrap in practice. Assuming your team/business is following sound principals relating to change management, as I have argued for previously on the blog, all CRM Administrators/Customisers should have some experience of doing this. For those who have yet to take the plunge, though, it\u0026rsquo;s important to remember the following:\nDependent Components include pretty much everything that you can find with a Solution file, so you won\u0026rsquo;t need to look further afield in order to track down any components. Relationships can be the trickiest component to remove dependencies for, as the Lookup field on the related entity will need to be removed from all forms and views first. Certain components may need to be deactivated first before they can be safely deleted. For example, Workflows and Business Process Flows. I definitely prefer CRM/D365E having this feature in place, but it can feel like a double-edged sword at times.\nGoing back to the task at hand, we were close to getting all the entities that needed deleting completed, but we encountered the following issue when deleting an entity:\nThe Component Type and Dependency Type fields were the only fields populated with information - Sdk Message Processing Step and Published respectively - so we were initially left stumped at what the issue could be. We did some digging around within CRM to see what the problem is, by first of all querying Advanced Find to return all of the Sdk Message Processing Steps records for the entity concerned. 
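As an aside, if you want to check up front what is still holding on to a particular component before you attempt to delete it, the SDK exposes the RetrieveDependenciesForDeleteRequest message for exactly this purpose. The snippet below is a minimal sketch of this approach for an entity - the method name and console output are purely illustrative, and in a real scenario you would want to inspect the returned dependency records in more detail:
using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

static void CheckEntityDependencies(IOrganizationService service, string entityLogicalName)
{
    //Retrieve the metadata for the entity, as the dependency check needs its MetadataId
    RetrieveEntityRequest metadataRequest = new RetrieveEntityRequest
    {
        LogicalName = entityLogicalName,
        EntityFilters = EntityFilters.Entity
    };
    RetrieveEntityResponse metadataResponse = (RetrieveEntityResponse)service.Execute(metadataRequest);

    //Component Type 1 = Entity; ask the platform what would block a delete of this component
    RetrieveDependenciesForDeleteRequest dependencyRequest = new RetrieveDependenciesForDeleteRequest
    {
        ComponentType = 1,
        ObjectId = metadataResponse.EntityMetadata.MetadataId.Value
    };
    RetrieveDependenciesForDeleteResponse dependencyResponse = (RetrieveDependenciesForDeleteResponse)service.Execute(dependencyRequest);

    //Each returned record describes a component that still relies on the entity
    Console.WriteLine(entityLogicalName + \u0026#34; has \u0026#34; + dependencyResponse.EntityCollection.Entities.Count + \u0026#34; blocking dependencies.\u0026#34;);
}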
There were two records that caught our attention:\nBoth records do not have a Name field populated, just like the Dependent Components highlighted above, and also contain the following useful Description value - Real-time Workflow execution task. This would immediately suggest that the issue relates to a Workflow of some description. But we had already deactivated/deleted all workflows that reference the Entity or its Attributes.\nAfter some further research and a good night sleep, I came back to the issue and remembered some obscure information relating to Business Rules from MSDN and the how they are stored in CRM:\nThe following table describes relevant process trigger entity attributes.\nSchemaName Type Description ControlName String Name of the attribute that a change event is registered for. For other events this value is null. ControlType Picklist Type of the control to which this trigger is bound. The only valid value for this release is 1. This indicates that the control is an attribute. This value only applies when the ControlName is not null. Event String There are three valid values to indicate the event: load, change \u0026amp; save FormId Lookup ID of the form associated with the business rule. This value is null when the rule applies to all forms for the entity that supports business rules. IsCustomizable ManagedProperty Information that specifies whether this component can be customized. You cannot change process trigger records included in a managed solution when the IsCustomizable.Value is false. PrimaryEntityTypeCode EntityName Logical name for the entity that the business rule is applied on. ProcessId Lookup ID of the process. ProcessTriggerId Uniqueidentifier ID of the process trigger record. From the applications point of view, it appears that Business Rules are treated the same as the more typical Processes. This theory is backed up by the fact that Business Rules have to be explicitly Activated/Deactivated, just like a Workflow, Action or other types of process. After going back to the Entity to double-check, we confirmed that there were indeed two Active Business Rules configured; and, by deleting them and checking Dependent Components again, we were safely able to the delete the Entity.\nWhen attempting to reproduce this issue within a test environment, later on, I was able to clarify that the issue does not occur all of the time. From the looks of it, both of the Entities that we were attempting to delete the above had a relationship and the Business Rules in question were directly referencing the Lookup field. So, when reproducing the issue with a standard Business Rule configured (i.e. not referencing any lookup field), I was able to delete the entity successfully. So it is good to know that it is a rare issue and one that will not be commonplace whenever you need to delete an entity. Nevertheless, this issue demonstrates clearly the importance of familiarising yourself regularly with scenarios with CRM/D365E that you are not generally exposed to, within a testing environment or similar. 
Doing this will almost certainly throw up a few things that you can learn at the end of it and better equip yourself for any problems you may face in the future.\n","date":"2017-01-15T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/deleting-entity-with-blank-sdk-message-processing-step-dynamics-crm365-for-enterprise/","title":"Deleting Entity with Blank Sdk Message Processing Step (Dynamics CRM/365 for Enterprise)"},{"content":"I was recently showing a colleague how to use the rather excellent CRM REST Builder Managed Solution, in particular, its ability to generate code snippets for predefined query requests into Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E). During the demo, I noticed the following options under the Query Type drop-down with interest:\nI did some further digging on MSDN to confirm that my suspicions were correct, and I was pleased to be able to confirm the following:\nMicrosoft Dynamics 365 allows you to define, save, and execute two types of queries as listed here.\nQuery type Description Saved Query System-defined views for an entity. These views are stored in the savedquery EntityType. More information: Customize entity views User Query Advanced Find searches saved by users for an entity. These views are stored in the userquery EntityType. More information: UserQuery (saved view) entity Records for both of these types of entities contain the FetchXML definition for the data to return. You can query the respective entity type to retrieve the primary key value. With the primary key value, you can execute the query by passing the primary key value.\nSource: https://msdn.microsoft.com/en-gb/library/mt607533.aspx\nSo to clarify the above, there are 3 ways we can query CRM\u0026rsquo;s/D365\u0026rsquo;s Web Services with FetchXML based queries: either with a direct FetchXML query, by referencing a System View or by referencing a Personal View. The benefits of using a System/Personal view are significant, such as:\nBy having your Web API query setup as a view within CRM, you can utilise it within the application as part of a dashboard, entity view etc. You can reduce the size of your request and obfuscate information relating to your CRM instance (such as entity and attribute names) by using a saved query. Your FetchXML query can be stored within the application, meaning that you don\u0026rsquo;t need to worry about finding alternative means of backing up/storing your query. Knowing the above would have been quite useful during my recent PowerBI exploits involving the CRM/D365 Web API, so this is definitely something that I will be reviewing again in future. If you want to get started using Saved/User queries in the application yourself, there are a few things to decide on in the first instance and slight hurdles to overcome initially, depending on the nature of your FetchXML query.\nSo first things first, how do I create my user/system query in CRM?\nThis will depend on the complexity of the query you are attempting to execute. 
To demonstrate this, let\u0026rsquo;s take a look at FetchXML Query # 1:\n\u0026lt;fetch version=\u0026#34;1.0\u0026#34; output-format=\u0026#34;xml-platform\u0026#34; mapping=\u0026#34;logical\u0026#34; distinct=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;entity name=\u0026#34;phonecall\u0026#34;\u0026gt; \u0026lt;attribute name=\u0026#34;subject\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;statecode\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;prioritycode\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;scheduledend\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;createdby\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;regardingobjectid\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;activityid\u0026#34; /\u0026gt; \u0026lt;order attribute=\u0026#34;subject\u0026#34; descending=\u0026#34;false\u0026#34; /\u0026gt; \u0026lt;filter type=\u0026#34;and\u0026#34;\u0026gt; \u0026lt;condition attribute=\u0026#34;directioncode\u0026#34; operator=\u0026#34;eq\u0026#34; value=\u0026#34;0\u0026#34; /\u0026gt; \u0026lt;/filter\u0026gt; \u0026lt;/entity\u0026gt; \u0026lt;/fetch\u0026gt; The above is a nice and straightforward query to return Phone Call records with a directioncode of \u0026ldquo;Incoming\u0026rdquo;, that can be built as an Advanced Find Personal View or System View very straightforwardly. But things change significantly when we take a look at FetchXML Query # 2 (an adapted query, provided courtesy of MSDN):\n\u0026lt;fetch version=\u0026#34;1.0\u0026#34; output-format=\u0026#34;xml-platform\u0026#34; mapping=\u0026#34;logical\u0026#34; distinct=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;entity name=\u0026#34;lead\u0026#34;\u0026gt; \u0026lt;attribute name=\u0026#34;fullname\u0026#34; /\u0026gt; \u0026lt;link-entity name=\u0026#34;task\u0026#34; from=\u0026#34;regardingobjectid\u0026#34; to=\u0026#34;leadid\u0026#34; alias=\u0026#34;ab\u0026#34; link-type=\u0026#34;outer\u0026#34; /\u0026gt; \u0026lt;filter type=\u0026#34;and\u0026#34;\u0026gt; \u0026lt;condition entityname=\u0026#34;ab\u0026#34; attribute=\u0026#34;regardingobjectid\u0026#34; operator=\u0026#34;null\u0026#34; /\u0026gt; \u0026lt;/filter\u0026gt; \u0026lt;/entity\u0026gt; \u0026lt;/fetch\u0026gt; There is no way that we can specify an outer join query within the CRM interface; so the only way in which we can get this query saved back into CRM is by writing a bespoke C# app that will add it in for us. 
Here is a code example for a method achieving this:\nstatic void CreateSystemView(IOrganizationService service) { Guid viewID; string layoutXML = @\u0026#34;\u0026lt;grid name=\u0026#39;resultset\u0026#39; object=\u0026#39;4\u0026#39; jump=\u0026#39;fullname\u0026#39; select=\u0026#39;1\u0026#39; preview=\u0026#39;1\u0026#39; icon=\u0026#39;1\u0026#39;\u0026gt; \u0026lt;row name=\u0026#39;result\u0026#39; id=\u0026#39;leadid\u0026#39;\u0026gt; \u0026lt;cell name=\u0026#39;fullname\u0026#39; width=\u0026#39;150\u0026#39; /\u0026gt; \u0026lt;/row\u0026gt; \u0026lt;/grid\u0026gt;\u0026#34;; string fetchXML = @\u0026#34;\u0026lt;fetch version=\u0026#39;1.0\u0026#39; output-format=\u0026#39;xml-platform\u0026#39; mapping=\u0026#39;logical\u0026#39; distinct=\u0026#39;true\u0026#39;\u0026gt; \u0026lt;entity name=\u0026#39;lead\u0026#39;\u0026gt; \u0026lt;attribute name=\u0026#39;fullname\u0026#39; /\u0026gt; \u0026lt;link-entity name=\u0026#39;task\u0026#39; from=\u0026#39;regardingobjectid\u0026#39; to=\u0026#39;leadid\u0026#39; alias=\u0026#39;ab\u0026#39; link-type=\u0026#39;outer\u0026#39; /\u0026gt; \u0026lt;filter type=\u0026#39;and\u0026#39;\u0026gt; \u0026lt;condition entityname=\u0026#39;ab\u0026#39; attribute=\u0026#39;regardingobjectid\u0026#39; operator=\u0026#39;null\u0026#39; /\u0026gt; \u0026lt;/filter\u0026gt; \u0026lt;/entity\u0026gt; \u0026lt;/fetch\u0026gt;\u0026#34;; Entity savedQuery = new Entity(\u0026#34;savedquery\u0026#34;); savedQuery[\u0026#34;name\u0026#34;] = \u0026#34;Complex View Test\u0026#34;; savedQuery[\u0026#34;description\u0026#34;] = \u0026#34;Test view to demonstrate how to create a view with a complex FetchXML query\u0026#34;; savedQuery[\u0026#34;returnedtypecode\u0026#34;] = \u0026#34;lead\u0026#34;; savedQuery[\u0026#34;fetchxml\u0026#34;] = fetchXML; savedQuery[\u0026#34;layoutxml\u0026#34;] = layoutXML; savedQuery[\u0026#34;querytype\u0026#34;] = 0; viewID = service.Create(savedQuery); Console.WriteLine(\u0026#34;Created system view \u0026#34; + viewID); } Two things to point out with the above:\nFor the layoutXML, be sure to modify the object value so that it is the correct value for the entity you are working with. Otherwise, although your view will be successfully created within the application, you will be unable to load it correctly from within the interface. You can find a list of all system Entity codes here. For custom Entity codes, you will need to use a tool like the Metadata Browser in the XRMToolBox to determine the correct value. The above code example is using late-bound classes to generate the appropriate data to create the view, contrary to the official sample code provided by Microsoft. I was a little bit unsure initially whether views could be created in the manner, so I was pleased when I was able to confirm the opposite 🙂 With your view created, what\u0026rsquo;s next?\nYou\u0026rsquo;ll need to obtain the database GUID for the view record in CRM. If you have created your view for the complex example above, then you can very easily grab this value by setting a breakpoint in your application in Visual Studio and accessing the viewID value. An alternative way is via the application:\nFor System Views, navigate to the View within the solutions page and open it up as if you were about to edit it. Maximise the window to full screen by pressing the F11 key. The URL of the page should now be visible if you move your mouse to the top of the screen, and available for copying. 
It should look something like this: http://mycrminstance/tools/vieweditor/viewManager.aspx?appSolutionId=%7bFD140AAF-4DF4-11DD-BD17-0019B9312238%7d\u0026amp;entityId=%7bDC6574CB-92CE-446C-A5D6-885A75107D52%7d\u0026amp;id=%7b6979F60B-D5D4-E611-80DC-00155D02DD0D%7d\nThe GUID of the view will be the last query parameter string, with the encoded curly braces values (%7b and %7d) removed. So, based on the above, the GUID is:\n6979F60B-D5D4-E611-80DC-00155D02DD0D\nPersonal Views are a little more tricky. The most straightforward way I can think of obtaining this is by going to a Users list of Active Saved Views, exporting the list to Excel via the Static Worksheet (Page Only) button and then grabbing the GUID from the hidden Cell A in Excel: This would obviously require you to have access to the Personal View, either via user login details or by having the user share the view to you. An alternative way to get this information would be via querying the Saved View entity via FetchXML/T-SQL.\nOnce you\u0026rsquo;ve got your GUID, you\u0026rsquo;re all set - you can now build your web service request in the language/format of your choosing. An example via XmlHttpRequest in JScript can be found below:\nvar req = new XMLHttpRequest(); req.open(\u0026#34;GET\u0026#34;, Xrm.Page.context.getClientUrl() + \u0026#34;/api/data/v8.1/leads?savedQuery=6979F60B-D5D4-E611-80DC-00155D02DD0D\u0026#34;, true); req.setRequestHeader(\u0026#34;OData-MaxVersion\u0026#34;, \u0026#34;4.0\u0026#34;); req.setRequestHeader(\u0026#34;OData-Version\u0026#34;, \u0026#34;4.0\u0026#34;); req.setRequestHeader(\u0026#34;Accept\u0026#34;, \u0026#34;application/json\u0026#34;); req.setRequestHeader(\u0026#34;Prefer\u0026#34;, \u0026#34;odata.include-annotations=\\\u0026#34;*\\\u0026#34;\u0026#34;); req.onreadystatechange = function() { if (this.readyState === 4) { req.onreadystatechange = null; if (this.status === 200) { var results = JSON.parse(this.response); } else { Xrm.Utility.alertDialog(this.statusText); } } }; req.send(); The actual request header should resemble the below:\nGET /JG/api/data/v8.1/leads?savedQuery=6979F60B-D5D4-E611-80DC-00155D02DD0D HTTP/1.1 OData-MaxVersion: 4.0 OData-Version: 4.0 Accept: application/json Prefer: odata.include-annotations=\u0026#34;*\u0026#34; Accept-Language: en-GB Accept-Encoding: gzip, deflate Connection: Keep-Alive Cookie: ReqClientId=9f48373a-aa68-462c-aab0-15ebd9311ce4; persistentNavTourCookie=HideNavTour; dea5e364-6f18-e611-80b5-00155d02dd0d_08f6129c-ce5b-4b98-8861-b15d01523fe1=/Date(1472324901665)/; excelDownloadToken=-1 Encapsulating your CRM/D365E queries as part of a System or Personal View is an effective way of reducing the size of your web service requests and simplifying the contents of the request whilst in transit. I would argue that a System View is a far better candidate for this job compared to a Personal View. 
Unless you have a specific business requirement not to have the view available to all users within the application, utilising this could save a lot of troubleshooting and administrative overhead down the line compared with a Personal View (if, for example, the person who originally created the view leaves the business).\n","date":"2017-01-08T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/utilising-views-with-the-crmdynamics-365-for-enterprise-web-api/","title":"Utilising Views with the CRM/Dynamics 365 for Enterprise Web API"},{"content":"Often, when working with the latest features as part of Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E), it very much feels like handling a double-edged sword. Whilst the functionality they bring to the table is often impressive, there is typically scant information available when things go wrong - either through official channels or via online forum/blog posts. This can often act as a major barrier to early adoption, particularly when the business benefit of such adoption far outweighs any wasted time or extraordinary effort.\nOne of these new features is Word Templates, a feature that I have blogged about previously - particularly in how they compare against the more traditional SQL Server Reporting Services (SSRS) Reports that CRM/D365E offers developers. They present CRM customisers with a much more familiar and equally powerful means of creating common templates that can be run against specific CRM record types, providing largely similar functionality to SSRS Reports. For those who have not been fortunate enough to have previous experience using SSRS, Word Templates present a far more accessible and friendly approach to addressing documentation needs within CRM/D365E.\nWhen configuring access to Word (and indeed Excel) Templates for your users, you generally only need to be concerned with two sets of permissions - an adequate level of Read permissions against the entity that the template is being run against (Lead, Account etc.) and the Document Generation privilege, in the Business Management area on the Security Role window:\nWhen we were recently attempting to set up access to Word Templates for a specific group of users on CRM 2016 Online (8.1), we first verified that all of the above privileges had been granted - but we still encountered a strange issue when attempting to generate the Word Template:\nThis message should be familiar to those who work with the application frequently, as it is generally one of the error types thrown back when a database error has occurred - typically a timeout error or similar. What is distinctly not familiar about the above is the fact that we are unable to select the Download Log File button. Doing so would generally present us with a sufficiently detailed error message about the underlying problem. Without this, the issue becomes significantly more tricky to diagnose.\nBefore escalating the issue further with Microsoft, we were able to diagnose and observe the following:\nUsers that had been assigned the System Administrator security role were able to generate the template without issue Users within the root Business Unit, likewise, had no issue generating the template We were unable to replicate the issue within a separate environment that had been configured the exact same way (same Business Units, Security Roles etc.) When running a Fiddler trace whilst reproducing the issue, no additional error information was exposed behind the scenes.
Fiddler, for those who are unaware, is one of the best tools to have in your arsenal when diagnosing web service request issues in CRM. As the act of generating a Word Template would (presumably) cause a web service request, it was hoped that something additional clues could be gathered by using Fiddler. Due to the limitations of CRM 2016 Online compared with On-Premise CRM 2016 (i.e. we had no way in which we could interrogate the SQL database for the instance), we had to escalate the case to Microsoft in order to provide a resolution. And, as is generally the case in these matters, the proposed solution was informative and surprising in equal measure.\nThe support engineer assigned to the case took a copy of our instance database and deployed into a test environment, to see if the issue could be reproduced. They also have the benefit of being able to access information regarding the instance that I would imagine most CRM Online administrators would give their right hand for. Because of this, we were able to determine the underlying error database error message that was causing the problem:\nThe data type image cannot be used as an operand to the UNION, INTERSECT or EXCEPT operators because it is not comparable\nThe error is currently an acknowledged bug in CRM, which is targeted for resolution early in 2017. In the meantime, the engineer pointed me towards a tool that I was previously unaware of - the Dynamics CRM Organization Settings Editor. I was already aware that there are a number of settings relating to a CRM/D365E organisation that can be modified via PowerShell/cmd line executable for On-Premise deployments only. These settings can achieve a number of potentially desirable changes to your CRM Organisation, some of which cannot be achieved via the CRM interface alone - such as changing whether emails are sent synchronously or asynchronously (SendEmailSynchronously) or modifying the number of elements that are displayed in the tablet app, such as fields (TabletClientMaxFields), tabs (TabletClientMaxTabs) and lists (TabletClientMaxLists). In order to make these changes, there is a tool that you can download from Microsoft that enables you to change these settings via a cmd line window - but, arguably, the Dynamics CRM Organization Settings Editor is a far simpler tool to use, given that it is a managed solution that enables you to modify all of the settings from within CRM/D365E. Regardless of which tool you use, the solution suggested by the support engineer (which resolves the problem, incidentally) is outlined below, using the Dynamics CRM Organisation Settings Editor. It is worth noting that, although the KB article above does not specifically reference this, I was advised that using the tool is unsupported by Microsoft. Therefore, any changes - and issues that may be caused as a result - are made at your own risk:\nInstall the managed solution file into CRM and then open up the Solution. You will be greeted with the Configuration page, which should look similar to the below: Navigate to the EnableRetrieveMultipleOptimization setting, which should be set to the default of not set. Click Add to modify the setting. You will be asked to confirm the change before it will take effect: Once you click OK, the default value of 1 will be applied to this setting. 
Fortunately, this is the precise setting we need to get things working, so verify that this is indeed set to 1 on your instance and then close out of the solution: Now, when you refresh CRM for the affected user, you should be able to download the Word Template without issue.\nAlthough we can expect this error to be resolved shortly as part of the next update to D365E, hopefully, the above workaround will help others who come across the same issue and allow you to use Word Templates without issue within your CRM/D365E environment 🙂\n","date":"2017-01-01T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/sql-server-error-when-downloading-word-template-dynamics-crm-2016dynamics-365-for-enterprise/","title":"SQL Server Error When Downloading Word Template (Dynamics CRM 2016/Dynamics 365 for Enterprise)"},{"content":"As we come to an end of another year, the Christmas period provides an excellent opportunity to review what\u0026rsquo;s happened over the year and to plan effectively for what\u0026rsquo;s in store during the new year. My 2016 has been eventful, to say the least, presenting a good mixture of excitement, new challenges and learning experiences. 2017 looks to present a number of new opportunities for me to continue working with Dynamics CRM/Dynamics 365 for Enterprise (CRM/D365E), to continue developing the blog further with new content and for me to widen my knowledge more generally of Microsoft technologies in the cloud. Here\u0026rsquo;s my top 6 list of closing observations and thoughts for 2016:\nYou can\u0026rsquo;t learn everything about CRM/D365E, so don\u0026rsquo;t even bother trying I have really enjoyed working with CRM/D365E over the past couple of years. As I\u0026rsquo;ve discussed previously on the blog, it gives those who have \u0026ldquo;dabbled\u0026rdquo; in various different Microsoft technologies an open and accessible canvas to work with a number of different technologies simultaneously. As a result, the sheer breadth of potential specialisations within CRM was staggering; with this increased tenfold following the introduction of D365E. There is no point, therefore, in needlessly stressing yourself out, attempting to become a CRM/D365E master. This is particularly true given that the rate of changes to the product mean that it is literally impossible to keep up. Instead of being frustrated, you should take joy and satisfaction at whenever you learn something nuanced or niche about the product and focus your study towards either a technical (i.e. developer) or functional (i.e. Sales, Service etc. areas of the product) specialist. There is plenty for you to get your teeth stuck into either way.\n2016 has been the year for CRM/D365E There have been a lot of developments this year with the product formerly known as Dynamics CRM. This blog has covered the vast majority of these changes over the past year, such as:\nBrand new exams for Dynamics CRM 2016, including the splitting out of the Applications exam into a Sales and Service exam The release of the Dynamics CRM 2016 Spring Wave, which brought about the introduction of a number of new features, such as Portals The announcement of CRM\u0026rsquo;s migration into the new Dynamics 365 for Enterprise range of applications The introduction of a brand new version of the Developer Toolkit for Dynamics 365 in beta, the first update to this toolkit since Dynamics CRM 2013 A lot to take in, I\u0026rsquo;m sure you\u0026rsquo;ll agree! And this seems unlikely to let up in 2017 either. 
We can look forward to the following next year:\nThe release of Dynamics 365 for Business, Microsoft\u0026rsquo;s successor product for CRM targeted at small to medium size businesses. There is currently scant information available regarding this product, so I am keen to find out more about this at the first available opportunity. The (anticipated) release of brand new exams for Dynamics 365 An expected spring update to Dynamics 365 for Enterprise, which is likely to take the product to its next major version Now is certainly the best time to be involved in working with CRM/D365E, with the enhanced opportunity for those working with the product to develop their skills further and get involved as part of various projects.\nDon\u0026rsquo;t be afraid to make a big change\u0026hellip; Opportunities will always present themselves - whether it\u0026rsquo;s learning about a new technology, getting involved as part of a project at work or even moving from your current role to an entirely different one. These should always be explored eagerly, even if you are daunted by the change that they may make to your current circumstances.\n\u0026hellip;but also don\u0026rsquo;t be afraid to admit if this was a mistake Whenever we set out to make a major change - whether in our professional or personal life - we always have the best intentions on where we want to end up. This can often be borne out of a degree of frustration, which can be compounded further if this change does not deliver what we had hoped for. The important thing to remember with this is not to get angry but deal with the situation calmly, with control and take pressing action to get yourself in a place where you are happy. I am often reminded of one of Tony Robbin\u0026rsquo;s key messages - take massive action. Do whatever you feel is necessary to get you to the place where you want to be. And don\u0026rsquo;t be afraid to look backwards as part of this. Certainly, from a professional viewpoint, you should always be ensuring that you never burn you bridges with past colleagues/companies that you have worked with, judge any opportunity on its individual, unbiased merit and never be afraid to admit if you made the wrong decision. Instead, what you have done is had a significant learning experience that has helped to develop you further.\nYour team is the most important thing about your job A lot of studies point to the importance of a line manager in an employees happiness and engagement. Whilst this is undoubtedly important, I think this only tells half the story. What I have come to learn this year is the importance of having good and reliable team members surrounding you - including both colleagues who report to you and those who you report to. Having that almost instant rapport with other colleagues, in terms of how they approach daily tasks, sharing the same values as you and in covering your back, is the single most important thing about your role. If this is not present or not working within your current team, then you very much need to take massive action to get the team in the place where they need to be or start evaluating your options to extricate yourself away from the team. You should ask yourself the following questions when attempting to determine whether your team is functional and effective or the complete opposite:\nDo you colleagues communicate openly and honestly with you on a day to day basis? Does your line manager give you the freedom and latitude to complete your job in the best way you see fit? 
Is your line manager always approachable? When things go wrong, how do your colleagues work towards a resolution? Do they instantly start pointing the finger or do they get actively involved in providing a resolution? What did your colleagues do (if anything) to prevent things from going wrong in the first place? What is the team dynamic like? Is everyone in your team professional in how they conduct themselves? Does everyone have a clear vision of what the team is working towards, and how this fits in with the wider business? If the answer is no or nothing to most, if not all, of the above, then you should start to think carefully about how you can make a positive change to the working environment.\nLooking ahead to 2017: The blog At the start of the year, I set myself a personal challenge: start a CRM focused blog and publish at least 1 post a week. So far, I think I\u0026rsquo;ve done a pretty good job meeting this challenge, although I obviously cannot comment on the quality of the content\u0026hellip; 🙂 Whilst I am fully intending to continue posting in the New Year and the beyond, I am currently contemplating some changes to the blog. With the introduction of Dynamics 365, the blog name will soon become retro and outdated. So a new name will likely be required, and possibly a shift in focus towards a more general Microsoft technology focused blog. Watch this space for more info, but for those who have stuck around this year and are still reading, then thank you for your support! I hope I can continue to entertain you in some small fashion in 2017 as well.\nRegardless of whether you are a new or existing visitor to the site, I wish you a very Merry Christmas and a Happy New Year!\n","date":"2016-12-25T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/its-been-interesting-2016-in-review/","title":"It's been...interesting: 2016 in Review"},{"content":"Getting to grips with how to use Dynamics CRM/365 for Enterprise (D365E) is no easy feat. You can imagine just how difficult it is for an end user to get to grips with how the application works and functions; with more detailed knowledge around customisation and development being an entirely different ball game altogether. Compounding this problem further is the fact that the product has evolved at an increasingly more rapid pace over recent years, to the point where it is literally impossible to become a master of everything that you can do within CRM/D365E. Those venturing into the product for the first time may find their learning journey significantly simplified if they already have a good general knowledge about some of the underlying technology that powers CRM/D365E. This was certainly true in my case; I had a good background already in managing Office 365, writing SQL queries/reports and some experience with C#. This is all incredibly useful knowledge to have in your arsenal and is all directly applicable towards CRM/D365E in some way. For those who are getting to grips with the product for the first time, either without this previous experience or as part of an apprentice/graduate type role, your journey may not be as swift and issue-free. With this in mind, here\u0026rsquo;s my list of essential knowledge that you can add to your own \u0026ldquo;swiss army knife\u0026rdquo; of personal knowledge. 
Experience and good knowledge of these technologies will not only help you greatly in working with CRM/D365E, but present an excellent learning opportunity for Microsoft technologies more generally and something that you can add to your CV with pride:\nSQL Server What is it? : SQL Server is Microsoft\u0026rsquo;s proprietary database knowledge, based on the ANSI standard. SQL stands for Structured Query Language and is one the most widely used database programming languages on the planet.\nWhy Knowing It Is Useful: The underlying database technology that CRM/D365E uses is SQL Server, so having a general awareness of relational database systems work and how SQL Server differs from the standard goes a long way in understanding what is capable from a customisation/development viewpoint. For example, you can very quickly grasp which data types the application supports, as they will all ultimately be based on a supported SQL Server column data type. If you are running on-premise versions of CRM/D365E, then knowledge of SQL Server immediately moves from being a nice bonus to an essential requirement. This is because administrators will need to have good knowledge of how to manage their Dynamics CRM database instance, perform backups and also, potentially, write transact-SQL (T-SQL) queries against your database for reporting or diagnostic work.\nRecommended Area of Study: Focusing your attention towards SQL Server Reporting Services (SSRS) report writing will benefit you the most. Through this, you can begin to establish good knowledge of how SQL Server databases work generally, and be in a position to write FetchXML for Online/On-Premise deployments of the application or Transact-SQL (T-SQL) based reports for On-Premise only. Having a good awareness of what is capable via a standard SQL query will also hold your good stead when working with FetchXML, as you can immediately make a number of assumptions about what is possible with FetchXML (for example, filtering results using an IN block containing multiple values and performing grouping/aggregate actions on datasets)\nOffice 365 What is it? : Office 365 is Microsofts primary - and perhaps most popular - cloud offering for businesses, individuals or home users. Through a wide array of different subscription offers, home and business users can \u0026ldquo;pick \u0026rsquo;n\u0026rsquo; mix\u0026rdquo; the range of solutions they require - from Exchange-based email accounts to licenses for Microsoft Visio/Project, through to PowerBI.\nWhy Knowing It Is Useful: Although it is arguable that knowledge of Office 365 is not essential if you anticipate working with on-premises versions of the application, you may be doing yourself a disservice in the long term. Microsoft is increasingly incentivising organisations to move towards the equivalent cloud versions of their on-premise applications, meaning that as much knowledge as possible of how CRM/D365E Online works in the context of Office 365 is going to become increasingly more mandatory. If you are looking to secure a career change in the near future, and have not had much experience with Office 365, then this is definitely an area that you should focus on for future learning. 
From a day-to-day management point of view for the Online version of the product, some basic awareness of how to navigate around and use Office 365 is pretty much essential if you are going to succeed working with the product on a day-to-day basis.\nRecommended Area of Study: Spin up a D365E trial, and you can very quickly start getting to grips with how the product sits within the Office 365 \u0026ldquo;ecosystem\u0026rdquo;. Practice licensing users, configuring security group level access to your D365E trial tenant and modify the details on Office 365 user accounts to see how these details are synced through into D365E. The Microsoft Virtual Academy also has a number of general courses related to Office 365 however, due to the frequent updates, it may not always be in-line with the current version. The official curriculum/certification paths for Office 365 may also suffer the same from this but are worthwhile in demonstrating your experience and ability to integrate D365E with the various related Office 365 services.\nActive Directory What is it? : For the rookie, intermediate and experienced IT admins, Active Directory needs no introduction. It is essentially Microsoft\u0026rsquo;s implementation of the Lightweight Directory Access Protocol (LDAP), having first being introduced in Windows Server 2000, providing a means of managing user, security and access rights for domain objects. There are now two distinct versions of Active Directory that are available - the more traditional, Windows server based, on-premise Active Directory and Azure Active Directory, which is utilised primarily by Office 365.\nWhy Knowing It Is Useful: User account records for both On-Premise and Online versions of CRM/D365E use Active Directory objects, with a number of important information synchronised between an Active Directory user and the equivalent User entity record. For example, as indicated in this MSDN article, the only way in which you can synchronise a user\u0026rsquo;s Job Title through to CRM/D365E is by updating the equivalent field on the Azure Active Directory. Active Directory objects are also the only way in which you can authenticate with the application via the Web Interface or other means, with no option to create a database user or other kind of authenticated user type.\nRecommended Area of Study: It\u0026rsquo;s free to set up your own Azure Active Directory, so this is an excellent starting point for getting to grips with the technology. There\u0026rsquo;s also nothing preventing you from downloading a trial of Windows Server and installing the Active Directory server role on this machine. Once configured, you can then start to create users, update attributes, configure permissions and setup roles that contain collections of privileges. If you already have an Office 365 tenant with CRM/D365E Online, then you can use the Office 365 portal to manage your user accounts and test the synchronisation of attribute values from the Active Directory through to the application.\nPowerShell What is it? : A good way to remember what PowerShell is that it is essentially a blue command prompt window 🙂 . Traditionally only being relevant and important for those working extensively with Windows Server or Exchange, PowerShell is now increasingly important as part of administrating on-premise CRM/D365E, Office 365 and Azure, to name a few. 
Indeed, one of the major shock announcements this year was that PowerShell became open sourced and can be installed on Linux; representing the increasing demand and importance of Linux-based resources within the Microsoft cloud.\nWhy Knowing It is Useful: Similar to SQL Server, PowerShell is something that is instantly more applicable for on-premise CRM/D365E deployments. For example, the only way to modify the default number of Dashboard items is via executing the Get-CRMSetting cmdlet against your on-premise organisation. I would also, again, argue having a general awareness of PowerShell can help greatly when performing administration work against an Office 365 tenant that contains a CRM/D365E organisation, such as user provisioning or license assignment. If you are utilising the Azure Service Bus to integrate CRM/D365E for Azure-based applications, then PowerShell immediately becomes a desirable skill to have in your arsenal, allowing you to remotely administer, deploy or update Azure resources programmatically.\nRecommended Area of Study: The fact that PowerShell is now open sourced means that there is a plethora of online tools and guides to refer to, and you can be assured that you can get it working on your platform of choice. The GitHub page for PowerShell is a great place to get started. Beyond that, you have a few options about how you can practice further. If you have spun up a D365E trial, then you can choose to hook up PowerShell to Office 365 to see what you can do from a remote management perspective (such as granting Send On Behalf permissions for a shared mailbox). Alternatively, you can run it from your local Windows machine, connect it up to a Windows Server instance or attempt to create new services in Azure and experiment that way.\n","date":"2016-12-18T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/becoming-a-dynamics-crm365-for-enterprise-swiss-army-knife-essential-study-areas/","title":"Becoming a Dynamics CRM/365 for Enterprise Swiss Army Knife: Essential Study Areas"},{"content":"The recent releases of Dynamics CRM have very much been missing something for developers who work with the product. Whilst a number of exciting new features have been added to the product - the Web API, Interactive Service Hub and Portals, to name a few - there very much feels to be a lack of attention towards surrounding support tools to give developers a head start in progressing their careers and facilitating more agile and efficient development of bespoke CRM solutions. Exams are one area that has been guilty of this, with no up to date exam for developers released for over 3 years. In today\u0026rsquo;s modern world, 3 years is a lifetime of change and can leave developers in a situation where they are not aware of the latest technologies and potentially developing solutions that will soon be deprecated or lead to additional development time and administrative headroom to manage.\nWith this in mind, it is pleasing to hear that an updated version of the Visual Studio Developer Toolkit will be released for Dynamics 365 and that a beta version of this tool is now available to be downloaded. 
For those who have not used the earlier version of this tool for Dynamics 2013 and earlier, it provides developers the mechanism to create Visual Studio project templates that can store all of your related development assets, facilitate the deployment of these assets to your CRM instance from within Visual Studio and allow you to generate early-bound class files of your CRM entities and the requisite template class files for your Plugins; enabling you to focus more on developing your business logic. I have some experience using the tool myself in a sparing manner and have found it somewhat cumbersome to use. For example, I have had issues where I have had to sign into a development CRM system every time the project loads and I have not found it straightforward to use alongside existing CRM solutions. The tool is also only compatible with Visual Studio 2012 and earlier, which means developers running the latest version of Visual Studio will miss out on being able to use the toolkit. Nevertheless, the ability to generate the required code for your plugins in a quick manner is a major boon and, with the new release, a major opportunity is present in order to improve the tool and resolve some of its glaring issues. I\u0026rsquo;ve been taking a look at the Beta release of the Dynamics 365 Toolkit, and here are some of the new features which I am most excited about and feel will be a big help to developers in the months ahead:\nTemplates for Mobile App Development For most standard deployments, the out of the box Dynamics 365 for Tablet/Mobile app will be sufficient for your users. You could even look at developing a simple app using PowerApps. For more advanced scenarios, the SDK would be your next port of call in order to develop a mobile app that connects up with CRM/D365E. This is a popular approach to take when developing a mobile app with a specific function, as it enables you to leverage the full functionality of CRM/D365\u0026rsquo;s processes as part of your app, whilst ensuring that the data model conforms to a familiar, SQL Server-based model. The analogy I have used before with other developers is that developing for Dynamics CRM is like developing for SQL Server on steroids 🙂\nPreviously, there were some code examples included as part of the SDK in order to help developers \u0026ldquo;get started\u0026rdquo; with developing a Windows Store, iOS and/or Android app that connects to CRM. Now, the Dynamics 365 Toolkit includes a number of Project templates that help with developing a mobile, store and/or universal app:\nThe only caveat with these templates is that they are designed solely for Windows-based devices; for a truly cross-platform application, then you would need to look at utilising a tool like Xamarin in order to meet your requirements.\nConfiguring Paths to SDK/Plugin Registration Tool This is a minor new feature, but something that is important and, arguably, essential if you are frequently developing new CRM/D365E developer solutions. 
Within the settings page for the toolkit, you can now specify the location for your SDK DLL\u0026rsquo;s and Plugin Registration tool; which will then be used across all of the projects you create in Visual Studio moving forward:\nThis is a huge time-saving step and removes a rather tedious task that is often involved when it comes to setting up your Visual Studio projects.\nConfigure Microsoft Dynamics 365 Solution Wizard When you now create a new Dynamics 365 solution template in Visual Studio, you are greeted with a simple and clear wizard that helps facilitates the following scenarios:\nSpecification of a persistent connection to CRM/D365E, that can then be re-used across different projects within Visual Studio and provides a familiar UI experience with the existing tools within the SDK (such as the Plugin Registration Tool): The ability to select or create a brand new solution from within Visual Studio (previously, you only had the option of selecting an existing solution): Granular level control of which templates to include in your project - including Plugins, Custom Workflow Assemblies or Web Resources/Assets: With all these changes put together, the process of setting up new projects that are targeting a single CRM environment is greatly simplified and you can ensure that your project contains just the assets that you need.\nWhy the Developers Toolkit is important One of the major hurdles that developers generally face is having to deal with all of the stuff that is non-development related - setting up environments, installing development tools and resolving environment configuration issues, to name but a few. Whilst all of this is useful experience and gives developers a flavour for how a potential application deployment will look, it can often get in the way of a developer\u0026rsquo;s primary responsibility; to just code. This is one of the reasons why DevOps is becoming an increasingly more important area of focus for businesses who employ developers, as having an effective DevOps strategy, plan and resource will ensure greater productivity from your existing resource and help to foster an environment where projects are being run the \u0026ldquo;right\u0026rdquo; way. The Developers Toolkit is an important tool in your DevOps arsenal, as it works towards meeting all of the above objectives and begins to foster an approach where set standards are followed across multiple projects. It also helps take out the administrative effort often involved with, for example, setting up a Plugin class file manually within Visual Studio. Although the Dynamics 365 Developers Toolkit is still in beta, and not ready for Production use, I would very much hope to see a full release in the near future. Tools of this nature (such as XRMToolBox and the Ribbon Workbench) help to encourage greater efficiency, which is often essential for many IT projects these days.\n","date":"2016-12-11T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/whats-new-in-the-dynamics-365-developer-toolkit/","title":"What's New in the Dynamics 365 Developer Toolkit"},{"content":"I was rather surprised to see, when browsing through the software options available to MSDN subscribers, that a non-preview version of Windows Server 2016 was available to download. Further investigation proved that Windows Server had been quietly released in the middle of October, something which I made reference to as part of a previous post on Ignite 2016. 
For those who have found Windows Server 2012 to be a less than ideal experience, this will likely be welcome news. Some of the new features that are available as part of Windows Server 2016 include:\nThe return of the traditional Start menu: This is no doubt the most pleasing and welcome change to Windows Server, returning Windows Server to a more familiar user environment. Shielded Virtual Machines for Hyper-V: Windows Server 2016 offers additional encryption and security options, that allow you to protect business-critical or sensitive server instances in Hyper-V from being exposed across your network. General Virtual Machine Improvements: This release is very much geared towards widening the options available from a Hyper-V virtualisation point of view, including better performance, increasing accuracy for clock/date settings on virtual machines and \u0026ldquo;hot spare\u0026rdquo; type options for swapping out system resources on a Hyper-V image, without taking the virtual machine down. Identity Management Updates: A whole swathe of updates to many of the common Identity services offered by Microsoft, including Active Directory, Active Directory Federation Services and Active Directory Certification Services. Server administrators will need to start performing exercises to determine a possible upgrade path from older versions of Windows Server to newer versions. Whilst there is not significant rush at this stage to get this done, those who are running Windows Server 2008 instances will have additional impetus to consider upgrading in the near future.\nLet\u0026rsquo;s walk through the actual upgrade process, to determine just how simple or complicated the process is. In the examples below, I\u0026rsquo;ll be using a standard Windows Server 2012 R2 environment, that does not have any Server Roles installed.\nAfter running the Autoplay, the installation will begin, and you will be prompted first to check for updates before beginning the install:\nIt is always best to get the latest updates, in case any important security patches have been released, so I would recommend keeping the setting as default. If you are in a rush, though, you can opt to do this later, as the install time will be impacted severely. You can also indicate whether you want to send anonymous information to Microsoft about the install process.\nAfter checking for updates, the next major step is entering your Product Key:\nWhat I quite like about this, and what I believe is a new addition, is that you will be alerted in the message box below if you enter any invalid characters - quite handy, as it means you can correct any errors before entering the product key completely:\nNext, you have to decide what type of install you want to perform - a standard or Server Core installation:\nWhat\u0026rsquo;s worth noting is that you get this option, regardless of whether you are performing the upgrade on a standard or Server Core installation. So if you have determined in the mean time that the lack of user interface is causing you issues managing your server or if you find that your server is only ever accessed/administered via remote PowerShell, then you have a good opportunity to correct this during the upgrade process. 
Speaking from a purely CRM point of view, we\u0026rsquo;ve seen previously what is involved as part of setting up Dynamics CRM 2016 on a Server Core installation; suffice to say, it\u0026rsquo;s not something I would recommend.\nOnce you\u0026rsquo;ve made this crucial decision, you\u0026rsquo;ll then need to accept the standard license terms:\nThen, another important decision. Similar to when you upgrade Windows 7 to Windows 10, you have the option of performing a clean install or an in-place upgrade:\nIn this example, the Keep personal files and apps option was chosen, so I cannot advise on the impact of choosing Nothing. I would assume that choosing this would delete everything, including any installed Server Roles. Proceed with this step at your own risk.\nThen, you\u0026rsquo;ll get a rather lengthy Getting Updates window - I would assume that this is the point in the install when the updates are actually downloaded, and the earlier step was a pre-check of some sort:\nFinally, before beginning the install, you are prompted with an interesting warning, which you have to dismiss before the install will begin:\nI believe this is another first for a Windows Server install, in that Microsoft is specifically advising not to perform an in-place upgrade of your Windows Server. There is a certainly a convincing school of thought that backs this up - for example, what if you have a business critical application that suddenly breaks after the upgrade? It is arguably better and safer to setup a new server from scratch and then iteratively test out your server applications one by one, before deciding whether it is safe to upgrade. I would argue that an in-place upgrade is safe so long as any pre-requisite testing has been done using a copy of your server instance - indeed, I have not encountered any major issues since performing an in-place upgrade myself on my development server. But don\u0026rsquo;t ever throw caution to the wind and assume that you can upgrade without any testing.\nThe install will then do a final check on disk space; likely throwing an error if there is not enough (Microsoft recommend 32GB):\nThen, the critical moment! Review the settings that have been specified and then press Install to begin the process:\nThe full-screen setup will then start, preventing you from doing anything else on the Server throughout the process:\nThe Server will restart several times after that. The whole process took a couple of hours on my test Hyper-V machine, but this was probably due to lack of resources allocated to it! 🙂\nAfter install, you will be greeted with the new Windows Server 2016 login screen, which is not dissimilar from the Windows 10 one - with no clear indication that the upgrade has been completed successfully. I found this rather strange, and had hoped to get some kind of message afterwards to at least say \u0026ldquo;The upgrade has completed successfully!\u0026rdquo;. Regardless, the upgrade in my case appears to have been a success, and everything appears to be working fine so far on my test server.\nDynamics 365 and Windows Server 2016\nThis is a CRM/Dynamics 365 for Enterprise (D365E) focused blog, so it would be remiss of me not to segue way into this. 😉 The release of the on-premise version of D365E is imminent, and it is therefore not unlikely to assume that the required server versions will include Windows Server 2016. 
Although the pre-release TechNet articles do not currently confirm this fact, it would seem puzzling if Windows Server 2016 is not eventually included on the list of supported operating systems. Administrators can be comforted by the fact that is likely that Windows Server 2012 will remain on the supported operating system list for D365E for the next couple of versions at least. Those who are venturing into the world of on-premise CRM/D365E for the first time, however, may benefit themselves greatly to start familiarising themselves with Windows Server 2016.\nAs a final side note, I would be interested to hear if anyone has been able to install Dynamics CRM 2015/2016 on Windows Server 2016 and whether or not you are specifically prevented from doing so. Let me know your experiences in the comments below!\n","date":"2016-12-04T00:00:00Z","image":"/images/WindowsServer-FI.png","permalink":"/upgrading-to-windows-server-2016-from-windows-server-2012-r2/","title":"Upgrading to Windows Server 2016 from Windows Server 2012 R2"},{"content":"I have had an opportunity recently to start getting to grips with the wonderful world of PowerBI. For those who have walked the tightrope between Excel and SQL Server Reporting Service (SSRS) Reports, PowerBI appears to be the tool with these individuals in mind. It enables you to leverage existing Excel knowledge (through PowerQuery or Excel-based formulas/calculations), whilst also offering a number of easy to setup Visualisations, that are not too dissimilar to the charts, maps and other objects that can be setup on a .rdl report. What\u0026rsquo;s also good to know, from a Dynamics CRM/365 for Enterprise (D365E) point of view, is that you can very quickly hook up your CRM data to a PowerBI dashboard/report. In addition to this, integrating with other data sources - such as SQL, JSON or flat file sources - or even with completely different application systems is possible (even with SalesForce - shock, horror!). In the days of increasing need for tight integration between a dazzling array of different application and database systems, PowerBI gives you a broad framework to achieve any reporting goal you may have in mind. Having said that, it is still rough around the edges and in need of some further development before it, arguably, becomes the de facto tool to use. For example, I have found some of the formatting and design options available to be rather basic and light-years behind what is possible with SSRS or even Excel. There are also some features missing that are somewhat baffling, such as the ability to send out dashboards/reports via email on a predefined schedule. I would hope that we see increasing attention towards PowerBI in the months and years ahead in order to bring the very best of features from these more traditional applications but exposed in a non-daunting and wholly accessible way.\nAs referenced above, getting set up with your Online CRM/D365E data is incredibly straightforward via the inbuilt Dynamics 365 connector \u0026ldquo;wizard\u0026rdquo; - simply login into your online organisation, specify the entity data that you want to work with and PowerBi will push this data into a table for you, enabling you to start building your report in seconds. The connector \u0026ldquo;wizard\u0026rdquo; is suited to most typical data retrieval scenarios, providing a GUI interface to visualise the entity data within your CRM/D365E instance and the ability to put together related entities and return them as part of your query. 
Before you start using it, however, I would highlight the following:\nThe OData feed is rather indiscriminate in its retrieval - all records from an entity will be returned. Some pre-filtering will occur based on CRM\u0026rsquo;s/D365E\u0026rsquo;s security model (e.g. if the account you log in as has Business Unit level Read privilege on the Lead entity, only records in the accounts Business Unit will be returned), but typically it will be System Administrators who set up a PowerBI Dashboard; therefore meaning you have to deal with a lot of records being returned into PowerBI. Given that the two different PowerBI plans have limitations in regards to record retrieval, this could cause problems if you are working with large CRM datasets. Tied in with the above, because you have no way via the \u0026ldquo;wizard\u0026rdquo; interface to specify record retrievals, you cannot take advantage of filtering your data based on certain attributes or even take advantage of some of the native functionality within CRM to aggregate your data. Whilst PowerBI is certainly more than capable of doing this, relatively n00bish users may find this an unwelcome barrier that hinders adoption. Lookup and Option Set attributes are returned as a special data type of Record - with the underlying properties of the related record (GUID, display name etc.) stored within this sub-record. Having the data stored in this manner causes additional administrative effort on the PowerBI side, as you will need to figure out how to access the underlying properties of this PowerBi data type. Fortunately, if you are inclined towards building very specific and focused queries that you can execute against your Online CRM/D365E, there is a way - and the best thing is, we get to use something that has been recently introduced into CRM as well 🙂\nThe Web API to the rescue! The Web API was introduced in Dynamics CRM 2016, which implements version 4 of the Open Data (OData) Protocol, and will eventually replace the traditional means that developers would use to access CRM via web services (namely, the Organization service and the old OData service). CRM Developers will need to start becoming increasingly familiar with the Web API in the years ahead, and no doubt a lot of work will need to be done updating existing coding to look at the new Web API.\nBecause the Web API is a web service, PowerBi can connect to it via the Web connector. By querying the Web API, you have access to all of the messages that are exposed to get data from CRM - Retrieve, Retrieve Multiple and Predefined Query - with multiple options available to use in terms of how you return, filter and aggregate your data. Results will be returned in JSON format, so there will be some additional work that needs to be done to get the data into an accessible format. This post will now take a look at what you need to do in order to return data based on a FetchXML query, as well as (hopefully!) providing a useful code snippet that you can adapt to your environment.\nBefore starting, ensure that you have installed the CRM Rest Builder Managed Solution within your CRM development environment. This tool allows you to quickly generate code snippets in JScript that perform web service calls into CRM/D365E and is a massive help in a number of different ways. A big shout out and thanks to Jason Lattimer for the work he has done on this.\nTo begin, you need a FetchXML query that returns the data you need. This can be written manually or generated automatically via Advanced Find. 
In this example, we are going to use the following snippet that queries the Case entity, that will return 7 sample data records from CRM: \u0026lt;fetch version=\u0026#34;1.0\u0026#34; output-format=\u0026#34;xml-platform\u0026#34; mapping=\u0026#34;logical\u0026#34; distinct=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;entity name=\u0026#34;incident\u0026#34;\u0026gt; \u0026lt;attribute name=\u0026#34;title\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;ticketnumber\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;createdon\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;incidentid\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;caseorigincode\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;customerid\u0026#34; /\u0026gt; \u0026lt;order attribute=\u0026#34;ticketnumber\u0026#34; descending=\u0026#34;false\u0026#34; /\u0026gt; \u0026lt;filter type=\u0026#34;and\u0026#34;\u0026gt; \u0026lt;condition attribute=\u0026#34;prioritycode\u0026#34; operator=\u0026#34;eq\u0026#34; value=\u0026#34;1\u0026#34; /\u0026gt; \u0026lt;/filter\u0026gt; \u0026lt;/entity\u0026gt; \u0026lt;/fetch\u0026gt; Next, we need to generate the URL that will be used to query the Web API endpoint. There are two challenges here - the first being that the FetchXML query needs to be included in the URL and that it needs to be encoded so that it is a valid URL. The start of the URL is fairly straightforward to put together - it\u0026rsquo;s just your CRM organisation URL, in the following format: https://\u0026lt;Organisation Name\u0026gt;.\u0026lt;Region Code\u0026gt;.dynamics.com\nSo if your organisation name is crmchap and your CRM tenant is based in the EMEA region, then the URL would be as follows:\nhttps://crmchap.crm4.dynamics.com\nThe rest of the URL can be obtained from the CRM Rest Builder. Open the Managed Solution, navigating to the Configuration page. It will look something like this:\nUpdate the page so that the Action is set to Predefined Query. Ensure that the Primary Entity is set to Incident (always something you have to remember when working with the Case entity 😉) and then copy and paste the FetchXML query into the box below. The window should look similar to the below once ready:\nPress Create Request to put together the code snippet. On Line 2, you will see the following piece of code:\nreq.open(\u0026#34;GET\u0026#34;, Xrm.Page.context.getClientUrl() + \u0026#34;/api/data/v8.0/incidents?fetchXml=%3Cfetch%20version%3D%221.0%22%20output-format%3D%22xml-platform%22%20mapping%3D%22logical%22%20distinct%3D%22false%22%3E%3Centity%20name%3D%22incident%22%3E%3Cattribute%20name%3D%22title%22%20%2F%3E%3Cattribute%20name%3D%22ticketnumber%22%20%2F%3E%3Cattribute%20name%3D%22createdon%22%20%2F%3E%3Cattribute%20name%3D%22incidentid%22%20%2F%3E%3Cattribute%20name%3D%22caseorigincode%22%20%2F%3E%3Cattribute%20name%3D%22description%22%20%2F%3E%3Cattribute%20name%3D%22customerid%22%20%2F%3E%3Corder%20attribute%3D%22ticketnumber%22%20descending%3D%22false%22%20%2F%3E%3Cfilter%20type%3D%22and%22%3E%3Ccondition%20attribute%3D%22prioritycode%22%20operator%3D%22eq%22%20value%3D%221%22%20%2F%3E%3C%2Ffilter%3E%3C%2Fentity%3E%3C%2Ffetch%3E\u0026#34; The bit we are interested in is the string value after the Xrm.Page.context.getClientUrl() function, which will need to be appended to our CRM URL. 
So based on the above, our URL to use with PowerBI would be as follows:\nhttps://crmchap.crm4.dynamics.com/api/data/v8.0/incidents?fetchXml=%3Cfetch%20version%3D%221.0%22%20output-format%3D%22xml-platform%22%20mapping%3D%22logical%22%20distinct%3D%22false%22%3E%3Centity%20name%3D%22incident%22%3E%3Cattribute%20name%3D%22title%22%20%2F%3E%3Cattribute%20name%3D%22ticketnumber%22%20%2F%3E%3Cattribute%20name%3D%22createdon%22%20%2F%3E%3Cattribute%20name%3D%22incidentid%22%20%2F%3E%3Cattribute%20name%3D%22caseorigincode%22%20%2F%3E%3Cattribute%20name%3D%22description%22%20%2F%3E%3Cattribute%20name%3D%22customerid%22%20%2F%3E%3Corder%20attribute%3D%22ticketnumber%22%20descending%3D%22false%22%20%2F%3E%3Cfilter%20type%3D%22and%22%3E%3Ccondition%20attribute%3D%22prioritycode%22%20operator%3D%22eq%22%20value%3D%221%22%20%2F%3E%3C%2Ffilter%3E%3C%2Fentity%3E%3C%2Ffetch%3E\nA bit of a mouthful I agree!\nNow that we have the URL, we can connect up to CRM within PowerBI. Create or Open a new PBIX file and select Get Data -\u0026gt; Web: On the From Web window, copy and paste the URL we\u0026rsquo;ve built and press OK. You will be prompted to log into your CRM organisation; select Organisational account and log in as you would normally via the Office 365 login page. Once logged in, the data will be retrieved and the Query Editor will open, displaying something similar to the below: Some additional work is required in order to get our data into a standard, tabular format. In addition, the data at the moment is returning the underlying Option Set and Lookup values from the Incident entity, as opposed to the Display Name; not good from a reporting point of view: We, therefore, need to modify the underlying PowerQuery in order to achieve the following:\nInclude a line onto the Web Service request to return the formatted values for entity fields, where applicable, and only display these values in our results. Parse and convert the returned data into a line per CRM/D365E record. As an additional example, for presentation purposes, we also need to rename _customerid_value field to Customer Name Right click on Query1 and select Advanced Editor to open the underlying Power Query text. 
Delete everything here and then copy and paste the following into the window:\nlet //Get the CRM data, including the Formatted Values as part of the returned data Source = Json.Document(Web.Contents(\u0026#34;https://crmchap.crm4.dynamics.com/api/data/v8.0/incidents?fetchXml=%3Cfetch%20version%3D%221.0%22%20output-format%3D%22xml-platform%22%20mapping%3D%22logical%22%20distinct%3D%22false%22%3E%3Centity%20name%3D%22incident%22%3E%3Cattribute%20name%3D%22title%22%20%2F%3E%3Cattribute%20name%3D%22ticketnumber%22%20%2F%3E%3Cattribute%20name%3D%22createdon%22%20%2F%3E%3Cattribute%20name%3D%22incidentid%22%20%2F%3E%3Cattribute%20name%3D%22caseorigincode%22%20%2F%3E%3Cattribute%20name%3D%22customerid%22%20%2F%3E%3Corder%20attribute%3D%22ticketnumber%22%20descending%3D%22false%22%20%2F%3E%3Cfilter%20type%3D%22and%22%3E%3Ccondition%20attribute%3D%22prioritycode%22%20operator%3D%22eq%22%20value%3D%221%22%20%2F%3E%3C%2Ffilter%3E%3C%2Fentity%3E%3C%2Ffetch%3E\u0026#34;, [Headers=[#\u0026#34;Prefer\u0026#34;=\u0026#34;odata.include-annotations=\u0026#34;\u0026#34;OData.Community.Display.V1.FormattedValue\u0026#34;\u0026#34;\u0026#34;]])), //Get the underlying list of records returned values = Source[value], //Create a new table with one column, populated with the values list of records #\u0026#34;Table from List\u0026#34; = Table.FromList(values, Splitter.SplitByNothing(), null, null, ExtraValues.Error), //Query will error if no results, therefore use an if statement to build an empty table with the correct column headers Expand = if List.IsEmpty(values) then #table({\u0026#34;title\u0026#34;, \u0026#34;ticketnumber\u0026#34;, \u0026#34;createdon\u0026#34;, \u0026#34;incidentid\u0026#34;, \u0026#34;caseorigincode\u0026#34;, \u0026#34;_customerid_value@OData.Community.Display.V1.FormattedValue\u0026#34;}, {\u0026#34;title\u0026#34;, \u0026#34;ticketnumber\u0026#34;, \u0026#34;createdon\u0026#34;, \u0026#34;incidentid\u0026#34;, \u0026#34;caseorigincode\u0026#34;, \u0026#34;_customerid_value@OData.Community.Display.V1.FormattedValue\u0026#34;}) else Table.ExpandRecordColumn(#\u0026#34;Table from List\u0026#34;, \u0026#34;Column1\u0026#34;, {\u0026#34;title\u0026#34;, \u0026#34;ticketnumber\u0026#34;, \u0026#34;createdon\u0026#34;, \u0026#34;incidentid\u0026#34;, \u0026#34;caseorigincode\u0026#34;, \u0026#34;_customerid_value@OData.Community.Display.V1.FormattedValue\u0026#34;}, {\u0026#34;title\u0026#34;, \u0026#34;ticketnumber\u0026#34;, \u0026#34;createdon\u0026#34;, \u0026#34;incidentid\u0026#34;, \u0026#34;caseorigincode\u0026#34;, \u0026#34;_customerid_value@OData.Community.Display.V1.FormattedValue\u0026#34;}), //For some reason, if there are no results, then the empty table contains error records - so need to specifically remove them #\u0026#34;Removed Errors\u0026#34; = Table.RemoveRowsWithErrors(Expand), //Finally, rename the _customerid_value field #\u0026#34;Renamed Columns\u0026#34; = Table.RenameColumns(#\u0026#34;Removed Errors\u0026#34;,{{\u0026#34;_customerid_value@OData.Community.Display.V1.FormattedValue\u0026#34;, \u0026#34;Customer Name\u0026#34;}}) in #\u0026#34;Renamed Columns\u0026#34; The comments should hopefully explain what the code is doing, but to summarise: the PowerQuery is parsing the JSON data into a tabular format, using the returned data to build column headers, and then renaming the _customerid_value field to match our requirements. 
There is also a logic statement in there to check if we actually have any data; and if not, then build an empty table (since JSON does not return anything that we can use if 0 records are returned).\nWith our updated PowerQuery, our result set should look similar to the below:\nNice, presentable, filtered and exactly what we need to start building our PowerBI report! 😀\nConclusions or Wot I Think Whilst I\u0026rsquo;ll be the first to admit the above example is rather code-heavy and would require significant tinkering to suit specific scenarios, I would argue that the approach demonstrated adheres most closely to some of the common rules when it comes to querying data for reports:\nOnly return the columns you need Filter your data to reduce the number of results returned Ensure that your report-side columns are giving presentable database column values This approach is the only one adheres most closely to the above and, therefore, I feel that the extra effort is warranted; particularly when it means that those who are more familiar with CRM can stick to the tools they know best. As the default Dynamics 365 connector uses the Organization Data service, I would imagine that eventually this will be updated to use the new Web API instead. I hope that when this happens, we can begin to achieve all of the above via the PowerBI interface, as opposed to resorting to code.\n","date":"2016-11-27T00:00:00Z","image":"/images/PowerBI-FI.png","permalink":"/powerbi-deep-dive-using-the-web-api-to-query-dynamics-crm365-for-enterprise/","title":"Power BI Deep Dive: Using the Web API to Query Dynamics CRM/365 for Enterprise"},{"content":"Your business has decided it wants to implement Dynamics CRM/Dynamics 365 for Enterprise (D365E). That\u0026rsquo;s great news! Your next crucial decision will be what approach you take in implementing the system, in particular, who you involve as part of the planning, designing and building phases. Businesses that lack individuals with dedicated, technical know-how will often need to engage the services of a Microsoft Partner or consultant to assist with some or all of this. On the flip-side, if you have a capable and technically competent team that is able to get up and running with how to use the system quickly, then there is a case to be made for deploying CRM/D365E as an internally led project.\nIn this week\u0026rsquo;s blog post, I will evaluate the 3 potential routes open to you if you are looking to implement CRM/D365E within your business, looking at it purely from a small to medium size business\u0026rsquo; perspective. For enterprise-size deployments, then it really does make practical sense to go down the Partner route, as this will ensure you can leverage the maximum level of expertise, as well as take advantage of resources/price breaks that can be leveraged via this route.\nMicrosoft Partner It\u0026rsquo;s hard to argue some of the clear and immediate benefits of involving a Microsoft Partner as part of rolling out CRM/D365E. With access to resources like PartnerSource/the Dynamics Learning Portal and, depending on the partner\u0026rsquo;s relationship with Microsoft, there is a lot of expertise that a Partner can bring to the table. Most Partners live and breathe the product, with a lot of passion backing this up, so it is a great way to get individuals involved in your system deployment who know the system inside and out. 
No need to worry as well if your CRM/D365E \u0026ldquo;expert\u0026rdquo; is ill/away or if your consultant is unavailable, as you can pick up the phone and speak to anyone there and guarantee that there will be someone there to help. This is definitely very attractive if your business cannot afford the overhead of a dedicated resource and gives you peace of mind that your system will always have someone there who can make the changes you need.\nThe downside? None of this comes cheap, unfortunately. Depending on the size and nature of your deployment, you could be looking at anything in the region of £20,000 - £120,000 or upwards for a deployment or migration away from an existing system onto CRM/D365E. Most partners will have specific change control procedures in place to manage any alterations to the original scope of work. If you\u0026rsquo;re a small business or in a market where your internal processes are subject to change at a drop of a hat, then that CRM/D365E deployment that your originally thought would cost £50,000 has now almost doubled as a result. It can also be very difficult to find a partner that aligns with your vision and objectives for the deployment. You want to make sure that your new system is a great success and that everyone in the business is shouting off the rooftops about it, but is this something that your chosen partner ultimately cares about? Or is their focus more geared towards, for example, hitting their required revenue goal and/or deployed seats in order to maintain their CRM Partner Competency? You have no doubt done your due diligence on which CRM system you want for your business; this should also be applied to any Microsoft Partners who are in the frame for helping to deliver your CRM/D365E project. You should focus on their track record, customer testimonials and, ultimately, your gut feeling when meeting them for the first time.\nIndependent CRM Consultant/Developer You can potentially get a consultant on board that suits the level of expertise that you are looking for: if, for example, you anticipate using mostly out of the box elements of the system, then it may make practical financial sense to get on board a fairly junior level consultant; on the flip-side, if you anticipate needing to integrate with other systems within your business, then a consultant with development experience may be a better choice. You may also be able to secure more \u0026ldquo;face-to-face\u0026rdquo; time with an independent consultant or ensure that a set number of day(s) are spent within your business as part of the project, something which you may not always be able to guarantee going down the partner route.\nHaving just one consultant in charge of your entire deployment can prove to be incredibly risky for a business. For example, if the Consultant falls ills or is otherwise incapacitated, your entire project could be held up for long periods, costing the business money in the process. You may also find yourself in a situation where a Consultant is deliberately attempting to stretch out the time and length of the deployment, for selfish reasons. Businesses will also need to think carefully about their plans once a consultant/developer has finished the project. Who will be responsible for managing the CRM/D365E system? Who will carry out end-user training? 
These and other questions are ones you would need to have solid answers for if you\u0026rsquo;re going down the independent consultant/developer route.\nInternal Project Speaking from experience, if you already work heavily with a number of \u0026ldquo;Microsoft stack\u0026rdquo; technologies within your business (SQL Server, C# etc.), then you should have very little difficulty in getting to grips with CRM/D365E, which makes the internal project route a potentially attractive option. If you do get stuck at all, then CRM/D365E training courses are plentiful, and you could find yourself paying as little as 10-20% of what a fixed cost deployment from a Microsoft Partner comes back at, for what would be training for 2-3 individuals on the Microsoft exam curriculum (including the developer track). This assumes, of course, that your business has individuals within it who already have experience that you can potentially use as part of a systems rollout. These individuals may already, in addition, be well placed to understand what the business needs and may already have a lot of passion around delivering solutions and service that meet and exceed the expectations of colleagues. Who is better, therefore, to potentially spearhead and implement CRM/D365E within your business? Going down an external route also carries the risk of involving people in your project who may not fully understand or get how your business operates. Let\u0026rsquo;s face it, at the end of the day, you know your business better than anyone. You can avoid a situation where you are having to do additional work internally just to create process maps, documents etc. just so that someone else coming into your business cold can understand just what it is that you do. And if there is any misunderstanding or confusion with this, then your project could get set back considerably and waste additional resource as a result.\nGoing down the internal route does present some major challenges and difficulties, which need to be mitigated from the outset in order to ensure the project is delivered on budget and on time. Getting a project manager involved from the outset (assuming your business has such a person available) is one way in which you can guard against this, but be prepared for lots of frustrating moments. Getting to grips with learning CRM/D365E, even for those who have a good knowledge of the underlying technology, can be a major learning curve and you will need to be prepared to fail more than you succeed at first. You also need to carefully review and ensure that you have a fallback in case things go catastrophically wrong - if you go down the Online route, then this is provided for to an extent via Microsoft Support; but if you are adopting an on-premise version of CRM/D365E, then such support will be non-existent unless you involve a Microsoft partner.\nConclusions Each business is different, so you can never make a number of arbitrary assumptions with respect to business size, turnover etc. when advising on the best route to go when rolling out a system like CRM/D365E. 
So, to summarise, my thoughts on this are:\nIf your business does not have any existing, internal IT technical expertise and has a very fixed/focused line of business, then the Partner route is definitely the way to go\nDitto above, but if your business is subject to fluctuations, then a blended approach of getting a consultant on board for a fixed period and end-user training in how to use and customise CRM/D365E means that you get that immediate technical expertise, but ensure that you are not reliant long-term on unnecessary overheads\nIf you are a small to medium business that already has a lot of technical expertise, a tight budget, flexible deadlines and a business model that is subject to sudden changes, then the self-taught/internal project route is ideal and guaranteed to ensure there is sufficient passion and drive within your organisation to make your deployment a success.\nDitto above, but if you have a deadline and your business is highly specialised, then getting a consultant and/or Partner involved can really speed things along. Just ensure that you have a fixed budget in place and beware of scope creep or massive changes in requirements.\nI hope that my analysis has been fair and balanced, and proves useful if you are currently contemplating how to go about your CRM/D365E deployment. It would be great to hear other people\u0026rsquo;s views and thoughts on the best approach in the comments below.\n","date":"2016-11-20T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/deciding-the-best-option-for-your-smb-crmdynamics-365-deployment-microsoft-partner-external-consultant-or-internal-project/","title":"Deciding the Best Option for your SMB CRM/Dynamics 365 Deployment: Microsoft Partner, External Consultant or Internal Project?"},{"content":"An oft-requested requirement as part of any Dynamics CRM/Dynamics 365 for Enterprise (D365E) deployment is a level of integration with another application or system. In some of these cases, this will involve pulling through external web pages and passing them form-level attribute values, to load an external system\u0026rsquo;s report, record page etc. From a CRM/D365E point of view, this can be very straightforwardly achieved thanks to some of the functionality provided as part of the Xrm.Page object model. For example, let\u0026rsquo;s assume that you have an IFrame control on your form and you wanted to load an ASP.NET web page, passing the ID of the record as a query parameter in the URL. Set up your IFrame on your form with a random URL and set it to hidden. 
Then, a JScript function like this on the OnLoad event would get the job done for you:\nfunction loadIFrame() {\n    //Get the current record ID\n    var entityID = Xrm.Page.data.entity.getId();\n    //Replace { \u0026amp; } with their appropriate URL counterparts in entityID\n    entityID = entityID.replace(\u0026#34;{\u0026#34;, \u0026#34;%7b\u0026#34;);\n    entityID = entityID.replace(\u0026#34;}\u0026#34;, \u0026#34;%7d\u0026#34;);\n    //Create the URL\n    var url = \u0026#34;http://myexternalwebpage.com/MyAspPage.aspx?id=\u0026#34; + entityID;\n    //Then, update the IFrame with the new URL and make it visible on the form\n    Xrm.Page.getControl(\u0026#34;IFRAME_myiframe\u0026#34;).setSrc(url);\n    Xrm.Page.getControl(\u0026#34;IFRAME_myiframe\u0026#34;).setVisible(true);\n}\nWhat helps with the above is that there are well-documented code samples that assist when putting together this example, so you can be confident that the solution will work and is fully supported.\nThings get a little more complicated once we are operating outside the standard CRM/D365E environment. Assume that instead of displaying this IFrame control on a form, it needs to be displayed as part of an Entity Form in Adxstudio/CRM Portals. Here is where the head scratching can commence in earnest, and you need to look at getting your hands dirty writing custom code to achieve your requirements. There are a few hurdles to overcome in the first instance:\nHow do you access attribute values from an Entity Form, such as a record ID? Once you are able to access the attribute value, how do you set this up on your Entity Form? How do you embed an IFrame within an Entity Form? Let\u0026rsquo;s take a look at one approach to the above, working on the same basis as above - an external URL that we pass the record ID to, from within an Entity Form Web Page. Things may get a bit more difficult if you need to access other entity attribute values, which may require some kind of trickery with Liquid Templates to achieve successfully.\nAccessing Entity Form Record ID When your Entity Form page is loaded on your Portal, there are a number of properties regarding the record that are exposed on the underlying web page - the name of the entity, the record ID, Status and Status Reason values. These can be accessed via a div element on the page, which can be viewed within the DOM Explorer as part of a web browser\u0026rsquo;s developer tools (in the below example, Internet Explorer is used):\nThe id of the div element will always be the same, except for the value in the middle, which is the GUID for the Entity Form record within CRM/D365E, but without the dashes. So you don\u0026rsquo;t necessarily need to go into the DOM to get this value; as a time-saving mechanism, simply export your Entity Form record into Excel and view the first hidden column to obtain it.\nSuffice to say, because we know that this value is accessible when our Portal page loads, we can look at programmatically accessing this via a JScript function. 
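Purely as an illustrative sketch on my part (reusing the same example element id as above - the GUID portion will be different for every Entity Form record, so treat it as a placeholder), you could wrap this lookup in a small helper function that checks the element actually exists before reading from it:\nfunction getEntityFormRecordId() {\n    //The GUID within this id is the GUID of the Entity Form record (without dashes) - an example value only\n    var element = document.getElementById(\u0026#39;EntityFormControl_31c41a020771e61180e83863bb350f28_EntityFormView_EntityID\u0026#39;);\n    //Guard against the element not having been rendered yet\n    return element ? element.value : null;\n}\nGuarding the lookup like this simply avoids a script error if the function happens to run before the Entity Form markup has been rendered. 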
In its simplest form, the following snippet will do the trick:\nvar recordID = document.getElementById(\u0026#39;EntityFormControl_31c41a020771e61180e83863bb350f28_EntityFormView_EntityID\u0026#39;).value;\nNow that we have a means of accessing the attribute value, our options in terms of what we can do with it greatly increase 🙂\nExecuting Entity Form Custom JScript Functions There are two ways you can place custom JScript on your portal page - you can either place your functions within the Custom JavaScript field, located on the Entity Form form within CRM:\nFunctions will be added to the bottom of your Web Page when loaded, meaning they can be freely accessed after the page has loaded. The second way, which leads us nicely onto the next section, is to wrap your JScript function as a custom HTML snippet on the Web Page\u0026rsquo;s Copy (HTML) field.\nEmbedding an IFrame on your Web Page All Web Pages in Adxstudio/Portals - irrespective of what other content the page is loading - contain a Copy (HTML) field. This enables you to write your own bespoke text or other HTML content that is displayed on the Web Page. In the case of an Entity Form Web Page, the content will be displayed just below the Entity Form. Thanks to the ability to access and write our own custom HTML code for this, options for bespoke development are greatly increased - simply click the Source button to switch to the underlying HTML editor:\nThen, using a combination of the snippet we used earlier and the iframe HTML tag, we can place the following in our Copy (HTML) to do the lot for us - get our record ID, pass it to an external web page and then load this within an IFrame:\n\u0026lt;p\u0026gt;\n\u0026lt;script\u0026gt;\nfunction getEntityID() {\n    var url = \u0026#34;http://myexternalwebpage.com/MyAspPage.aspx?id=\u0026#34;;\n    var entityID = document.getElementById(\u0026#39;EntityFormControl_31c41a020771e61180e83863bb350f28_EntityFormView_EntityID\u0026#39;).value;\n    var iframeSrc = document.getElementById(\u0026#39;myiframe\u0026#39;).src;\n    if (iframeSrc != url + \u0026#34;%7b\u0026#34; + entityID + \u0026#34;%7d\u0026#34;) {\n        setTimeout(function () {\n            document.getElementById(\u0026#39;myiframe\u0026#39;).src = url + \u0026#34;%7b\u0026#34; + entityID + \u0026#34;%7d\u0026#34;;\n        }, 2000);\n    }\n}\n\u0026lt;/script\u0026gt;\n\u0026lt;/p\u0026gt;\n\u0026lt;h1\u0026gt;My IFrame\u0026lt;/h1\u0026gt;\n\u0026lt;p\u0026gt;\n\u0026lt;iframe width=\u0026#34;725\u0026#34; height=\u0026#34;250\u0026#34; id=\u0026#34;myiframe\u0026#34; src=\u0026#34;\u0026#34; onload=\u0026#34;getEntityID();\u0026#34;\u0026gt;\u0026lt;/iframe\u0026gt;\n\u0026lt;/p\u0026gt;\nThe reason why setTimeout is used is to ensure that the entity form class loads correctly, as this is one of the last things that Adxstudio/Portals loads on the page. For obvious reasons, if this hasn\u0026rsquo;t loaded, then our JScript function will error. Putting this aside, however, the above solution gets us to where we want to be and means that we can achieve the same outcome as the CRM/D365E example demonstrated at the start of this post 🙂\nConclusions or Wot I Think Adxstudio/Portals presents some interesting and different learning opportunities, given both its genesis as a separate product and its gradual integration as part of the CRM/D365E family. This can often mean that you have to abandon your base assumptions and ways of thinking when it comes to CRM/D365E development, and instead look at things from a more general approach. 
I would hope that, in time, we will begin to see the gradual introduction of common XRM object models within CRM Portals, as it is crucially important that there is a unified approach when developing Portal extensions in the future and that we are not in the situation where unsupported code becomes rampant across different Portal deployments. This latter concern would be my chief worry with the examples provided in this post, as there is currently no clear way of determining whether the approach taken is supported or considered \u0026ldquo;best practice\u0026rdquo; from an Adxstudio/Portal perspective. I would be interested in hearing from anyone in the comments below if they have any thoughts or alternative approaches that they would recommend to achieve the above requirement.\n","date":"2016-11-13T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/working-with-iframes-and-crmd365e-attribute-values-from-adxstudio-and-portals/","title":"Working with IFrames and CRM/D365E Attribute Values from Adxstudio and Portals"},{"content":"Last week has been a particularly busy one for all things Microsoft concerned. Future Decoded 2016 came and went, with a whole range of interesting announcements and presentations from those within the industry. And, of course, we saw the release of Dynamics 365 for Enterprise on Tuesday, an event which I discussed more closely as part of last week\u0026rsquo;s blog post. There\u0026rsquo;s a lot about Dynamics 365 which is going to take some time to fully understand, and also a lot of features which are not yet fully available. I have been hearing rumours that there will be a December update, targeted at existing CRM Online users who wish to make the jump across, that will unwrap a couple of features that are currently disabled within Dynamics 365. Chief among these looks to be the App Builder and also what looks like some kind of sitemap editor tool! :O In the meantime, there a lot of changes that need digesting, and I wanted to focus this week on a particular group of new features/changes relating to Processes. For the uninitiated, processes is a broad term to describe a group of different Dynamics 365 for Enterprise (D365E) components that can be created within the system- namely, Workflows, Dialogs, Actions, Business Process Flows and Business Rules. Whilst not all of these have received attention as part of the new release, the ones that have definitely look to be in a much more enhanced and polished position compared to Dynamics CRM 2016 Update 1. Let\u0026rsquo;s take a look at some of these changes in more detail:\nProcess Designer The introduction of the Process Designer for Business Rules, Business Process Flows and Task Flows is perhaps the most noticeable and welcome enhancement to processes. Now, instead of linearly attempting to visualise how your processes operate, you can very quickly grasp how they look via the visual designer. What\u0026rsquo;s more, you have the ability to drag and drop your components quickly and easily, re-ordering them accordingly and seeing clearly how each step flows into the other:\nWhat\u0026rsquo;s also welcoming about this is that, for those who prefer a more \u0026ldquo;old school\u0026rdquo; approach to seeing how a Business Rule is structured, the Text View window provides a means of accommodating this. A nice and welcome touch!\nBusiness Rule Recommendations I really like this next new feature 🙂 When it comes to data inputting, one of the major challenges you face is ensuring that the data is entered correctly. 
These problems can be compounded in situations where data needs to conform to specific rules - for example, if Field A and Field B equal \u0026lsquo;ABC\u0026rsquo;, then set Field C to \u0026lsquo;DEF\u0026rsquo;; otherwise, set to \u0026lsquo;GHI\u0026rsquo;. The common solution to this problem in the past is to look at programmatic means of ensuring data is entered correctly - generally via a Business Rule or a form-level JScript function. Whilst these generally are effective at addressing the problem, they could be prone to errors and can be seen as being too absolute a solution, that doesn\u0026rsquo;t address what could be an underlying problem within a business; namely, a lack of understanding of business processes and how things should be done.\nThe new Recommendation Action as part of Business Rules would appear to seek towards addressing this gap, by providing an unobtrusive way of alerting users that there is a problem with the data they have entered onto a form and giving them an opportunity to correct their mistake. In the process of doing this, you can provide contextual information that explains why the data needs to be changed - increasing transparency and understanding from end users of the application. Setting them up is very straight-forward - just setup your Condition and then you can drag and drop the new Recommendation component onto the designer screen. To see how this works in practice, let\u0026rsquo;s look at an example on the Contact form - we want to ensure that if the Spouse/Partner Name field contains a value, that the user is prompted to update the Marital Status field accordingly. First, we build our condition as we would normally in a Business Rule:\nNext, we hook up our Recommendation component and configure it accordingly - setting the Recommendation properties and then our Set Field Value action\nYour Business Rule should look like the below when ready:\nWhen activated and then, upon navigation to the Contact form, we can see it in action; when the Spouse/Partner Name field is changed, a blue exclamation mark appears next to the Marital Status field. Once clicked, we get some guidance information and the ability to update the field:\nEt voila! The introduction of this new feature is a welcome surprise on my part. One limitation currently is that only one type of Action is supported with a Recommendation - Set Field Value. Here\u0026rsquo;s hoping that this is expanded as part of a future version of D365E to include additional Actions.\nValidation for Business Rules Previously, when building a Business Rule, the only way you could effectively determine that there were logic problems in your Business Rules is by activating them and testing them yourself on the form level. This is a potentially torturous process that could very well result in errors seeping through unintentionally. Now, as part of the new visual designer, the logic can be validated at any point and is also validated upon save. If unsuccessful, you are alerted to this fact and given some guidance on what is wrong so you can look at fixing the issue:\nHopefully, having this built in now will help avoid some of the most obvious mistakes that can sometimes seep through when building a Business Rule.\nSnapshot Often, when you are attempting to document a system, you will want to include some kind of pictorial representation of the system - a process map, diagram or something similar. 
Now, with the updates made to Business Rules and Business Process Flows, you have the ability to obtain a screenshot of the designer window - all through the click of the Snapshot button:\nPressing this will download a .png image of the Process that you can very easily include as part of existing documentation relating to your system:\nThis is a very handy new feature that will no doubt save a lot of time in the future! One thing to remember is that, if you wish to include all Components underneath the Details section, then you will need to expand it first before pressing the Snapshot button.\nTask Flows Having come out of Preview from Dynamics CRM 2016 Update 1, Task Flows are still very much in their infancy and something which I am still trying to get my head around fully. My understanding is that they are essentially a combination of Workflows and Dialogs, but designed solely for the Dynamics CRM/365 for Tablets Mobile Application. From the mobile app, they are accessible from the new Summary area that appears when the app first launches (and where the heavily touted Relationship Assistant will reside once released). Taking the example After Meeting Task Flow, we can see how this looks in the mobile app. First, we launch the app and navigate to the Summary area:\nFrom there, we can see along the bottom of the tab all available Task Flows that can be launched, identified by the clipboard task list icon:\nClicking this launches a New Appointment record screen, followed by the first Page of the Task Flow which, once completed, will execute the custom logic in the background:\nTask Flows look very versatile and powerful, but I think they will require some closer testing and experience before I\u0026rsquo;m able to comment further\u0026hellip; 😖\nAll in all, Processes seem to have received a lot of love and attention as part of the initial Dynamics 365 for Enterprise release. Microsoft has set the bar at the right level in creating the process designer, and the hope is that this is eventually rolled out for Workflows and other processes as well. Workflows, in particular, can benefit greatly from having a more visually accessible appearance, especially given the complexity that these can have when used to their fullest potential.\n","date":"2016-11-06T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/a-better-process-changes-to-processes-in-dynamics-365-enterprise/","title":"A Better Process? Changes to Processes in Dynamics 365 Enterprise"},{"content":"November 1st, 2016 is looming closer and closer, and anyone who is working with Dynamics CRM should be aware of the importance of this date. Dynamics 365 Enterprise will be officially released on this day, replacing all existing Dynamics CRM Online pricing/licensing offers for new customers. Existing Dynamics CRM customers \u0026amp; partners are starting to get a clear vision of what the product offering looks like, from a licensing and pricing perspective. I am eagerly looking forward to getting my hands on a trial instance of Dynamics 365 so that I can take it for a whirl. But for now, I wanted to publish a post that takes a look at the most interesting aspects of Dynamics 365 Enterprise, its release and my general thoughts on what we can hope to expect in the months ahead:\nTiered Pricing The new tiered pricing structure of Dynamics 365 presents one of the major areas where Microsoft can challenge their competitors in the marketplace, as well as drive high volume license sales for their Dynamics 365 Enterprise plans. 
How it basically works is that the more licenses you consume for a particular plan, the cheaper each license in that plan will become. The following image from this really interesting article from ZDNet provides an excellent summary of how this will work:\nThose who currently subscribe to a high number of Basic, Essential \u0026amp; Professional licenses for CRM Online will, therefore, benefit greatly from moving across to Dynamics 365 as soon as possible, in order to take advantage of the very high levels of price reduction - in particular, for Team Member and Enterprise Plan 1 licenses.\nTeam Members Under Dynamics 365, the previous \u0026ldquo;light-use\u0026rdquo; Essential \u0026amp; Basic licenses have been replaced with the new Team Members license, that provides a standard set of user rights across the entire range of Dynamics 365 apps. They come in at about £10 less per month compared to the current £18.70 for Basic Licenses, potentially going down as low as approx. £3, thanks to tiered pricing. In terms of what they provide from a user access point of view, functionality appears to sum up as Essential + Basic = Team Members, covering typical record access requirements for most users in an organisation.\nFree Portal and Non-Production instances! Previously, you would have to purchase at least 25 Professional CRM licenses to get a Sandbox (i.e. Non-Production) instance of CRM for free, or alternatively, cough up £93.50 per month for a Sandbox. Portals, introduced earlier this year, have also been a paid add-on until now, for a significantly higher price of £311.60 per month!! With Dynamics 365 Plan 1 subscriptions and higher, your subscription will automatically include the following alongside your Production instance:\n1 Sandbox Instance 1 Portal Instance Given that there is no minimum seat requirement for Enterprise 1 plans, the above could represent a significant saving on average, particularly when you take into account tiered pricing. It also presents a major opportunity to drive increased adoption towards CRM Portals in the months and years ahead.\nMore database storage It is pleasing to see the minimum database storage rise to 10GB as opposed to 5GB. One of the (potentially) hidden problems over time as part of any CRM deployment is storage being slowly eaten away by entity record types. I have blogged previously about one of these entities in question, and it is something that customisers and administrators need to be acutely aware of when designing and planning the system. The increase in storage goes some way towards mitigating this, but I would question whether a further increase could be warranted; particularly given the cost of storage on Azure for SQL databases being so much cheaper in comparison.\nAnd it\u0026rsquo;s goodnight from MDM\u0026hellip;and Parature Perhaps the most significant announcement as part of the above is that Microsoft Dynamics Marketing (MDM) and Parature will no longer be sold to new customers from November 1st, 2016 onwards. Microsoft has already announced that Adobe Marketing Cloud will become Dynamics 365 for Enterprise\u0026rsquo;s preferred marketing solution, but this has been confused further by an additional follow-up announcement regarding the Dynamics 365 Marketing App for Business, coming up Spring 2017. 
For Parature, no successor product has been announced, indicating that existing Parature users will eventually need to migrate across to some of the recently acquired service-focused modules within Dynamics 365, such as Customer Service, Field Service and Project Service Automation. I am unsure of the exact, specific numbers when it comes to Parature and MDM sales, but the above demonstrates clearly that not all Microsoft acquisitions are destined for success and that products that are perceived to be \u0026ldquo;too different\u0026rdquo; from the core CRM/Dynamics 365 experience can and will be dropped. I cannot speak for Parature, but I have had some experience with MDM in the past and, although it does provide some useful and effective campaign automation tools, it seems to be too bloated as a product, desperately trying to do everything but not in a particularly effective way. Microsoft\u0026rsquo;s mixed messaging with regard to what can be considered MDM\u0026rsquo;s true successor product means that it is prudent to perhaps wait before upgrading or moving away from MDM immediately. Hopefully, by Spring 2017, we will be able to see how both offerings compare from an integration point of view with Dynamics 365 Enterprise.\nGenerous upgrade pathways for existing CRM customers Discounts of up to 47% are available when upgrading to Dynamics 365 from Dynamics CRM Online. Microsoft has already published a list of promo codes that can be used for early adopters, so if you are itching to move across to Dynamics 365 next week, you can very quickly get upgraded.\nIs Dynamics 365 Enterprise actually a \u0026ldquo;major\u0026rdquo; release? Looking carefully through the following TechNet article on how to access the new Dynamics 365 apps, I noticed the following tidbit:\nWhat is \u0026ldquo;Dynamics 365 - custom\u0026rdquo;?\n\u0026ldquo;Dynamics 365 - custom\u0026rdquo; is the app name for all online organizations with a version 8.1 and lower as well as the default app on 8.2. The name for the 8.2 default app can be changed by the administrator.\nMy reading of this is that the version number of Dynamics 365 Enterprise is 8.2, as opposed to 9.0. This is a minor thing, but it is interesting that Microsoft does not consider the Dynamics 365 Enterprise release to be a \u0026ldquo;major\u0026rdquo; one. This potentially raises the prospect of a further release in 2017 that adds in a plethora of new features - something that ties in well with the expected release of Dynamics 365 for Business in Spring 2017.\nConclusions or Wot I Think The Dynamics 365 release looks to be a major reset of a number of base assumptions surrounding Dynamics CRM - including, most crucially, the price. Some of the very early scenarios I have seen from a migration point of view look to point to a very definite price rise for those moving across to Dynamics 365 (assuming you follow Microsoft\u0026rsquo;s recommended migration pathway). This is mitigated somewhat if you have a high number of licenses, thanks to tiered pricing, but I am troubled about where this leaves small to medium size businesses who currently use CRM Online. 
I have highlighted previously my worries and concerns that Dynamics 365 for Enterprise could be seen as an adoption barrier for these types of businesses, so the reaction of these businesses to the new pricing will be an important bellwether for Dynamics 365 Enterprise - and whether businesses decide to just ditch it altogether when it comes to the eventual, forcible upgrade to the new plans; or look at moving across to Dynamics 365 for Business instead. The sooner we get some clarity on what this offering looks like, the better.\nSomething else to add into the mix, solely for UK-based customers, is the announcement that Microsoft\u0026rsquo;s cloud services prices will rise significantly in the new year, in a move that has been linked to the current state of Pound Sterling following the Brexit vote. To my knowledge, exact pricing for UK customers of the new Dynamics 365 plans has not been released (although we can do a rough currency conversion from US Dollars), so we are unable to exactly determine at this stage what the prices will look like at launch and whether they take into account the above price rises. If not, then it would add a degree of urgency towards migrating across to Dynamics 365 sooner rather than later, in order to lock in your prices for another 12 months.\nAll said and done, Dynamics 365 presents some interesting opportunities and challenges for organisations who work with the product - let\u0026rsquo;s hope that it\u0026rsquo;s weighted more towards the former in the months ahead 🙂\n","date":"2016-10-30T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dissecting-dynamics-365-enterprise/","title":"Dissecting Dynamics 365 Enterprise"},{"content":"Change control and management are important considerations for any IT system - and CRM is no different in this respect. Business processes are often subject to change at the drop of a hat, and organisations should ensure that they have robust, effective and, most of all, efficient processes in place for change management. Often these changes may necessitate the removal of a particular aspect of a system - in CRM\u0026rsquo;s case, this could be a Form, a View or even an Entity field. You may often just decide to take the \u0026ldquo;easy\u0026rdquo; way out and not remove these components at all, choosing instead to hide or obscure them. Whilst this is fine in the immediate to short term, you are storing up problems long-term if you do not have a robust process in place to ensure these unnecessary or legacy components are eventually removed. The problem is, though, depending on your customisation deployment method - either as part of a managed or unmanaged solution - your options in this regard could be hampered. It is, therefore, important to be aware of what the potential limitations are of both types of solution so that you can structure your customisations in the most appropriate way. In this week\u0026rsquo;s blog post, we will take a closer look at some of the for and against arguments when it comes to removing solution components from your production CRM system, the behaviour of solution files and how they can assist and even impede a change management process.\nSo why should you ensure that unnecessary/legacy components are removed from your Production system? And is there a case to not remove them at all?\nComponents can potentially take up unnecessary space within your solution, leading to delays in performing solution updates. 
If you also have entity fields in your CRM that are no longer in use but are still storing data, then this could have adverse effects on your database storage levels - an important and essential consideration for CRM Online deployments in particular. Clarity and simplicity are hallmarks of a well managed and maintained system. Being in a situation where you have components in your system, that could easily be interpreted as being still in use or active, could lead to hours, if not days, of confusion and wasted time. Clearly, there are practical, if not somewhat idealistic, arguments in favour of the above. So what are the arguments against?\nRemoving a component almost straightaway could present problems if, for example, it turns out that the reasons for its removal were mistaken. You would potentially create more work for yourself in having to re-create a particular customisation when keeping it could have saved considerable effort and time. The above can be compounded further if it turns out that crucial business information was stored in, for example, a field that is deleted. Keeping the field intact can ensure that these potential situations are avoided. Let\u0026rsquo;s see now things look in practice when we attempt to emulate a change process within CRM\nFor testing purposes, we have created a custom entity - Test Solution Entity - which contains 2 custom Forms, Views and Fields:\nThis entity has been moved into an unmanaged solution, which will be exported as unmanaged and managed and then deployed into a separate CRM environment. We will then observe what happens when we push out an update to the solution, that has had certain components removed - in this case, the following components:\nTest Form 1 Test View 1 Test Date Field Unmanaged\nUpdating an unmanaged solution will do nothing to existing components that have been deleted - even if you specifically delete them from the solution in your development system. Therefore, as a best practice, any component that you choose to specifically delete from your unmanaged solution will need to be noted down and included as part of your release notes for your solution update. Once the update has been completed, you will then need to go into your production system and proceed with removing these components. Regardless of the type of customisation, you should encounter no problems deleting them - in most cases, all required dependencies for the component will have been removed as part of your solution update and the components themselves will be in an unmanaged state, meaning you are unrestricted in what you can do with them.\nManaged\nYou may assume that a Managed solution update would differ from an unmanaged in behaviour. In fact, for this example at least, when we import our updated managed solution, then the components we deleted are persisted in the solution. What\u0026rsquo;s worse, because these components are in a managed state, the steps involved in removing them may be complicated significantly. 
Fortunately, for Forms and Views, there is a Managed Property that can be configured to allow us to delete a Form/View if it is in a managed state:\nThese settings always default to True, so you do not need to specifically remember to set them.\nThere is some bad news, however - there is no such setting available for entity fields:\nIn addition, the following customisation components can not be set to Can be Deleted whilst in a managed state:\nEntity Relationships Business Rules Global Option Sets Web Resources Processes (Workflows, Dialogs etc.) Reports Connection Roles Templates (Article, Email etc.) Security Roles This can present a problem over time if we assume that over the lifecycle of your solution, you need to remove redundant fields - these will be maintained and will only be removed if you choose to completely uninstall and re-install your solution. Depending on the nature of your solution, this could cause the following problems:\nUninstalling the solution will delete all entity records from the system, as all components in the solution will be deleted. There could be significant time and effort involved in the re-installation process - most likely this will need to be done out of hours, given that everything within your solution will not be available for the duration of the re-install. If you have other unmanaged/managed solutions that have dependencies on components within your managed solution, then this could cause issues with these customisations; and could even mean that you have to re-install these solutions as well. Conclusions or Wot I Think\nThe above examples have looked at situations purely where you are making bespoke customisations to CRM. Sometimes deleting components may not be possible at all, particularly if you are using \u0026ldquo;out of the box\u0026rdquo; (OOB) components included with CRM. In these cases, you have no choice but to obfuscate these parts of the CRM systems as part of the customisations you make to the system - either through a well-defined security structure or by simply not exposing such elements of the system via the sitemap, for example. Putting OOB components aside, if we are to look at purely bespoke customisation elements of a CRM deployment, I would argue that striking a balance between the two extremes - leaving components that are no longer required in your system compared with deleting them at the first available opportunity - is often the best way to proceed. For example, having an internal process in place that ensures removal requests are signed off by a senior member of the business or by having a \u0026ldquo;grace period\u0026rdquo;, where customisations are flagged for deletion but not actioned until a set amount of time has expired. Tied up as part of this, you can very straightforwardly perform backups of your CRM data via the Export to Excel feature. Given how easy and accessible this feature is, there really is no excuse not to perform backups of fields that you intend to delete; then, if the worse happens and you need to restore customisations at a later date (which, on a separate note, should be straightforward so long as you keeping regular backups of your CRM solution files!), you can very quickly restore the data within these fields.\nNo matter which approach you take, I would argue that it is ultimately preferable to ensure that your CRM solutions are kept in a tidy, current and clear state at all times. 
You are doing a huge disservice to your current and future colleagues within your business by not following this mantra. In the process as well, you take a rather cavalier approach to what I would hope would be(!) one of your businesses most important assets.\n","date":"2016-10-23T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/managing-changes-in-your-crm-production-environment-managed-vs-unmanaged-solutions/","title":"Managing Changes in your CRM Production Environment: Managed vs. Unmanaged Solutions"},{"content":"Did you know that the CRM SDK contains a WebsiteCopy tool, that can be used to backup/migrate your CRM portal website? I was surprised actually, as there is no page on the CRM Setup \u0026amp; Administration website that refers to it. When I initially started looking more closely at the CRM portals release as part of the Spring Wave update, one of my first questions was \u0026ldquo;OK, portals are great! But how can I deploy my development portal site out to a production system when it\u0026rsquo;s ready?\u0026rdquo;. As a best practice approach, you will always want to ensure that you have distinctly separate development and production environments for any system that your business is using, and portals are no different in this regard. At first, I was concerned that there did not appear to be any \u0026ldquo;supported\u0026rdquo; mechanism for migrating development portal content into production environments; utilising the WebsiteCopy tool helps to overcome this issue and saves you from having to manually re-create records within your production CRM environment. Let\u0026rsquo;s take a closer look at how to get the tool, use it in practice and also review some of the supported scenarios that it can potentially assist with.\nHow to obtain the WebsiteCopy tool In a rather counter-intuitive step, you will need to download the Dynamics CRM 2015 SDK. This is because the tool is not available whatsoever within the 2016 SDK. This seems like a rather strange oversight, so I would expect this to eventually be addressed as part of a future SDK release. Indeed, even the official MSDN page for this tool is listed as being only applicable to CRM 2011, 2013 \u0026amp; 2015. The 2015 SDK can be downloaded from here. Once you\u0026rsquo;ve got it and extracted it successfully, the tool can be found in the Bin folder in the root directory:\nIt is also worth pointing out that this tool is an exact copy of the Website Copy Tool provided by Adxstudio, which is obtainable via an installation of Adxstudio portals. Users of this tool should, therefore, face no challenge using the SDK version of the tool.\nUsing the tool Due to simplistic nature of the wizard tool (and the fact that there are already well-documented walkthroughs available for both import and export scenarios), I will not go into detail documenting the entire process from start to finish. 
However, it is worthwhile pointing out the following:\nThere appears to be a bug on the Connect to Server screen where, after specifying your credentials and hitting Enter on your keyboard, the application thinks that you are pressing the \u0026lsquo;Go\u0026rsquo; button as opposed to \u0026lsquo;Next\u0026rsquo;; clearing the credentials you have entered in the process and essentially going back a step: It took me a good half an hour or so before I figured this out, so make sure you hit the correct button!\nWhen prompted to provide a Name value when importing your portal, the value can be anything you want it to be - the Website record\u0026rsquo;s Name field will be populated with this value in CRM on import. I would generally recommend exporting to XML first, and then running the wizard again to import your newly created XML That way, you can double-check to ensure that you have run the initial export correctly and obtain a backup of your entire website in the process. Exporting the cmd script at the end of the wizard is recommended if you intend to run the same export/import process frequently. For example, if you are running daily backups of your in-development website, then running the script instead of the wizard each day can save you some time. Be prepared to put the kettle on as the import/export process can take some time. Now that the website and all associated records are in CRM, how do you set this as your live website? You will need to go to the configuration page for your CRM portal, and change your website record to point to your newly imported website. This is the same page you see when you first setup your portal, and can be accessed from the CRM Online Administration Center -\u0026gt; Applications and then Manage from your Portal Add-On subscription:\nThen select your newly imported website from the Update Portal Binding drop-down:\nNote that your changes may not take effect immediately after saving and that you will likely need to attempt the old \u0026ldquo;turning it off and back on again\u0026rdquo; trick using the Change Portal State dropdown:\nAs an additional bonus, the tool can also be used in the following scenarios: Backup an Adxstudio website to a CRM Online portal deployment Backup a CRM Online portal to an Adxstudio website (On-Premise/Online CRM) To prove this, I did a test importing the ghastly looking portal, previously created as part of a previous post on Bootstrap templates. This was originally created using Adxstudio and I was able to successfully import this into CRM portals, in all its horrid glory 😱\nBy covering both of the above scenarios, the tool instantly becomes a lot more powerful and versatile - enabling you to very quickly get your existing Adxstudio website setup as a CRM portal site. It also gives portal developers the flexibility to setup their own development Adxstudio environment, safe in the knowledge that they can straightforwardly migrate these across to CRM portals when they are production ready. Note that the above test does not confirm whether or not bespoke Adxstudio customisations via custom generated ASP.NET pages etc. will definitely migrate across 100% to CRM portals and that the steps involved in changing/modifying the bindings for Adxstudio differ significantly from CRM portals.\nConclusions or Wot I Think CRM portals, as a product offering, is still in its early life cycle stages. As such, it is reasonable to expect that clear and broad documentation that covers the types of scenarios discussed in this blog post (e.g. 
managing CRM portal development environments) is not yet forthcoming and that some trial and error may be involved in figuring what to do (it wouldn\u0026rsquo;t be fun otherwise 🙂). What is good to know is that Microsoft has not made a conscious decision to restrict or remove some of the existing tools available for Adxstudio and that they \u0026ldquo;just work\u0026rdquo; with CRM portals. This is perhaps a rather obvious assumption, given that both products are technically identical. Nevertheless, it is good to know that those who are venturing into CRM portals for the first time can very easily get running with tools, like the WebsiteCopy Tool, when planning, developing and rolling out solutions for businesses who have taken the plunge early on with CRM portals.\n","date":"2016-10-16T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/using-the-websitecopy-tool-with-crm-portals/","title":"Using the WebsiteCopy Tool with CRM Portals"},{"content":"Another big Microsoft conference can only mean another swathe of new announcements relating to Dynamics CRM/Dynamics 365 😀. Admittedly, compared to the bombshells that dropped before and during WPC 2016, there is relatively little this time around that can match the scale and excitement of Dynamics 365. Nevertheless, there are still a number of things that CRM professionals need to keenly take note of and prepare themselves for in the future. I\u0026rsquo;ve gone through all of the major announcements during Ignite, and below is my pick of the most significant and major ones. If anyone thinks I\u0026rsquo;ve missed anything, then please let me know in the comments below!\nA new Personal Assistant for your Sales Team: The Relationship Assistant As reported by VentureBeat, Satya Nadella\u0026rsquo;s keynote speech made a brief reference to a new \u0026ldquo;relationship assistant\u0026rdquo;, that will be incorporated as part of Dynamics CRM/365\u0026rsquo;s mobile experience. Interestingly, it will initially only be offered as part of the Dynamics 365 Sales \u0026ldquo;module\u0026rdquo; only; Microsoft no doubt understand the challenges that organisations can sometimes face when colleagues in the Sales team do not always update their CRM records correctly, so targeting this demographic first will provide an excellent proving ground to determine whether artificial intelligence can overcome this hurdle.\nThe announcement lacks some of the detail you would expect and, as highlighted by Jordan on VentureBeat, the timing is not just mere coincidence:\nThe news is notable because it comes just one week after Salesforce, the biggest player in the CRM business, announced that it was bringing artificial intelligence capability — called Einstein — into its Sales Cloud. Einstein recommendations in Sales Cloud are similar to the cards that appear in Dynamics CRM\u0026rsquo;s \u0026ldquo;relationship assistant.\u0026rdquo;\nOne wonders whether the above was casually slipped into Satya\u0026rsquo;s keynote speech as a warning shot across to Salesforce. It is also unclear whether or not this tool will be released in tandem with Dynamics 365\u0026rsquo;s launch later this year, adding further weight to the argument that this was a last minute addition to the speech. 
Fingers crossed that we will see something more fleshed out sooner rather than later.\nAdobe/Microsoft Partnership \u0026amp; the future of Microsoft Dynamics Marketing (MDM) On first glance, the announcement concerning closer collaboration between Adobe \u0026amp; Microsoft seems rather innocuous, as it mostly concerns Adobe offering some of its more high-profile cloud offerings as part of the Azure/Office 365 \u0026ldquo;family\u0026rdquo; of products. However, this announcement comes with the following bombshell that organisations who use Dynamics Marketing, or are contemplating adopting it in the near future, need to urgently take note of:\nMicrosoft will make Adobe Marketing Cloud its preferred marketing service for Dynamics 365 Enterprise edition, giving customers a powerful, comprehensive marketing service for Microsoft\u0026rsquo;s next generation of intelligent business applications.\nNow I preface this by saying that there is no explicit statement as part of the above that proclaims \u0026ldquo;Dynamics Marketing will no longer be available to buy as an Office 365 subscription\u0026rdquo;, but the writing is clearly on the wall for Dynamics Marketing. This being the case, there are few worthwhile things to point out:\nClearly, not all Microsoft acquisitions are destined for instant or long-term success. Microsoft has clearly weighed up their options and decided that Adobe Marketing Cloud is a safer bet long term compared to Dynamics Marketing. I have taken a look previously at the history behind the MDM product, and the main thing I would highlight from this is that, from the outside, Dynamics Marketing definitely looks similar to CRM; but underneath the hood, there are a number of key and jarring differences that can make it difficult to become a Dynamics Marketing master in a short space of time. Adobe already look to have a Microsoft Dynamics CRM Connector that can be used to link across Marketing Cloud with CRM - having not used the tool myself, I cannot comment on its usability, but the key litmus test for the above announcement is how well this tool operates compared to the existing Dynamics Marketing CRM Connector. This tool I have found to be relatively straightforward and simple to configure and maintain. The success of how Adobe Cloud Marketing integrates with CRM/Dynamics 365 will be pivotal in determining whether this new partnership blossoms or wilts. Adobe presents an interesting choice for partnering, with some of the commentary surrounding the announcement pitching this as an epic battle between Microsoft/Adobe on one side versus Salesforce/Oracle on the other. If asked what Adobe is best known for, generally you would mention one of its many design or productivity products, not their solutions for Marketing. The areas where Adobe Marketing Cloud can win over Dynamics Marketing will be crucial, particularly in terms of campaign automation, email design and lead generation. If Marketing Cloud can take the best of these features from Dynamics Marketing, sprinkled with some of the design \u0026amp; productivity elements that are well-known from their other products, then the product could succeed significantly compared with Dynamics Marketing. 
We are still another month or so away from the official release of Dynamics 365, so I am eagerly awaiting further detail on this key announcement - and in hopefully being able to set up a self-managed trial of Adobe Marketing Cloud and see what it can do in tandem with CRM.\nAnd the rest\u0026hellip; Here\u0026rsquo;s my pick of other interesting announcements made during Ignite, that may have some bearing on CRM/Dynamics 365 in the near future:\nWindows Server 2016 finally has a somewhat definitive release date of \u0026ldquo;mid-October 2016\u0026rdquo;, and it is great to finally see that this will be released before the year is out. I have always found Windows Server 2012 to be somewhat strange to use, thanks in part to Windows 8 inspired Start Menu. Having a desktop experience that is virtually identical to Windows 10 will be a breath of fresh air. A new Server operating system will therefore likely mean that the next major version of On-Premise Dynamics CRM/365 will support this operating system, so getting up to speed with the new Server OS will be essential for those who intend to adopt the next, major version in the near future. There has been no news or confirmation from Microsoft whether existing or previous versions of On-Premise CRM will be updated to support Windows Server 2016, so those looking for a quick upgrade of their Server 2012\u0026rsquo;s should hold off until (and if) this becomes apparent. There was some interesting news and developments relating to SharePoint/OneDrive, that I am really looking forward to. For example, the ability to sync SharePoint libraries as part of the new next-generation client, push notifications to mobile devices and the ability to download multiple files to a .zip file. Because SharePoint, and now OneDrive, can be integrated closely within CRM, any developments that improve the experience and usability of working with your CRM documents will hopefully drive organisations towards utilising these services for their CRM document management, as opposed to just using the \u0026ldquo;out of the box\u0026rdquo; solution. The rebranding of Enterprise Mobility Suite, and the introduction of the new EMS E5 plan, can only mean good things for Microsoft\u0026rsquo;s cloud services generally, and in particular CRM Online; organisations can very quickly and confidently set up advanced and highly secure environments for their online identities, and in the process take full advantage of the full suite of Microsoft\u0026rsquo;s cloud offerings within Office 365 and, eventually, Dynamics 365. ","date":"2016-10-09T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/lighting-a-fire-under-dynamics-365-a-post-ignite-2016-review/","title":"Lighting a fire under Dynamics 365? A Post Ignite 2016 Review"},{"content":"The good thing about working with CRM currently? The product is really starting to come into its element, as with each new release, more functionality is introduced that develops CRM further and helps plug the gap between some of Microsoft\u0026rsquo;s major competitors in the CRM/ERP space. The downside of this is that you constantly have to keep up to date with everything that is happening! News of acquisitions, the introduction of additional products/features for CRM and even complete overhaul announcements à la Dynamics 365 seem to be happening daily now; if you blink, then you will almost certainly miss something. 
Acquisitions present the most opportunity for the CRM product to move forward, enabling Microsoft to very quickly take a product that provides a specific function or feature and rapidly integrate it with CRM. In this week\u0026rsquo;s blog post, we will take a look back at 6 acquisitions made by Microsoft that were directly focused towards, influenced by or had a lasting impression on Dynamics CRM. This list is by no means exhaustive but merely represents what I think are the most significant acquisitions to date - with no doubt more to come in future. 🙂\nYammer Originally founded in 2008, Yammer is a social network for businesses, that encourages closer collaboration with colleagues and provides a social networking experience that should very quickly feel familiar to new users of the application. Microsoft announced its acquisition of Yammer on June 25th, 2012, with Microsoft and Steve Ballmer providing definitive indications of where they saw the future direction of Yammer as part of its budding cloud portfolio:\n\u0026ldquo;Yammer adds a best-in-class enterprise social networking service to Microsoft\u0026rsquo;s growing portfolio of complementary cloud services.\u0026rdquo;\nWhilst it is arguable that Microsoft\u0026rsquo;s primary motivation for acquiring Yammer was Dynamics CRM, it is worth a mention purely for the fact that Yammer was rather speedily integrated with Dynamics CRM 2011 Online as part of one of the last major updates to CRM 2011. Since then, Yammer integration has been a staple feature of Dynamics CRM, allowing administrators to replace their default social feeds with Yammer groups. I am a huge fan of Yammer and really rate the benefits that it can deliver to businesses who wish to increase employee engagement.\nMicrosoft Dynamics Marketing (MarketingPilot 2012) Whereas the motivations for Yammer\u0026rsquo;s acquisitions were mixed, the fact that MarketingPilot\u0026rsquo;s acquisition was announced by then Corporate Vice President of Dynamics CRM, Bob Stutz, gives you all the indication you need. On the date of this acquisition (17th October 2012), you would be forgiven for not knowing just who MarketingPilot were, particularly if you lived outside of the US. MarketingPilot was originally touted as offering solutions for two scenarios - Integrated Marketing Management \u0026amp; Advertising Agency Management - with a feature list that should look all too familiar to those who have worked closely with Dynamics Marketing.\nInterestingly, Microsoft did not disclose the amount that MarketingPilot was sold for, whereas the full $1.2 billion cash price was mentioned prominently for Yammer. Since its acquisition, MarketingPilot has been rebranded and placed firmly within the Dynamics range of products and has had an interesting transformational journey. Whilst they were largely successful in getting the \u0026ldquo;front\u0026rdquo; of the application looking similar to how Dynamics 2013 and future versions operate, you can find many clues of its former status as a separate product as you begin to dig deeper within the application. 
Nevertheless, Microsoft has done a good job providing straightforward and easy integration between Dynamics Marketing \u0026amp; CRM, providing organisations with a fully supported email marketing solution that offers functionality not present within CRM; the most important of these being, for example, the ability to send HTML emails.\nSocial Engagement AKA Social Listening AKA NetBreeze 2013 This next one wins the award for the most number of product name changes I think 😀 What is also noteworthy about this acquisition is that it is the first on our list to involve a company based outside of the US, as Netbreeze Gmbh was a Swiss company. And, as before, Bob Stutz making the announcement regarding the acquisition on the 19th March 2013 tells us everything we needed to know at the time in regards to the motivation and direction of Microsoft. Indeed, as Bob said at the time:\nThe way we will make this technology available makes this acquisition even more compelling. We are going to provide access to this rich data across your marketing, sales and service teams so that each and every person using your CRM system can have this consumer insight at their fingertips. Delivering these capabilities is a critical part of our social strategy.\nSocial Engagement is a product that I have not had much experience using on a day-to-day basis. What is most striking about the product is the speed in which it has been tightly integrated as part of official Dynamics CRM curriculum. Anyone who harbours ambitions towards obtaining either MB2-710 or MB2-706 in the near future will almost certainly need to have more than just a passing awareness of what the products are, as candidates are expected to know how Social Engagement integrates with CRM and provide more than just a basic summary of the product.\nFieldOne FieldOne presents an excellent fit for CRM, both in terms of expanding case management functionality beyond typical service desk environments and providing an enhanced mechanism of managing teams, qualifications, and diaries across large organisations. So it was no surprise last year that this became the latest product to be acquired - and to be specifically earmarked for inclusion within Dynamics CRM:\n[FieldOne\u0026rsquo;s] built on Microsoft technology for fast integration\u0026hellip;and has cross-platform capabilities meaning it can work on different devices enhancing the mobile experience which is so critically important in field service management. FieldOne was built from the ground up to leverage Dynamics CRM, and this means that our customers can take advantage of its capabilities right away.\nWhat I most like about the product is its ability to visualise schedules/job boards via an interactive map and its integration with mobile app push notification/SMS message, for both customers and colleagues. Microsoft still look to be getting their house in order, with respect to how this product fits in within CRM - although it is available to organisations to begin using, it is likely that we\u0026rsquo;ll see a major push forward as part of the Dynamics 365 release later on this year and with some new, dedicated curriculum/certification exams available for professionals to (rapidly!) get to up to speed with the product.\nAdxstudio On September 28th last year, Microsoft announced it had acquired the technology and product offerings of Adxstudio. 
At the time, there were already a number of products purporting to offer self-service portals integrated tightly with CRM; the acquisition is not surprising, therefore, considering such clear demand for a product offering like this. In addition, Microsoft\u0026rsquo;s choice of Adxstudio presents a safe bet, given that the product\u0026rsquo;s foundation is rooted firmly within the \u0026ldquo;Microsoft stack\u0026rdquo;. What is most noteworthy to mention as part of this is the speed with which Microsoft \u0026ldquo;changed the window dressing\u0026rdquo;; as part of the CRM 2016 Spring Wave release, Microsoft began offering Portals as a paid add-on to Dynamics CRM Online subscriptions. Perhaps due to the rather speedy nature of this rollout, the experience and available features differ significantly from what you are able to do with AdxStudio. For example, you have no access to the backend portal site (which would be crucial if, for example, you wish to customise the portal sign-in page) and are limited to customising a set of pre-defined portal templates that cover typical scenarios for businesses. As part of the next major update to CRM with the release of Dynamics 365, we should expect to see a number of new features added to portals that will hopefully address some of these issues and bridge the gap accordingly.\nLinkedIn We save the best until last 😉 Although there has been very scant information relating to Microsoft\u0026rsquo;s most recent acquisition, Satya Nadella\u0026rsquo;s open letter to Microsoft employees gives us a flavour of Microsoft\u0026rsquo;s plans for the acquisition:\nAlong with the new growth in our\u0026hellip;Dynamics businesses this deal is key to our bold ambition to reinvent productivity and business processes. Think about it: How people find jobs, build skills, sell, market and get work done and ultimately find success requires a connected professional world.\nIt requires a vibrant network that brings together a professional\u0026rsquo;s information in LinkedIn\u0026rsquo;s public network with the information in Office 365 and Dynamics.\nThere is clearly a significant correlation between this deal and the recently announced Dynamics 365. Expect to see LinkedIn form a major part of the new suite of products announced as part of this, either when Dynamics 365 is released or early next year.\n","date":"2016-10-02T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/the-ever-changing-face-of-crm-6-dynamics-crm-focused-acquisitions/","title":"The Ever Changing Face of CRM: 6 Dynamics CRM Focused Acquisitions"},{"content":"Whenever you create a new Entity within CRM, you generally find yourself going through the same rhythm, both in terms of the general approach from a planning perspective and then the setup involved just after you hit \u0026lsquo;Save\u0026rsquo; when creating the new Entity. Whilst often somewhat laborious, these steps are essential in ensuring that your Entity is configured in a professional and clear way. Anyone who works often within the Customizations area within CRM should have a well-defined \u0026ldquo;checklist\u0026rdquo; of items that are worked through as a minimum after an Entity is created. In this week\u0026rsquo;s blog post, we will focus on what I think are the most important things you have to look at doing once a new Entity has been created.
Note that the below checklist should not be taken as \u0026ldquo;gospel\u0026rdquo; for what you should be doing, and there are a few things below that are more personal preference based as opposed to being absolutely required. I would be interested to hear if anyone has their own checklist items they follow when creating new Entities in the comments below, so please feel free to share 🙂\nDecide Options for Entity You have a number of different options you can specify on Entity creation, which facilitates additional functionality such as duplicate detection, support for Business process flows etc:\nSince a lot of these options are irreversible once you select them, it is generally better not to add them in when your Entity is first created, unless you are absolutely sure you need them. Enabling them from the outset will potentially add a number of additional attributes, which may impact on performance long-term. As a minimum, however, I would suggest that the Duplicate detection option is always selected. This is because you will almost always want to configure some kind of duplicate detection rules on your data, in order to ensure your CRM does not get clogged up with useless or unnecessary data.\nGive everything within your Entity a clear and meaningful description Regardless of whether it\u0026rsquo;s a form, attribute or a view, you should always ensure that a Description field is supplied with an appropriate 1 or 2 lines of text that describes what it is and what it is doing. You should never assume that it is obvious what a particular field is doing, and you are potentially doing a disservice to yourself, colleagues and your end-users by not taking the time to complete this step. My preference is always to be borderline obsessional in this regard, as past experience has taught me the invaluable nature of clear documentation and how my extra effort at the outset can potentially save hours for someone down the line.\nThe Curious Case of the Name field Every custom Entity needs to have a Name, or Primary, field, which is specified on creation. You are limited in how this is customised, which can lead to trouble deciding how best to utilise it:\nWhere possible, you should customise the Name field so that is storing some required value on your new Entity. For example, if you have a new Event Entity, then you can use the Name field as the Event Name field. If it\u0026rsquo;s the case that you decide not to use the Name field at all or are unable to find a suitable use for it, then you will still need to ensure that this is populated with a value of some kind. The reason? On the record page itself, CRM will populate the field with a rather undescriptive and generic value:\nNow you could just force it so that end users populate this field with a value, but this in my view creates unnecessary data inputting steps and could lead to significant discrepancies within the database on how this data looks. It is better to create a workflow that populates this with a consistent value, that references other fields that will always contain data of some kind. 
The benefit of this route is that, if these fields are ever changed on the record, you can configure your workflow to run again to update the Name field accordingly.\nRename the Main form and update the description I find it really annoying and strange that the default form created for new entities is called \u0026lsquo;Information\u0026rsquo; as opposed to just \u0026lsquo;Main\u0026rsquo;, so I generally always rename this accordingly, updating the description of the form as well in the process.\nTidy up the default System Views This step will generally need to be done after you have created all of your Entity attributes. When you first create your Entity, CRM will automatically create the following views for you:\nTwo default public views - one that shows all Active records and another that shows Inactive records. An Advanced Find View - This is used whenever a user clicks the Create Personal View button or selects [new] from Advanced Find when creating a new view. Essentially, it is used as a default template to assist users when creating new views. An Associated View - Shown when the user clicks on the related records button on the sitemap from a form. A Lookup View - Shown when the user clicks on the \u0026lsquo;Look Up More Records\u0026rsquo; button on a lookup field A Quick Find View - This view is displayed when a user performs a Quick Search function from the Entity view page. The important thing to remember with this is that only the fields that are specified under Add Find Columns will be searchable via a Quick Search, so you will almost certainly need to customise this at some stage. The views are always populated with the Name field and the Created On field. You will need to ensure that all of these views are modified so they contain the information that is most pertinent about your new Entity. If you are not using the Name field at all, then removing it from these views would be sensible, to ensure that end-users are not confused by a list of Entity records with a blank value being displayed.\nUpdate your Security Roles accordingly This can be sometimes easy to overlook, but unless you have configured your organisation\u0026rsquo;s security roles to include all relevant permissions, then your users won\u0026rsquo;t even be able to see, let alone use, your new Entity! So you should always ensure that your organisation\u0026rsquo;s security roles are updated to reflect the data access requirements of your business.\nConclusions or Wot I Think Having a consistent approach when it comes to designing a CRM system is important for a number of reasons. It helps to inform and shape best practice approaches within a business, enables you to ensure a minimum level of quality as part of your solution and is essential when working as part of a team to ensure that team members are able to easily go into a solution and understand how everything operates. 
By combining past experience with well-understood best practice approaches, it can be very straightforward to put together a set of items that need to be checked off as part of creating a new Entity, giving everyone within a team the confidence and surety that they are customising CRM in the correct manner.\n","date":"2016-09-25T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/the-new-entity-checklist-6-things-to-do-after-creating-a-new-entity/","title":"The New Entity Checklist: 6 Things To Do After Creating a New Entity"},{"content":"I have been really excited recently, as I have begun working more closely with ADXstudio \u0026amp; CRM Portals. This was perhaps one of the more exciting new features introduced as part of the CRM 2016 Spring Wave earlier this year and presents a major step forward for the CRM product. Now, businesses can start to leverage their CRM and its data in developing professional, accessible, customer and/or partner facing websites that integrate natively with their existing CRM system and processes.\nOne of the first challenges when getting to grips with portals is how you go about changing the OOB design of your site - for example, how do you modify the colour for a background or a particular button? Both products utilise Bootstrap to enable portal designers to very quickly develop professional looking portals that are customised to suit a particular business\u0026rsquo;s design/theming preferences. In addition to this, as outlined on ADXstudio\u0026rsquo;s website:\nBy using Bootstrap\u0026rsquo;s layout system, it\u0026rsquo;s possible to develop a single site that presents an appropriate interface to all devices your customers might use.\nIn an age where mobile responsiveness is an absolute requirement for website projects, Bootstrap provides a framework that ensures a consistent UI experience is maintained at all times for your end users. But, if you are scratching your head at just how to start working with Bootstrap (like I recently was!), it can be a major hurdle figuring out how to begin styling your portal. In this week\u0026rsquo;s blog post, I will take a closer look at what Bootstrap is, what \u0026ldquo;shortcuts\u0026rdquo; are out there that can greatly speed up developing your first Bootstrap template and how you go about applying this to your custom portal site:\nWhat is Bootstrap? Bootstrap was originally developed by Twitter but has since become an open-source project, now on its 3rd major version. The Bootstrap website contains a nice little summary of the history of the project and its interesting journey from internal project to open-source release:\nBootstrap was created at Twitter in mid-2010 by @mdo and @fat. Prior to being an open-sourced framework, Bootstrap was known as Twitter Blueprint. A few months into development, Twitter held its first Hack Week and the project exploded as developers of all skill levels jumped in without any external guidance. It served as the style guide for internal tools development at the company for over a year before its public release, and continues to do so today.\nSource: http://getbootstrap.com/about/\nI was actually surprised to learn about its popularity, having not had any previous exposure to it, and its status as the almost de facto standard in website design circles.\nCreating your first Bootstrap Template Microsoft\u0026rsquo;s article on portal theming (replicated from ADXStudio\u0026rsquo;s original article) suggests a few different websites to try in the first instance.
We\u0026rsquo;ll take a closer look at 3 of the websites mentioned here that enable you to customise a Bootstrap template from scratch, and then at an alternative route not mentioned on either website:\nOfficial Bootstrap Customizer Via the official Bootstrap website, you can create your own custom bootstrap template. As pointed out in the above articles, there is no GUI that lets you preview your customised template, which means you have to download and apply the Bootstrap files to your website in order to get a feel for how it looks. If you\u0026rsquo;re feeling particularly masochistic, then this is definitely the route for you 🙂\nBootSwatchr When you first start using BootSwatchr, you warm to it straight away - it provides a WYSIWYG editor, enabling you to very quickly tinker around with the various settings on the default Bootstrap template, and see how they will look in a browser. VERY nice! However, I have encountered some issues using the tool on Internet Explorer/Microsoft Edge, particularly when it comes to the most crucial bit - exporting your BootStrap template .css file using the \u0026lsquo;Get CSS\u0026hellip;\u0026rsquo; button. This is definitely one of the tools you should check out in the first instance, but just be aware that it may not work correctly on your browser of choice.\nBootTheme What is encouraging when you first start using BootTheme is that it looks to be constructed in the same vein as BootSwatchr. However, I have had real trouble figuring out how to use this tool, as it is initially quite daunting figuring out where to start. I think with some dedicated time and learning, this could be a really effective tool to use when building your Bootstrap templates, but it is perhaps not the best beginner\u0026rsquo;s tool.\nPaintStrap/COLOURLovers These tools are the ones that I have used when creating Bootstrap templates and, used in tandem, they are quite effective in creating a Bootstrap template that conforms to specific branding requirements. You start off by creating a ColourLovers account, which lets you then create a \u0026ldquo;colour scheme\u0026rdquo; that can be used in PaintStrap. A \u0026ldquo;colour scheme\u0026rdquo; is essentially a collection of up to 5 different colours that you want to use on your BootStrap, which can be generated very easily using the ColourLovers website. Simply go to Create -\u0026gt; Palette, specify your colours, give it a descriptive name and save onto your profile:\nOnce you\u0026rsquo;ve saved your COLOURLovers template, you then copy across the 6 digit code for the template into the PaintStrap 3-step wizard. This can be found within the URL of your selected template:\nWhich is then entered onto PaintStrap:\nThen, on Step 2, you can start to customise which colours will appear where on your BootStrap - the nice thing being that you are not restricted to just the colours on your template and you can get a partial and full-screen preview of your template as you build it:\nWhen you are happy with your template, click the \u0026lsquo;Generate CSS!\u0026rsquo; button to download your BootStrap .css file. You will want to grab the bootstrap.min.css file; this is exactly the same as the bootstrap.css file but has been slimmed down to remove line breaks, whitespace etc.\nOnce you\u0026rsquo;ve got your Bootstrap, how do you apply it to your portal?
Let\u0026rsquo;s assume we are using the Basic Portal, provided as part of ADXStudio; the steps are the same for CRM Portals, rest assured:\nNavigate to the home page of your portal, ensuring that you are logged in as a Contact that has the Administrators Web Role. On the ADX widget on the top right of your page, click on New -\u0026gt; Child file:\nYou\u0026rsquo;ll be greeted with the Create a new child file screen. Populate the details as follows:\nName: bootstrap.min.css Upload File: Browse and select your bootstrap.min.css file Partial URL: bootstrap.min.css Hidden from Sitemap: Tick the box It should look something like this:\nPress Save and then refresh your browser. Your new bootstrap will be applied to your site; and, because we have configured it on our home page, it will cascade automatically across our portal site:\nLooks\u0026hellip;err\u0026hellip;interesting! I would not recommend or endorse creating a bootstrap that looks like this, but this example provides an excellent illustration of the versatility of Bootstrap.\nGetting up to speed with Bootstrap may look quite daunting initially, but fortunately, there are lots of tools and resources available online that can get you running quickly with BootStrap. These tools can significantly ease your learning journey with ADXstudio/CRM Portals and also allow you to look like a website design whizz in the process 😀\n","date":"2016-09-18T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/getting-started-with-portal-theming-adxstudiocrm-portals/","title":"Getting Started with Portal Theming (ADXStudio/CRM Portals)"},{"content":"One factor that anyone considering CRM Online for their business would need to address is determining the amount of data they intend to bring across into CRM and how they expect this to expand in the long term. CRM Online includes 5GB of database storage for all of your CRM instances on an Office 365 tenant, which is included as part of the base plan (i.e. 5 Professional Licenses). If you run out of storage, there are only two ways in which you can increase this:\nBy purchasing 20 Professional user licenses, you will be granted an additional 2.5GB of storage. This is cumulative, up to a maximum of 50GB (e.g. if you add an additional 20 Professional licenses to your existing 400, then your storage will not increase further) Additional storage can be purchased, at a whopping £6.20 per GB Based on the above costs, a careful analysis needs to be performed to ensure that a migration does not end up costing more than expected, either in the immediate or long term. In any event, administrators will need to keep a careful eye on their database storage levels over time by frequently reviewing the CRM Online Administration Center on their deployment:\nI was recently investigating why a particular CRM Online deployment\u0026rsquo;s storage had ballooned from a rather modest 0.75GB all the way up to nearly 4GB - all over the space of a few months! After doing some further digging, I determined that the Attachment entity was consuming the most data in the CRM instance. This was not spotted readily, due to the rather strange way that this entity works within CRM, which I wanted to highlight and discuss further as part of this week\u0026rsquo;s blog post:\nWhy should administrators regularly check Attachment Entity records? Every time an email is automatically synced into CRM, either with Server-Side Synchronisation or via CRM for Outlook, all the attachments are also brought into CRM.
This will include your typical files (a .pdf document, an Excel worksheet etc.), but may also include images from your email signature or any other content on the email that is not stored in plain text. Over time, and particularly in large deployments, this can start to eat away at your database storage very quickly. Ignore the impact that this entity can have on your CRM deployment at your peril\u0026hellip;\nWhy administrators may struggle to regularly check Attachment Entity records\u0026hellip; The attachment entity cannot be searched using Advanced Find. It\u0026rsquo;s just not there on the list of entities available to query:\nThe only way within the CRM interface to get a rough indication of how many attachment records are stored within the database is by running an Advanced Find Query against the Email entity, filtering to show only email records that have a related attachment record:\n(You can then customise the File Size (Bytes) filter to search for emails where the attachment is over a certain size - then, drill down in the email record to see the actual Attachment records).\nSo, what other options are available to us? You can use FetchXML to query the data on an Online/On-Premise deployment, but you are restricted in what information is returned. Only the following fields will be returned by a FetchXML; even if you specify, for example, the activityid field, it will never be returned in your request:\nmimetype filename attachmentid body filesize So with these restrictions in mind, you can run the below FetchXML query to return all attachment records where the file size is over 10MB (10,485,760 bytes):\n\u0026lt;fetch\u0026gt; \u0026lt;entity name=\u0026#34;attachment\u0026#34; \u0026gt; \u0026lt;attribute name=\u0026#34;attachmentid\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;filename\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;filesize\u0026#34; /\u0026gt; \u0026lt;filter type=\u0026#34;and\u0026#34; \u0026gt; \u0026lt;condition attribute=\u0026#34;filesize\u0026#34; operator=\u0026#34;gt\u0026#34; value=\u0026#34;10485760\u0026#34; /\u0026gt; \u0026lt;/filter\u0026gt; \u0026lt;order attribute=\u0026#34;filesize\u0026#34; descending=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;/entity\u0026gt; \u0026lt;/fetch\u0026gt; To actually delete a record, you would have to locate it manually within CRM - a potentially laborious task. Unfortunately, there is no way to navigate to the record using query parameters appended to your CRM URL, or even to use a Delete request in the SDK to remove the records. If you are running On-Premise CRM, then you could always run this manual query against the CRM organisation database:\nUSE MyCRM_MSCRM GO DELETE FROM dbo.ActivityMimeAttachment WHERE AttachmentID = \u0026#39;B18FBFCA-046D-E611-80CB-00155D02DD0D\u0026#39; I am not sure whether this is a supported way of using the application, so use at your own risk 😉\nBe Proactive: 2 Tips to reduce the Attachment Data Footprint To avoid a situation where you are having to go through the above to delete unnecessary email attachments, there are 2 things you can do to try and stop this entity from swelling up unexpectedly:\nIn Settings -\u0026gt; Administration -\u0026gt; System Settings, on the General tab, you can specify the list of blocked file extensions for attachments.
Out of the box, CRM automatically populates this with a list of the most harmful file types, as a semi-colon delimited list: You are able to freely modify this list to add or remove the file types that you want CRM to automatically strip out. So if, for example, your organisation uses high resolution .png images as part of its email signatures, this may be a good candidate to add to this list. 2. Staying within the System Settings area, but this time jumping across to the Email tab, you have an option where you can specify the maximum file size for all attachments saved in the system:\nBy default, this is set to 5.1MB - I would not recommend increasing this to any large degree, and there may be a case to reduce this further to around the 2-3MB mark.\nConsider all of your options as well - how essential is it that any attachment is stored within CRM in the first place? Certain organisations may use separate email archiving solutions that automatically back up and store all emails sent across an organisation in an archive that can then be accessed by end users and (most importantly) system administrators. Examples of these solutions may include Exchange Online Archiving, MailStore, Mimecast or Metalogix. If you know that your organisation has one of these solutions in place and that emails are being backed up/archived with all of their appropriate attachments, then it may be prudent to block CRM from storing any type of attachment file altogether. On the flip-side, if your organisation does not have such a solution in place, then this can work in reverse - CRM could act as an excellent way of implementing an email archiving-lite solution for your business. Although, I would expect that this is only practical for an On-Premise deployment, where your storage costs will typically be more cost-effective compared to CRM Online.\n","date":"2016-09-11T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/limitations-of-the-attachment-entity-in-dynamics-crm/","title":"Limitations of the Attachment Entity in Dynamics CRM"},{"content":"I was writing some code recently for a CRM plugin that involved working with multiple link entity attributes as part of a QueryExpression. Accomplishing this is relatively straightforward, thanks to some of the code samples provided within the SDK.
For example, the following sample code gives you everything you would need; how to build the query and, more importantly, include the attributes you would need to return as part of the request:\n//Create a query expression specifying the link entity alias and the columns of the link entity that you want to return QueryExpression qe = new QueryExpression(); qe.EntityName = \u0026#34;account\u0026#34;; qe.ColumnSet = new ColumnSet(); qe.ColumnSet.Columns.Add(\u0026#34;name\u0026#34;); qe.LinkEntities.Add(new LinkEntity(\u0026#34;account\u0026#34;, \u0026#34;contact\u0026#34;, \u0026#34;primarycontactid\u0026#34;, \u0026#34;contactid\u0026#34;, JoinOperator.Inner)); qe.LinkEntities[0].Columns.AddColumns(\u0026#34;firstname\u0026#34;, \u0026#34;lastname\u0026#34;); qe.LinkEntities[0].EntityAlias = \u0026#34;primarycontact\u0026#34;; EntityCollection ec = _orgService.RetrieveMultiple(qe); Console.WriteLine(\u0026#34;Retrieved {0} entities\u0026#34;, ec.Entities.Count); foreach (Entity act in ec.Entities) { Console.WriteLine(\u0026#34;account name:\u0026#34; + act[\u0026#34;name\u0026#34;]); Console.WriteLine(\u0026#34;primary contact first name:\u0026#34; + act[\u0026#34;primarycontact.firstname\u0026#34;]); Console.WriteLine(\u0026#34;primary contact last name:\u0026#34; + act[\u0026#34;primarycontact.lastname\u0026#34;]); } Very handy! But there are two problems that you may encounter when attempting to run the code yourself:\nThe code snippet attempts to print out the value of the LinkEntity First Name and Last Name Contact fields to the console. So, for example, when your code retrieves the Fourth Coffee (sample) record, we would expect to see Rene Valdes (sample) printed out. Unfortunately, this doesn\u0026rsquo;t happen, and we are instead greeted with a value that may precipitate some serious head-scratching: The issue lies in how the SDK handles link-entity attributes included as part of a query. Instead of returning the value in the data type we would expect and, despite appending the appropriate namespace within our code to indicate the link entity our attribute is from (in this case, primarycontact), CRM instead gives us the attribute as part of an AliasedValue object. The comforting thing to know is that the attribute value that we may (desperately!) want to grab is in this object somewhere, and we will look at the best way of accessing this later on in this post.\nPutting aside the above issue, let\u0026rsquo;s assume that we weren\u0026rsquo;t working with an AliasedValue and just a standard CRM data type - for example, Single Line of Text/string. Our above code would work fine against all of the sample data records within CRM. But if we add a new Account record and add a Primary Contact record that doesn\u0026rsquo;t have a First Name value, we get an error thrown within our code when it attempts to output this value into the console: Regular readers of the blog may be able to guess what is happening - as part of one of my previous posts looking at the GetAttributeValue method, we looked at the danger of accessing attributes via the Entity class, particularly when these attributes may end up containing blank values. In this particular example, there is another factor at play too - when a QueryExpression attempts to bring back all of the requested attributes, any attribute(s) that do not contain a value in the CRM database (i.e. are NULL) are not returned at all. 
This fact is useful in ensuring that our code can run as optimally as possible, but in this case, is our downfall - because our Entity object has no Contact attribute with a First Name value, no value is available to return, so our code panics and throws an error; for the simple reason that it is attempting to access something that does not exist.\nSo how can we get around both of these issues? First of all, we need to alter the code so that the actual value of the First Name and Last Name fields are displayed correctly. One way of executing this is to explicitly cast our attribute values as AliasedValues, access the inner properties of the object and then cast to string the value of the attribute. An updated version of our code would, therefore, look like this:\n{ Console.WriteLine(\u0026#34;account name:\u0026#34; + act[\u0026#34;name\u0026#34;]); Console.WriteLine(\u0026#34;primary contact first name:\u0026#34; + ((AliasedValue)act[\u0026#34;primarycontact.firstname\u0026#34;]).Value.ToString()); Console.WriteLine(\u0026#34;primary contact last name:\u0026#34; + ((AliasedValue)act[\u0026#34;primarycontact.lastname\u0026#34;]).Value.ToString()); } Running this against our Sample Account/Contact results returns what we expect, which is good 🙂\nBut, unfortunately, does not give us a complete solution to the above as we still hit a \u0026ldquo;The given key was not present in the dictionary.\u0026rdquo; error message as before, for the same reasons:\nThe above method is not satisfactory then, as we would need to include additional lines of code to handle the fact that the attribute value is not present - an if\u0026hellip;else statement or a try\u0026hellip;catch block. If we forget to put this in, then another issue surfaces in bad code potentially seeping into a Production environment - causing errors for end-users and hours of unnecessary debugging.\nOur old friend GetAttributeValue has saved the day for us previously; can it do again? We can try by using the code snippet below - specifying the fact that we are returning an AliasedValue object and then getting the inner attribute Value:\nConsole.WriteLine(\u0026#34;account name:\u0026#34; + act[\u0026#34;name\u0026#34;]); Console.WriteLine(\u0026#34;primary contact first name:\u0026#34; + act.GetAttributeValue\u0026lt;AliasedValue\u0026gt;(\u0026#34;primarycontact.firstname\u0026#34;).Value); Console.WriteLine(\u0026#34;primary contact last name:\u0026#34; + act.GetAttributeValue\u0026lt;AliasedValue\u0026gt;(\u0026#34;primarycontact.lastname\u0026#34;).Value); The code errors again, unfortunately, with a different error message, indicating that a NULL value is attempting to be accessed:\nIn this case, we must rely on our knowledge of C# to save the day, with a few options at our disposal. I\u0026rsquo;ve already suggested two possible options, but another could be considered, which would reduce the number of lines in our code. By using the conditional operator (?:), we can \u0026ldquo;test\u0026rdquo; for a NULL value in our GetAttributeValue and, if a NULL value is present, substitute it for an empty string or a value of our choice. Our final, error-free code, would look like this:\nforeach (Entity act in ec.Entities) { Console.WriteLine(\u0026#34;account name:\u0026#34; + act[\u0026#34;name\u0026#34;]); Console.WriteLine(\u0026#34;primary contact first name:\u0026#34; + (act.GetAttributeValue\u0026lt;AliasedValue\u0026gt;(\u0026#34;primarycontact.firstname\u0026#34;) == null ? 
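// if the aliased value is missing (NULL in CRM), fall back to an empty string; otherwise unwrap its Value property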
\u0026#34;\u0026#34; : act.GetAttributeValue\u0026lt;AliasedValue\u0026gt;(\u0026#34;primarycontact.firstname\u0026#34;).Value)); Console.WriteLine(\u0026#34;primary contact last name:\u0026#34; + (act.GetAttributeValue\u0026lt;AliasedValue\u0026gt;(\u0026#34;primarycontact.lastname\u0026#34;) == null ? \u0026#34;\u0026#34; : act.GetAttributeValue\u0026lt;AliasedValue\u0026gt;(\u0026#34;primarycontact.lastname\u0026#34;).Value)); } Then, just to be sure, we run a quick test and confirm everything works as expected:\nNow we can rejoice that our code is error free and that we\u0026rsquo;ve found another good example of how the GetAttributeValue method should always be used when working with CRM attributes. It is a shame though that the code above, provided by Microsoft as part of the SDK, has such a significant error within it. Hopefully, it will be addressed as part of a future version of the SDK and we should be thankful that we at least have all of the sample code in the SDK in the first place; it gives developers and CRM customisers a great starting point to start developing consistent and supported code.\n","date":"2016-09-04T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/beware-the-aliasedvalue-attribute-dynamics-crm/","title":"Beware the AliasedValue Attribute (Dynamics CRM)"},{"content":"A colleague recently asked me this question, and I\u0026rsquo;ll admit that it took me a few minutes to think about the answer. Learning how to write code that extends functionality within or outside of CRM is not something that you can just pick up from scratch. You usually need to have good experience with coding first, before you can safely venture into writing your first plugin or form level JScript function. Fundamental, and arguably, crucial knowledge of the CRM platform is essential too, as this ensures that you don\u0026rsquo;t put forward solutions that the application can handle natively. In this week\u0026rsquo;s blog post, I will first clarify what CRM Development actually means, before outlining my \u0026ldquo;top tips\u0026rdquo; on how you can develop your skills to become a superstar CRM Developer.\nSo what is CRM Development, and is it the same as CRM Customisation? It\u0026rsquo;s important that we first clarify what the difference is between these two types of activities, as although there is some cross-over, often they are split out into two distinct roles - a CRM Customiser and CRM Developer. Someone who occupies the first of these roles frequently spends the majority of their time working with CRM Solutions and the Customizations area of CRM. Customisers will commonly be involved in the creation of new entities, fields, system views, processes, business rule \u0026amp; workflows, to name a few. As a consequence, they will more often than not have a great deal knowledge of what the platform is capable of and are generally in the best position to offer support and mentoring to colleagues who are struggling with something in CRM (for example, how to create a personal view).\nIn comparison, a CRM Developer may spend very little time working with solutions and customisations; although they will be expected to have a general awareness of what the platform is capable of doing, they will mostly only ever be concerned with modifying plug-ins, plug-in steps and web resources from within solutions. CRM Development instead encompasses a broad canvas of work, all of which is geared towards extending the native functionality of the application. 
An example list may include:\nWriting form level JScript, for scenarios where a Business Rule can\u0026rsquo;t achieve the desired results (we\u0026rsquo;ve already learned the importance of considering Business Rules as a first step option in these scenarios) Developing custom plug-ins in C#/VB.NET to execute at specific trigger points within the underlying database transaction e.g. after a Contact record has been updated. Building custom workflow assemblies in C#/VB.NET to further enhance the options available as part of a workflow or dialog. Setting up custom Web Resources in HTML/Silverlight, that can be embedded within CRM forms or dashboards. A good way of remembering the difference is that CRM Customisation can all be done from within the application itself, whereas CRM Development involves work created outside the application to achieve specific business requirements.\nNow that we\u0026rsquo;ve got that out of the way, here\u0026rsquo;s my top 5 list on how you can learn CRM Development:\nLearn C# First As well as providing you with everything you would ever need from a plug-in/custom workflow development perspective, having a good grasp of C# can make learning JScript a lot easier. Both languages have a lot of similarities, with some important differences worth noting. First, JScript is largely indifferent when it comes to working with data types, whereas C# is very fussy when it comes to declaring and casting your data types correctly. Secondly, whereas C# development work can be assisted via the use of early-bound class files, JScript can be annoyingly unsympathetic when you write code, with errors only cropping up when you attempt to run your code. Putting aside these differences though, being able to say that you have a good grasp of C# on your C.V. can assist greatly when seeking out roles involving CRM, particularly given such roles will be looking for experience of integrating CRM with third party applications; C# is your Swiss army knife in these situations.\nThe SDK is your treasure trove. There are countless code examples \u0026amp; snippets enclosed within the SDK, which cover all of the languages that you would use to extend CRM - JScript, C# and even VB.NET! These are typically in a state where you can easily deploy them to a test CRM environment, execute them and then play them back within Visual Studio via the Plugin Profiler, so you can understand what they are doing. The enclosed help file (which is replicated fully on the MSDN website) is also really detailed in explaining what you can do when developing for CRM. You can download the latest version of the SDK (updated recently for the 2016 Spring Wave) here.\nGet an MSDN Subscription I have extolled the virtues of what an MSDN subscription can provide to Microsoft professionals previously, so I won\u0026rsquo;t cover old ground. What I will highlight from this is that the Imagine Academy, included as part of a subscription, contains nearly all of the courses found on the Dynamics Learning Portal (available to CRM partners as a learning resource \u0026ldquo;hub\u0026rdquo; for all Dynamics products). It also gives you access to a number of important, developer-focused resources that you can add to your arsenal and use to further enhance your knowledge of C#, JScript etc.
If you\u0026rsquo;re fortunate enough to have enough money to obtain an Enterprise MSDN Subscription, or your employer has a few spare licenses, then you will be able to get your hands on a coveted CRM On-Premise license key as well. Working with the application in whatever capacity you can is the best and surest way to learn, as opposed to simply watching videos and reading online articles.\nPass those Exams There are a wide plethora of different CRM exams available to take currently, and it can be quite confusing deciding which ones will benefit you best on your road to become a CRM Developer. I would suggest that the best exams to target a passing mark on would be the following:\nMB2-712: Microsoft Dynamics CRM 2016 Customization and Configuration MB2-701: Extending Microsoft Dynamics CRM 2013 The question you may be asking though is \u0026ldquo;How important are exams, compared with actual work experience?\u0026quot;. I have heard many debates surrounding the importance of certification, on both sides of the argument; one criticism is that they are generally not a good way equipping candidates with the practical, real-life knowledge and experience that ultimately must come to the fore when working with CRM on a day-to-day basis. Another argument against them is that they can sometimes draw you towards focusing on features of a particular application that is either not very good or is done much better by an alternative product. Notwithstanding this, I think exams hold an important place in demonstrating to colleagues and potential recruiters just how serious you are about your career and in ensuring that you keep yourself up-to-date with the appropriate technology - in these modern times, staying off the ball for as little as a month can put you behind! Going back to the original purpose of this post, the curriculum on both of these exams will leave you in a position where you have achieved a good balance of knowledge: both of what CRM, as a platform, is capable of out of the box, and what you can do to develop further solutions for the application.\nAnd finally, believe in yourself This last tip may sound a little bit clichéd, but achieving your desire to become good at CRM Development is something that only you have control over. The journey may be hard, and you will often fail more than you succeed at first; but if you keep working at it, never give up and, most importantly, trust in yourself and your abilities, then you will succeed in increasing your knowledge and expertise in CRM.\n","date":"2016-08-28T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/whats-the-best-way-of-learning-crm-development/","title":"What's the best way of learning CRM Development?"},{"content":"One of the challenges when first working with CRM Plugins is understanding how the execution context works. To briefly summarise, this contains the information relating to how and when the plugin is executed. It can give you answers to some of the questions you may wish to \u0026ldquo;ask\u0026rdquo; your CRM - for example, what record has triggered this plugin? Which CRM User forced the plugin to execute? You may also assume that the context will contain all of the fields of that entity that you can then access. This is not always the case - typically, as part of an Update request, only the fields that triggered the update will be made available. Or consider an Associate/Disassociate request (i.e. when someone relates/unrelates two records in CRM). 
As part of a one-to-many (1:N) relationship, only one field has changed, so that is the only field contained within the execution context.\nThe above being the case, what options do you have if you need to access fields that are not included in the context? You have two choices available to you:\nObtain a reference to the Organisation Service within your plugin, and then use a Retrieve request to return the fields you need. As the context will always contain the GUID for the record that has changed, this is generally the most common way you will see this done. Utilise either a Pre or Post Entity Image of the record, which contains the attributes that you need to access. In the past, I would always end up going down the first route. Recently though, I have been evaluating Entity Images more and more, and have begun to actively use them as part of my plugins. Entity Images are, essentially, snapshots of the CRM record that are stored on the platform either before the database operation has completed or straight after - you can decide which one you want to use, depending on what your plugin is doing. They are configured as part of deploying your plugin to your target CRM instance, and the steps involved are actually quite simple - I think it\u0026rsquo;s generally the case that they are not well-understood or utilised by developers who are learning about CRM for the first time.\nSo why should you go to the extra effort to use them within your plugins? As alluded to above, using Pre Entity Images means you can get a snapshot of your CRM data before the data was changed. This may prove invaluable in cases where you need to perform a before/after comparison on a particular field, and then execute a specific action either way. The only non-entity image way of doing this within CRM would be via a hidden field that stores the current value of the field in question, which is referenced and updated once your business logic has completed. A slightly inelegant solution, and one that is questionable, given that we can utilise Entity Images instead. Having ready access to attributes which may not necessarily be exposed to the plugin when it is executed is particularly valuable, given that this avoids a scenario where you would have to go down option 1). The disadvantages of that first route are the potential for unnecessary delays in your plugin completing and problems for your CRM instance as a whole, if your CRM\u0026rsquo;s web services are already being accessed to breaking point.\nThere must be a catch, surely\u0026hellip; On the whole, it doesn\u0026rsquo;t look like it, but you do need to be aware of a few things. As you may have already guessed, if your plugin runs on the Create message, you won\u0026rsquo;t be able to access any pre-image of the record; likewise, for Delete message plugins, the post-image will not be available. It should be fairly obvious why this is the case. You are also restricted in the number of Messages that support Pre/Post Images. The full list can be found here, but to summarise the most important Message types, only Update, Assign, SetState \u0026amp; Merge support both Image types. Bear in mind too the additional \u0026ldquo;gotchas\u0026rdquo;, which this article touches upon.
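One such gotcha worth guarding against is a step that has been deployed without its image registered; a minimal defensive sketch, assuming the same context, tracingService and \u0026ldquo;Image\u0026rdquo; alias used in the worked example further down this post, might look like this:
// Check the pre-image was actually registered against this step before using it (illustrative sketch only)
Entity preImage = context.PreEntityImages.Contains(\u0026#34;Image\u0026#34;) ? context.PreEntityImages[\u0026#34;Image\u0026#34;] : null;
if (preImage == null)
{
    tracingService.Trace(\u0026#34;Expected pre-image was not found - check the step registration.\u0026#34;);
    return;
}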
If in doubt, then boot up your sandbox CRM environment and cry havoc with your test code.\nLets take a closer look at how Pre and Post Images can be implemented as part of a CRM Plugin\u0026hellip; The below example will compare the Pre and Post Image values of the Lead Company Name field and, if they have changed, send an email message to a Sales Manager user to alert them of this fact. Be sure to add references to the Microsoft.Xrm.Sdk and Microsoft.Crm.Sdk.Proxy .dlls from the SDK:\n//Extract the tracing service for use in debugging sandboxed plug-ins. ITracingService tracingService = localContext.TracingService; tracingService.Trace(\u0026#34;Implemented tracing service succesfully!\u0026#34;); // Obtain the execution context from the service provider. IPluginExecutionContext context = localContext.PluginExecutionContext; // Get a reference to the Organization service. IOrganizationService service = localContext.OrganizationService; if (context.InputParameters.Contains(\u0026#34;Target\u0026#34;)) { //Confirm that Target is actually an Entity if (context.InputParameters[\u0026#34;Target\u0026#34;] is Entity) { Guid _userID = context.InitiatingUserId; //Retrieve the name of the user (used later) Entity user = service.Retrieve(\u0026#34;systemuser\u0026#34;, _userID, new ColumnSet(\u0026#34;fullname\u0026#34;)); string userName = user.GetAttributeValue\u0026lt;string\u0026gt;(\u0026#34;fullname\u0026#34;); Entity lead = (Entity)context.InputParameters[\u0026#34;Target\u0026#34;]; Entity preLead = (Entity)context.PreEntityImages[\u0026#34;Image\u0026#34;]; Entity postLead = (Entity)context.PostEntityImages[\u0026#34;Image\u0026#34;]; string preCompanyName = preLead.GetAttributeValue\u0026lt;string\u0026gt;(\u0026#34;companyname\u0026#34;); string postCompanyName = postLead.GetAttributeValue\u0026lt;string\u0026gt;(\u0026#34;companyname\u0026#34;); tracingService.Trace(\u0026#34;Pre-Company Name: \u0026#34; + preCompanyName + \u0026#34; Post-Company Name: \u0026#34; + postCompanyName); if (preCompanyName != postCompanyName) { tracingService.Trace(\u0026#34;Pre-Company Name does not match Post-Company Name, alerting sales manager...\u0026#34;); //Queue ID for our Sales Manager Guid _salesManagerQueueID = new Guid(\u0026#34;41b22ba9-c866-e611-80c9-00155d02dd0d\u0026#34;); Entity fromParty = new Entity(\u0026#34;activityparty\u0026#34;); Entity toParty = new Entity(\u0026#34;activityparty\u0026#34;); //Email body text is in HTML string emailBody = \u0026#34;\u0026lt;html lang=\u0026#39;en\u0026#39;\u0026gt;\u0026lt;head\u0026gt;\u0026lt;meta charset=\u0026#39;UTF-8\u0026#39;\u0026gt;\u0026lt;/head\u0026gt;\u0026lt;body\u0026gt;\u0026lt;p\u0026gt;Hello,\u0026lt;/p\u0026gt;\u0026lt;p\u0026gt;Please be advised that I have just changed the Company Name of a Lead record in CRM:\u0026lt;/p\u0026gt;\u0026lt;p\u0026gt;Lead Record URL: \u0026lt;a href=\u0026#39;http://mycrm/MyCrmInstance/main.aspx?etn=lead\u0026amp;pagetype=entityrecord\u0026amp;id=%7B\u0026#34; + lead.Id + \u0026#34;%7D\u0026#39;\u0026gt;\u0026#34; + postCompanyName + \u0026#34;\u0026lt;/a\u0026gt;\u0026lt;/p\u0026gt;\u0026lt;p\u0026gt;Old Company Name Value: \u0026#34; + preCompanyName + \u0026#34;\u0026lt;/p\u0026gt;\u0026lt;p\u0026gt;New Company Name Value: \u0026#34; + postCompanyName + \u0026#34;\u0026lt;/p\u0026gt;\u0026lt;p\u0026gt;Kind Regards\u0026lt;/p\u0026gt;\u0026lt;p\u0026gt;\u0026#34; + userName + \u0026#34;\u0026lt;/p\u0026gt;\u0026lt;/body\u0026gt;\u0026lt;/html\u0026gt;\u0026#34;; 
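// Wire up the activity parties for the notification email: sent from the user who made the change, to the Sales Manager queue defined above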
fromParty[\u0026#34;partyid\u0026#34;] = new EntityReference(\u0026#34;systemuser\u0026#34;, _userID); toParty[\u0026#34;partyid\u0026#34;] = new EntityReference(\u0026#34;queue\u0026#34;, _salesManagerQueueID); Entity email = new Entity(\u0026#34;email\u0026#34;); email[\u0026#34;from\u0026#34;] = new Entity[] { fromParty }; email[\u0026#34;to\u0026#34;] = new Entity[] { toParty }; email[\u0026#34;subject\u0026#34;] = \u0026#34;Lead Company Name Changed\u0026#34;; email[\u0026#34;directioncode\u0026#34;] = true; email[\u0026#34;description\u0026#34;] = emailBody; //This bit just creates the e-mail record and gives us the GUID for the new record... Guid _emailID = service.Create(email); tracingService.Trace(\u0026#34;Email record \u0026#34; + _emailID + \u0026#34; succesfully created.\u0026#34;); //...to actually send it, we need to use SendEmailRequest \u0026amp; SendEmailResponse, using the _emailID to reference the record SendEmailRequest sendEmailreq = new SendEmailRequest { EmailId = _emailID, TrackingToken = \u0026#34;\u0026#34;, IssueSend = true }; SendEmailResponse sendEmailResp = (SendEmailResponse)service.Execute(sendEmailreq); tracingService.Trace(\u0026#34;Email record \u0026#34; + _emailID + \u0026#34; queued succesfully.\u0026#34;); } else { tracingService.Trace(\u0026#34;Company Name does not appear to have changed, is this correct?\u0026#34;); return; } tracingService.Trace(\u0026#34;Ending plugin execution.\u0026#34;); } } Once we have written our code and built our solution, we deploy our plugin in the normal way via the Plugin Registration Tool - ensuring that we configure our step to fire on the correct message:\nNext, we need to configure our Pre/Post Images manually. Just because we have referenced them in our code above doesn\u0026rsquo;t mean they will automatically become available to us as our plugin is executed. Fortunately, adding them on is not too difficult and we can do it directly within the Plugin Registration Tool. First, highlight our new plugin step and select Register New Image:\nOne good thing about this is that we can configure both our Pre and Post Images in the one screen. We just confirm that our intended plugin step is selected, tick both of the boxes and ensure that the following details are completed:\nName: Specify a name for this image. This can be anything you want it to be, but I would recommend using the same value as Entity Alias Entity Alias: This is the name that is used in our code above to reference the image. This must match exactly against this in order for the code to work correctly. Parameters: Here you specify which attributes will be made available within the image. By default, all attributes are included, but you should specify only the attributes you need to work with in your plugin. For our example plugin, we only need the Company Name field, so this is the only attribute we will select. Your settings should look something like the below:\nWith everything setup, we can now test our plugin. In this case, I have changed one of the sample data leads to force the plugin to execute:\nWe can then see that our Email record is successfully created, which is able to reference the value of the Company Name field before the change:\nPlaying back the plugin within Visual Studio demonstrates that our Pre and Post images are Entity objects, meaning that we can interact with them the usual way - nothing new to learn or master, which is good 🙂\nConclusions - or Wot I Think Like most things with CRM, Entity Images have a time and place. 
If you have a desire to query additional data concerning record relationship attributes etc. as part of your plugin, then a Retrieve request is still going to be the only way you can accomplish this. There is also some additional administrative overhead required when working with Images - if, for example, you forget to specify your Pre \u0026amp; Post Images when deploying your plugin, then you are going to encounter immediate problems within your CRM. Having said that, I think there is an excellent case to be made for using Entity Images as much as is feasibly possible in your code. A Retrieve request on a particularly busy CRM platform just to return one attribute value could store up problems for your CRM instance over the long haul; having this one value stored as part of an Entity Image seems a much more sensible and logical approach. I am further struck by how useful Entity Images can be if you need to reference data changes before and after a record change - going back to our example above, where we would instead look at creating a custom field within CRM to store this value, having it made available as part of an Entity Image could score a major time saving. Finally, you are not just restricted to using the one Pre/Post Image - you can set up multiple ones that contain different fields from your entity, which are accessed in different parts of your code, depending on your logic. I think developers working with CRM definitely owe it to themselves to check out Entity Images at least once - I think they will be pleasantly surprised at their capabilities and, one would hope, the increased performance of their CRM instance as a consequence of using them!\n","date":"2016-08-21T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/utilising-prepost-entity-images-in-a-dynamics-crm-plugin/","title":"Utilising Pre/Post Entity Images in a Dynamics CRM Plugin"},{"content":"When you first begin working with the Dynamics CRM SDK, there are a lot of specific terms, concepts and methods that need to be grasped firmly. The importance of this is twofold: it enables you to judge the best approach when developing your bespoke solution that fits in with CRM and allows you to gain a very good understanding of the underlying CRM platform. The terms \u0026ldquo;early-bound\u0026rdquo; and \u0026ldquo;late-bound\u0026rdquo; are, arguably, the most important of these terms/concepts to fully understand, so it is useful to first explain what each of these refers to and the benefits \u0026amp; drawbacks of each, before we get into this week\u0026rsquo;s blog post properly:\nEarly Bound Early bound refers to when you are using a generated class file to access all of the customization data for your CRM within your code. The SDK Bin folder contains an executable file called CrmSvcUtil, which can be used to generate this file. There is a great MSDN article that goes through the steps involved as part of this; once you\u0026rsquo;ve got the file, simply add it into your Visual Studio project and Intellisense will automatically detect your entity, relationship etc. names! Suffice to say, going down the early bound route can significantly ease your development work, as your strongly-typed class file will contain everything you would require regarding your CRM entities.
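To make this concrete, here is a minimal, illustrative sketch of what early-bound code might look like - assuming a Contact class generated by CrmSvcUtil and an IOrganizationService instance named service (both names are placeholders for whatever your own project uses):
// Early-bound: the generated Contact class gives compile-time checking and IntelliSense for each attribute
Contact newContact = new Contact
{
    FirstName = \u0026#34;Homer\u0026#34;,
    LastName = \u0026#34;Simpson\u0026#34;
};
Guid newContactId = service.Create(newContact);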
As such, you will be able to ensure that your code is accessing the correct attributes, relationships and other objects within CRM at all times.\nGiven that it makes a developer\u0026rsquo;s job ten times easier, why wouldn\u0026rsquo;t you want to use Early Bound in your code all the time? Your strongly-typed class file is essentially a snapshot of your current CRM instance, at the date and time when you ran the CrmSvcUtil. As soon as someone makes a change within CRM, your class file may no longer be correct and you may encounter major problems when executing your code against your target CRM environment. This problem can be compounded if you are developing an ISV solution that is executed against a variety of environments, where there could be entities/attributes present that your code has no reasonable way to anticipate; in this case, it becomes absolutely essential to consider using late-binding instead within your code.\nLate Bound Late binding is the exact opposite of early binding. To access your CRM entities via the late-bound route, you use the Microsoft.Xrm.Sdk.Entity class to declare the new or existing entity that you wish to work with within your code. Typically, you will need to have your CRM Solution open in front of you when using late-binding, so that you can reference the logical names of your entities \u0026amp; attributes. You are therefore not \u0026ldquo;restricted\u0026rdquo; in the same way that you are with early-binding - you can declare any possible logical name for your CRM objects that you want, which, as mentioned already, is essential if you are blind to the CRM instance(s) in question. There is also a performance benefit to using the late-bound Entity class, depending on the number of records your code is working with. According to the following Microsoft best practice article:\nIn addition, if your custom code works with thousands of entity records, use of the Entity class results in slightly better performance than the early-bound entity types.\nOne of the major drawbacks of using late binding is the increased chance of errors in your code, as you will need to ensure that your logical names are typed correctly; any such mistakes, as we will demonstrate further below, may cause your code to fall over straight away and lead to hours of frustrating debugging to resolve.\nNow, to the heart of the matter\u0026hellip; I recently had a conversation with a developer colleague, who gave me some advice in relation to some plugin code I had written. Within my code, I had attempted to access the value of a Single Line of Text Entity attribute, using late-bound classes, in the following manner:\nstring myAttribute = myEntity[\u0026#34;myattribute\u0026#34;].ToString(); They recommended that I look at using the GetAttributeValue method instead, which, in this case, would be expressed in the following manner:\nstring myAttribute = myEntity.GetAttributeValue\u0026lt;string\u0026gt;(\u0026#34;myattribute\u0026#34;); My colleague elaborated on a few reasons why this approach is preferable (which I will discuss at the end of this post), but I was interested in dissecting this further myself, to see how it works in practice. In my test CRM instance, I created a plugin for the Contact entity that would fire on the Post-Operation event of the Update Message, for the First Name field. 
The plugin very simply accesses the value of this field using the two different approaches, which can be done like so:\n//First, initialize your Contact entity Entity contact = (Entity)context.InputParameters[\u0026#34;Target\u0026#34;]; //Now we can access our attributes - the first way is like this... string contactField1 = contact[\u0026#34;firstname\u0026#34;].ToString(); //The second way... string contactField2 = contact.GetAttributeValue\u0026lt;string\u0026gt;(\u0026#34;firstname\u0026#34;); When we play back the plugin execution in Visual Studio, we can see that this returns our expected values. In this particular example, we have attempted to rename our \u0026ldquo;Homer Simpson\u0026rdquo; Contact record to \u0026ldquo;Bart Simpson\u0026rdquo;:\nSo, at this stage, there does not appear to be a clear benefit of using one snippet over the other. But what happens if we attempt to access an attribute that does not exist? We can test this by adding the following lines of code to our plugin:\n//Next, we test what happens if there is a typo - first, we try GetAttributeValue string contactField3 = contact.GetAttributeValue\u0026lt;string\u0026gt;(\u0026#34;thiswillerror\u0026#34;); //Then, the alternate way string contactField4 = contact[\u0026#34;thiswillerror\u0026#34;].ToString(); When we use our GetAttributeValue approach, our contactField3 returns a null value\u0026hellip; :\nWhereas our contactField4 instantly causes an error, which would also be returned to the user within CRM\u0026hellip;\nWhere possible, we always want to try and prevent an Exception from being passed back to CRM and build in the appropriate error handling within our code, and this was one of the benefits that my colleague highlighted in this approach. So, for example, you can build in an if\u0026hellip;else statement as part of the above example that checks whether the GetAttributeValue is Null and then perform the appropriate action, depending on the result. Having spent time working with the method more closely, I can also now see a clear benefit in relation to code readability - it is definitely more obvious what GetAttributeValue is doing within a code example, compared with the alternative approach, as well as making it obvious what the data type of the attribute is within CRM. Keep in mind however that you still need to ensure that you are using your correct Data Types, both when creating your C# variables and referencing your CRM attributes - for example, you cannot return the contactid field as a string, as this will generate an InvalidCastException.\nTo finish off, I would invite you to look through this excellent article from Guido Preite, that takes a deep-dive look at the GetAttributeValue method in a variety of different scenarios. Definitely one to save to your bookmarks 🙂\n","date":"2016-08-14T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/the-benefits-of-using-getattributevalue-to-access-crm-attributes-late-bound-c/","title":"The Benefits of Using GetAttributeValue to Access CRM Attributes (Late-Bound, C#)"},{"content":"One of the nice things about working with lookup fields on entity forms is the ability to filter the results programmatically via a form level JScript function. The steps for doing this, thankfully, are rather straightforward:\nPrepare a FetchXML filter snippet that applies the type of filtering you want to perform. This can either be written manually, or you can build your filter within Advanced Find, download the FetchXML and then copy + paste the \u0026hellip; node. 
Access the control using the Xrm.Page.getControl object, and then use the addCustomFilter method to apply the filter. Finally, set up your function to fire on the OnLoad event, using the addPreSearch method to actually apply the filter to the lookup control. If you are still scratching your head at the above, then fortunately there is an MSDN article on the subject which provides further guidance, including a handy example snippet that you can modify and apply straight to your CRM instance. What I like most about this method is that its application is simple, but its potential is quite huge. Utilising conditional logic within your JScript function means that you could potentially filter results based on values on the form, via a web service request to another CRM record or based on the value in an external CRM/ERP system. Used correctly and prudently, it can help to make form level data inputting much easier and more context sensitive.\nCRM has a number of special field types that are reserved for certain system entities. One example is the Customer Data Type field. This is, in essence, a special kind of lookup control that spans two entities - Account \u0026amp; Contact. System customisers are able to create additional fields with this data type, but are unable to create a customer lookup control that spans two entities of their own choice. This is a real shame, as I think this feature would be really welcome and widely applicable to a number of different business scenarios.\nI was recently working with this particular field on the Contact entity in order to meet a specific requirement as part of a solution - for only certain Account records to appear and for no Contact records to be made available to select. One potential work-around would be to just create a new 1:N relationship between Account:Contact, but this seems rather unnecessary when there is already a relationship in place that could be modified slightly in order to suit requirements. I was therefore interested in finding out whether the addCustomFilter/addPreSearch methods could be utilised in this case. After a small, yet sustained, period of severe head-banging, I was able to get this working; although the solution is arguably less than ideal.\nFor the above requirement, let\u0026rsquo;s assume we have a custom field on our Account entity - Company Type - which we are using to indicate what type of UK company an Account is:\nOnce created, we then populate our CRM sample Account records with the value of \u0026lsquo;Limited Company (Ltd.)\u0026rsquo; and run a quick Advanced Find to confirm they return OK:\nNow, for this example, we want to apply a very basic filter on the Customer field on our Contact entity to only show Account records that equal \u0026lsquo;Limited Company (Ltd.)\u0026rsquo;. 
Here is our \u0026ldquo;first attempt\u0026rdquo; JScript function(s) to achieve this:\nfunction onLoad() { Xrm.Page.getControl(\u0026#34;parentcustomerid\u0026#34;).addPreSearch(modifyCustomerLookupField); } function modifyCustomerLookupField() { fetchXML = \u0026#34;\u0026lt;filter type=\u0026#39;and\u0026#39;\u0026gt;\u0026lt;condition attribute=\u0026#39;jg_companytype\u0026#39; operator=\u0026#39;eq\u0026#39; value=\u0026#39;140250000\u0026#39; /\u0026gt;\u0026lt;/filter\u0026gt;\u0026#34;; Xrm.Page.getControl(\u0026#34;parentcustomerid\u0026#34;).addCustomFilter(fetchXML); } After creating a JScript Library, adding it to a form, calling the onLoad() function as part of the OnLoad event and, finally (whew!), pressing the magnifying class on the control, we immediately get an error message:\nHere\u0026rsquo;s the error message:\nAs a next step, to try and resolve this, I remembered within the FetchXml \u0026hellip; node, you can specify the name of the entity that you want to filter on. This is typically required when you are using in order to join together multiple records. So I modified my JScript function to include the entity name within the fetchXML filter:\nfunction onLoad() { Xrm.Page.getControl(\u0026#34;parentcustomerid\u0026#34;).addPreSearch(modifyCustomerLookupField); } function modifyCustomerLookupField() { fetchXML = \u0026#34;\u0026lt;filter type=\u0026#39;and\u0026#39;\u0026gt;\u0026lt;condition entityname=\u0026#39;account\u0026#39; attribute=\u0026#39;jg_companytype\u0026#39; operator=\u0026#39;eq\u0026#39; value=\u0026#39;140250000\u0026#39; /\u0026gt;\u0026lt;/filter\u0026gt;\u0026#34;; Xrm.Page.getControl(\u0026#34;parentcustomerid\u0026#34;).addCustomFilter(fetchXML); } With fingers crossed, I then tried again\u0026hellip;but still got an error message, but a slightly different one. Some progress at least!\nOnce my head had returned from the desk in front of me, I looked back at the original error message, in particular the fact that the Contact entity does not contain the new field we have on our Account. This is to be expected, but is there some way we can \u0026ldquo;fool\u0026rdquo; CRM by also having a field with the same name on Contact, but which is not used for anything? Going back into my solution, I created a field with the exact same logical name on the Contact entity, making it clear that this field should not be used. The data type, description etc. does not matter, so long as the logical name is the same as our field above:\nGoing back to our form and trying again, we see that this looks to have done the trick - our three records are now returning successfully when we use the control! 🙂\nAnother good thing about this is that the custom filter will also apply when the user clicks on \u0026lsquo;Look Up More Records\u0026rsquo;; although the Contact entity can still be selected, no records will be returned for obvious reasons.\nI was glad this was ultimately possible to get working; however, to play devil\u0026rsquo;s advocate for a few moments, it is worth noting the following if you choose to set this up within your own environment:\nSimply creating a new 1:N relationship between Account:Contact would be a more straightforward and less technical approach of achieving the above; you should review your requirements carefully and consider whether this approach would satisfy your objectives. 
By adding form level JScript to this entity, you may start to impact your form load times, particularly if you have lots of other functions running on the form or if you implement additional logic into your function. It is arguably not best practice to have a field within CRM that is being used in a \u0026ldquo;hacky\u0026rdquo; manner, as outlined above. If you are happy to have such a field within your CRM environment, then you will need to ensure that you take a number of steps to clearly label this field and its purpose within the CRM. The last thing you want is for someone to delete it by accident. If anyone else has found a better or alternative way of achieving the above, please let me know in the comments below!\n","date":"2016-08-07T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/resolving-link-entity-with-name-or-alias-account-is-not-found-error-when-using-xrm-page-getcontrol-addpresearch-jscript-function/","title":"Resolving 'Link entity with name or alias account is not found' Error When Using Xrm.Page.getControl().addPreSearch JScript Function"},{"content":"The ability to implement trace logging within CRM plug-ins has been around since CRM 2011, so it\u0026rsquo;s something that CRM developers should be well aware of. Writing to the trace log is useful for when a plug-in has failed or hit an exception, as the ErrorDetails.txt file (available to download from the error message box window) will contain a list of everything that has been written to the log up to that point. One issue with this is that, if a user encounters an error and does not choose to download this file, then this information is lost - not so much of an issue if the exception can be reproduced, but this may not always be the case.\nFor those who are now working on CRM Online 2015 Update 1 or CRM 2016, a new feature has been added which expands on this further - the Plug-in Trace Log. Now, plug-in exceptions can be configured to write to a new system entity, containing full details of the exception, that can be accessed at any time in order to support retroactive debugging. The introduction of this feature means now is the best time to start using trace logging within your plugins, if you are not already. This week\u0026rsquo;s blog post will take a look at the feature in more detail, assessing its pros \u0026amp; cons and providing an example of how it works in practice.\nSo, before we begin, why would you want to implement tracing in the first place? Using tracing as part of CRM Online deployments makes sense, given that your options from a debugging point of view are restricted compared to an On-Premise deployment. It is also, potentially, a lot more straightforward than using the Plug-in Registration Tool to debug your plugins after the event, particularly if you do not have ready access to the SDK or to Visual Studio. Tracing is also useful in providing a degree of debugging from within CRM itself, by posting either your own custom error messages or feeding actual error messages through to the tracing service.\nJust remember the following\u0026hellip; Writing to the tracing service does add an extra step that your plug-in has to overcome. This is not so much of an issue if your plugin is relatively small, but the longer it gets, and the more frequently you write to the service, the greater the potential performance impact. You should use your best judgement when writing to the service; not every time you do something within the plugin, but where there is a potential for an error to occur. 
Writing to the tracing log can also have an impact on your CRM storage capacity, something we will take a look at later on in this post.\nNow that we have got that out of the way, lets begin by setting up an example plugin! To start writing to the Tracing service depends on how you are implementing your plugin. If you have used the Visual Studio template, then simply use this line of code within the \u0026ldquo;TODO: Implement your custom Plug-In business logic.\u0026rdquo; section:\nITracingService tracingService = localContext.TracingService; Otherwise, you will need to ensure that your plugin is calling the IServiceProvider, and then use a slightly longer code snippet to implement the service. An example of the code that you\u0026rsquo;d need to use to setup this is as follows:\nusing System; //Be sure to add references to Microsoft.Xrm.Sdk in order to use this namespace! using Microsoft.Xrm.Sdk; namespace MyPluginProject { public class MyPlugin : IPlugin { public void Execute(IServiceProvider serviceProvider) { //Extract the tracing service for use in debugging sandboxed plug-ins. ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService)); } } } Once you\u0026rsquo;ve implemented the ITracingService within your code, you can then write to Trace Log at any time in your code using the following snippet:\ntracingService.Trace(\u0026#34;Insert your text here...\u0026#34;); Activating Tracing Even though we have configured our plugin for tracing, this does not automatically mean that our plugin will start writing to the log. First, we must configure the Plug-in and custom workflow activity tracing setting within the System Settings page:\nYou have three options that you can set:\nOff - Nothing will be written to the trace log, even if the plugin encounters an exception. Exception - When the plugin hits an exception, then a trace will be written to the log. All - Whenever the plugin is executed and the trace log is called, then a trace log record will be created. This is equivalent to Verbose logging. As mentioned earlier, individual records will be written to CRM whenever the tracing service is called. It is therefore recommended only to turn on \u0026lsquo;All\u0026rsquo; for temporary periods; leaving it on for \u0026lsquo;Exception\u0026rsquo; may be useful when attempting to initially diagnose plugin errors. Review the amount of storage available to you on your CRM Online/On-Premise deployment in order to determine the best course of action.\nTracing in Practice Now that we\u0026rsquo;ve configured tracing on our CRM and we know how to use the Tracing, lets take a look at an example plugin. The below plugin will be set to fire on the Post-Operation event of the Update message on the Account entity. It will create a new contact record, associate this Contact record to the Account and then populate the Description field on the Contact with some information from the Account record:\nprotected void ExecutePostAccountUpdate(LocalPluginContext localContext) { if (localContext == null) throw new ArgumentNullException(\u0026#34;localContext\u0026#34;);} //Extract the tracing service for use in debugging sandboxed plug-ins. ITracingService tracingService = localContext.TracingService; tracingService.Trace(\u0026#34;Implemented tracing service succesfully!\u0026#34;); // Obtain the execution context from the service provider. IPluginExecutionContext context = localContext.PluginExecutionContext; // Get a reference to the Organization service. 
IOrganizationService service = localContext.OrganizationService; if (context.InputParameters.Contains(\u0026#34;Target\u0026#34;)) { //Confirm that Target is actually an Entity if (context.InputParameters[\u0026#34;Target\u0026#34;] is Entity) { Guid contactID; string phone; Entity account = (Entity)context.InputParameters[\u0026#34;Target\u0026#34;]; tracingService.Trace(\u0026#34;Succesfully obtained Account record\u0026#34; + account.Id.ToString()); try { tracingService.Trace(\u0026#34;Attempting to obtain Phone value...\u0026#34;); phone = account[\u0026#34;telephone1\u0026#34;].ToString(); } catch(Exception error) { tracingService.Trace(\u0026#34;Failed to obtain Phone field. Error Details: \u0026#34; + error.ToString()); throw new InvalidPluginExecutionException(\u0026#34;A problem has occurred. Please press OK to continue using the application.\u0026#34;); } if (phone != \u0026#34;\u0026#34;) { //Build our contact record to create. Entity contact = new Entity(\u0026#34;contact\u0026#34;); contact[\u0026#34;firstname\u0026#34;] = \u0026#34;Ned\u0026#34;; contact[\u0026#34;lastname\u0026#34;] = \u0026#34;Flanders\u0026#34;; contact[\u0026#34;parentcustomerid\u0026#34;] = new EntityReference(\u0026#34;account\u0026#34;, account.Id); contact[\u0026#34;description\u0026#34;] = \u0026#34;Ned\u0026#39;s work number is \u0026#34; + phone + \u0026#34;.\u0026#34;; contactID = service.Create(contact); tracingService.Trace(\u0026#34;Succesfully created Contact record \u0026#34; + contactID.ToString()); tracingService.Trace(\u0026#34;Done!\u0026#34;); } else tracingService.Trace(\u0026#34;Phone number was empty, Contact record was not created.\u0026#34;); } } } After registering our plugin and with tracing configured for \u0026ldquo;All\u0026rdquo; in our CRM instance, we can now see our custom messages are being written to the Trace Log - when we both update the A. Datum Corporation (sample) record Phone field to a new value and when we clear the field value:\nMost importantly, we can also see that our test Contact record is being created successfully when we populate the Phone field with data 🙂\nNow, to see what happens when an error is invoked, I have modified the code above so that it is expecting a field that doesn\u0026rsquo;t exist on the Account entity:\nNow, when we attempt to update our Account record, we receive a customised Business Process Error message and window:\nAnd we can also see that the precise error message has been written to the trace log, at the point we specified:\nLast, but not least, for On-Premise deployments\u0026hellip; One thing to point out is, if you are using On-Premise CRM 2016 (both 8.0 and 8.1), then for some reason the trace log will not work if you do not run you plugin within sandbox isolation mode. I\u0026rsquo;m not the only one experiencing this, according to this post on the Dynamics CRM Community forum. Switching my test plugin to sandbox isolation resolved this issue. A bit of a strange one, and as Srikanth mentions on the post, it is not clear if this a bug or not.\nConclusions or Wot I Think Trace logging is one of those things where time and place matter greatly. Implementing them within your code obsessively does not return much benefit, and could actually be detrimental to your CRM deployment. Used prudently and effectively though, they can prove to be incredibly useful. 
The scenarios where I can see them returning the most benefit is if your plugin is making a call to an external system and, if an error is encountered during this process, you can use the Trace Log to capture and store the external application error message within CRM for further investigation. Trace logging can also prove useful in scenarios where an issue cannot be readily replicated within the system, by outputting error messages and the steps leading up to them within the Trace Log.\nIn summary, when used in conjunction with other debugging options available to you via the SDK, Trace Logging can be a powerful weapon to add to your arsenal when debugging your code issues in CRM.\n","date":"2016-07-31T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/implementing-tracing-in-your-crm-plug-ins/","title":"Implementing Tracing in your CRM Plug-ins"},{"content":"Anyone working within the CRM space will have already soaked up the news regarding Dynamics 365. Jujhar Singh\u0026rsquo;s announcement just a week before WPC 2016 helped to set the stage for Dynamics 365 \u0026amp; AppSource to steal the show as part of the conference, no doubt pleasing Dynamics professionals the world over. This is a major step forward for Microsoft, as they seek to quash the dominance of Salesforce, SAP \u0026amp; Oracle within the CRM/ERP space; having long been the underdog within this sector, Dynamics 365 could be what Microsoft needs to tip the scales in their favour.\nSo, if you are working with CRM closely at the moment, what does the above announcement mean for you? Is CRM still relevant or is it time to pivot across to some of the new offerings within Dynamics 365? In this week\u0026rsquo;s blog post, I take a closer look at what has emerged from the above announcement thus far, to see where the future may lie for CRM:\nIs CRM now an Enterprise-level product? At this stage, Microsoft are speaking in very high-level terms in respect to how the Dynamics 365 product will be licensed. Microsoft are now classifying CRM as an Enterprise-only product, a potentially concerning development. Some implications of this decision could be:\nPrice rise on all CRM licensing plans as part of Dynamics 365: I think in the short-term, it is incredibly unlikely that we will see a major jump in price (in order to help drive early adoption), but since CRM is now classed as an enterprise level product, it would be very surprising if pricing is not eventually adjusted, in order to take this into account. Rise in minimum license purchase for CRM Online: This is currently set to 5 Professional licenses, whereas the above slide indicates a minimum seat purchase of 20 users. If we assume the same price under Dynamics 365, the minimum monthly cost to start using CRM Online rises up to £810 ex. VAT, which equates to a whopping rise of £607.50 per month! [UPDATE: Since this post, I have had it confirmed that CRM Online minimum seat purchase will not be affected by Dynamics 365. More info can be found here. Big thanks to CRM MVP Jukka Niiranen for pointing this out!] The future of CRM within SMB\u0026rsquo;s: I am hoping that some further detail will help to clarify the position of CRM within the overall offer, but one of my major concerns at this juncture is the future position of CRM Online within small to medium size businesses. One of the huge benefits of the current structure is that is tailored for both extremes; a major shift of the CRM product towards an enterprise-only approach (i.e. 
in the sense that the price point precludes any other, smaller organisation from justifying the cost of adopting CRM) could spell the end of CRM within the small business. One would assume that Project \u0026lsquo;Madeira\u0026rsquo; would be the alternative offer for SMB\u0026rsquo;s in this instance; but the challenge would be in convincing these organisations to migrate across to this. Looking at this another way, Microsoft have promised that the Dynamics 365 offering will be flexible and adaptable for any kind of business. So it may well be the case that we see no major changes to how the CRM Online product is offered, something that I would welcome.\nTime to say goodbye to CRM On-Premise? One of the persistent questions, considering the increased focus on CRM Online in recent years and the Dynamics 365 announcement as a whole, is just where does On-Premise CRM sit in Microsoft\u0026rsquo;s long-term plans. Some colleagues I have spoken to recently have predicted that Dynamics 365 is the nail in the coffin for CRM On-Premise. I take a slightly more pessimistic view. With many large-scale organisations within the public/private sector still requiring the ability to literally point to a server rack in a data centre and say \u0026ldquo;This is where our data is stored!\u0026rdquo;, it will be difficult for Microsoft to convince them to migrate across to the Office 365/Azure platform, when such requirements may prove difficult to match. The UK is currently an excellent case in point, as we are still waiting on the promised arrival of UK based Azure data centres. Until Microsoft are in a position to offer commitments to every country in the world that they have data centres based within the country in question (or, in the case of Europe, within the European Economic Area), CRM On-Premise will continue to have a market for organisations who need specific assurances in regards to data storage and location.\nThe future of XRM The XRM framework is a developers skeleton key, in terms of unlocking further potential from CRM and extending it to suit specific business requirements. So does the Dynamics 365 announcement mean that this key is now useless? Microsoft have provided some re-assurance that XRM is not going away anytime soon\u0026hellip;\nTo extend the functionality of individual Dynamics 365 apps, partners may continue to use native application extensibility frameworks built-in to the CRM and the AX platforms.\nBut they have also indicated that there is a new sheriff in town\u0026hellip;\nThe common data model is a cloud-resident business database, built on years of experience with our enterprise customers. It will come with hundreds of standard business entities spanning both business process (Dynamics 365) and productivity (Office 365). The standardization and consistency of schema enables partners to build innovative applications and to automate business processes spanning the entire business process spectrum with confidence their solutions can be easily deployed and used across Microsoft\u0026rsquo;s entire customer base.\nSource: https://community.dynamics.com/b/msftdynamicsblog/archive/2016/07/06/insights-from-the-engineering-leaders-behind-microsoft-dynamics-365-and-microsoft-appsource\nClearly, the new \u0026ldquo;common data model\u0026rdquo; is something that CRM professionals are going to have to familiarise themselves with, as well having a general awareness of the other products sitting within Dynamics 365. 
One good thing to note, according to this blog post, is that there is some familiar terminology! I potentially see this as a positive step forward, particularly if your organisation is developing ISV solutions that sit within CRM and the constraints of CRM Solutions mean that you cannot leverage the desired functionality from the application.\nA Fork in the Road - How Dynamics 365 could lead to CRM Specialist Roles The Dynamics 365 announcement would also look to confirm that the Sales and Service side of CRM are starting to be treated as separate offerings, within one common environment. We\u0026rsquo;ve already seen that Microsoft have segregated out the Sales and Service side of CRM into different exams, and also that, when requesting a CRM demo, you can choose to have a Sales or Service focused trial experience. I can foresee a scenario where existing CRM professionals are having to \u0026ldquo;evolve\u0026rdquo; into one of three types of roles:\nCRM Service Specialists CRM Sales Specialists Dynamics 365 Specialists Without wishing to continue the references to Pokemon much further, we can then see situations where businesses are saying, for example, \u0026ldquo;CRM Service Specialist, I choose you!\u0026rdquo; for a particular project. Having focused roles along the lines of the above will undoubtedly lead to greater specialist knowledge of certain aspects of CRM, but could mean that professionals are no longer getting a good look over the garden fence at what\u0026rsquo;s going on within Dynamics 365 or within CRM itself.\nAlways Look on the Bright Side: Why Dynamics 365 could be awesome Dynamics 365 can be seen as yet another gesture of love and attention towards the Dynamics CRM product. CRM has developed in leaps and bounds in recent history; this announcement would appear to be the cherry on the cake, designed to convince organisations across the globe that having CRM within their business can precipitate major benefits and return on investment. Dynamics 365 also gives those working with CRM an excellent excuse to start familiarising themselves with the other products in the Dynamics family. Typically, despite the shared name, these products have stood in distinct isolation from each other. Bringing them together as part of Dynamics 365 will hopefully lead to greater shared knowledge and awareness of what products, such as NAV, can deliver. Finally, the common data model, if designed correctly, could offer a much effective and less restrictive means of configuring CRM to interact with other products/systems.\nThere will no doubt be more news on Dynamics 365 in the months ahead, culminating with its release as part of Summit 2016 in October. For now, we will have to wait patiently, but I am eager to get my hands on it, as Dynamics 365 will undoubtedly be an important string on the bow of CRM professionals in the years ahead.\n","date":"2016-07-24T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/the-future-is-dynamics-365-what-this-means-for-crm-onlineon-premise/","title":"The Future is Dynamics 365 - What This Means for CRM Online/On-Premise"},{"content":"I was working within CRM recently, attempting to configure some form level logic in order to display/hide fields, based on certain conditions on the form itself. 
I went into it rather gung-ho and immediately started writing the following JScript function:\nfunction hideFormFields() { var myField = Xrm.Page.getAttribute(\u0026#34;jg_myfield\u0026#34;).getValue(); if (myField = \u0026#34;Value1\u0026#34;) { Xrm.Page.getControl(\u0026#34;jg_myfieldtohide1\u0026#34;).setVisible(false); Xrm.Page.getControl(\u0026#34;jg_myfieldtohide2\u0026#34;).setVisible(false); Xrm.Page.getControl(\u0026#34;jg_myfieldtohide3\u0026#34;).setVisible(false); Xrm.Page.getControl(\u0026#34;jg_myfieldtohide4\u0026#34;).setVisible(true); Xrm.Page.getControl(\u0026#34;jg_myfieldtohide5\u0026#34;).setVisible(true); Xrm.Page.getControl(\u0026#34;jg_myfieldtohide6\u0026#34;).setVisible(true); } else if (myField = \u0026#34;Value2\u0026#34;) { Xrm.Page.getControl(\u0026#34;jg_myfieldtohide1\u0026#34;).setVisible(true); Xrm.Page.getControl(\u0026#34;jg_myfieldtohide2\u0026#34;).setVisible(true); Xrm.Page.getControl(\u0026#34;jg_myfieldtohide3\u0026#34;).setVisible(true); Xrm.Page.getControl(\u0026#34;jg_myfieldtohide4\u0026#34;).setVisible(false); Xrm.Page.getControl(\u0026#34;jg_myfieldtohide5\u0026#34;).setVisible(false); Xrm.Page.getControl(\u0026#34;jg_myfieldtohide6\u0026#34;).setVisible(false); } } I then suddenly thought \u0026ldquo;Hang on - can\u0026rsquo;t this be done via a Business Rule instead?\u0026rdquo;. For CRM Administrators who do not have any previous development skills, Business Rules were a godsend when Dynamics CRM 2013 was first released, having been continually improved since. They essentially enable you to implement conditional logic and actions on your entity forms, without the need of writing a single line of code. Business Rules are, in fact, a form of JScript, utilising many of the capabilities available as part of the XRM framework. As an excellent case in point, we can very easily replicate the above code into a Business Rule, like so:\nThere is an important lesson here, that anyone who works extensively within CRM needs to always remember; if it can be done out of the box within CRM, then why would you go to the trouble and effort to write a JScript function? Here are some reasons why CRM Developers should always think twice before typing away on that new form-level JScript function:\nBusiness Rules actually do a lot more than you\u0026rsquo;d expect This is something I am guilty of forgetting, and I\u0026rsquo;m having to train myself to consider this more carefully in future. In the above example, I made the immediate assumption that showing/hiding form fields on a conditional basis was not possible via a Business Rule - how very wrong I was! This seems to underline a general theme with how CRM customisers/developers approach Business Rules, assuming a glass half-empty approach. As of 2016 Update 1, as well as being able to modify a fields visibility, Business Rules can also support the following scenarios:\nShow Error Message Set Field Value - Fields can be set to match the value of another field, a specific value (including lookups), the result of a formula (Date/Number fields only) or be cleared of data. Set Business Requirement. Set Default Value - Fields can be set to match the value of another fields, a specific value (including lookups) or as the result of a formula (Date/Number fields only). Lock or unlock field What emerges, when combined with the conditional logic builder, is a means of utilising some of the most frequently used methods available to you via the SDK. 
And we can also hope and anticipate that, as part of future releases of CRM, this list will be expanded further.\nBusiness Rules are supported, whereas your JScript may not be To summarise another way, although you can do lots of things via JScript within CRM, that doesn\u0026rsquo;t mean that you should. One thing I have observed with those who work with CRM, but have had more of a developer background, is that there is a tendency to utilise methods/code that are not officially supported by Microsoft within CRM. Looking around the CRM community, you often see that an unsupported solution is provided for a specific requirement, but often with no warning label attached to stress what it is doing. A good example of something which is unsupported is manipulating the Document Object Model (DOM) of the current page. Just because it works now doesn\u0026rsquo;t mean it will in the future, potentially causing major problems for your CRM deployment. If you are ever in doubt, be sure to review the supported extensions article on MSDN for CRM, and always think twice before you start using that nice bit of code you find on a Google search.\nYour JScript may cause problems for end users So your JScript is supported - great! But that doesn\u0026rsquo;t necessarily mean it\u0026rsquo;s going to work well from an end user\u0026rsquo;s perspective. If your JScript code, for example, does not provide a sufficient level of error handling, your end users could end up receiving error messages frequently. Or perhaps consider a scenario where your code is handling your errors well and outputting them correctly for potential debugging; this can lead to extra lines of code that are otherwise unnecessary, and an impact when running the application on certain machines/browsers. With Business Rules, you don\u0026rsquo;t have to worry about addressing that careful balancing act between form loading times and JScript library sizes. And you can be assured that the CRM application as a whole will be able to handle your custom logic without affecting the performance of your deployment.\nJScript should have a time, place and bespoke requirement What I mean by this is that JScript should only really begin to factor in when you are dealing with very specific business requirements that the application \u0026ldquo;as-is\u0026rdquo; has no means of satisfying. An example of this is if you need to provide specific data validation for a field (such as a telephone number or bank sort code number). The only way this is ever going to be possible is via some kind of Regular Expression (RegEx) validation, which can only be performed via JScript. As one of the key takeaways from this blog post, I would stress that you should always first attempt to build a Business Rule that is designed to meet the requirement that you are looking for, utilising other out of the box elements where appropriate (fields, workflows etc.). When it starts to become pretty obvious that a Business Rule is not going to cut it, then you can \u0026ldquo;give yourself permission\u0026rdquo; to start writing a JScript function 🙂\nA \u0026ldquo;No Code First\u0026rdquo; approach should always prevail I was recently introduced to this simple concept, but it is one that got me thinking and re-evaluating many of my previous solutions that I have built within CRM and elsewhere. 
When attempting to provide a potential solution to a requirement within CRM, you should try and put forward two solutions (where possible): a solution that can be achieved using out of the box functionality and one that requires bespoke coding to resolve. It may be that the first solution requires considerable time to build within CRM, but doing it this way could mean your solution is a lot more readable and understandable for those going into the system in future. By comparison, your 3-6 lines of code could prove to be virtually indecipherable.\nSo, in conclusion, can you guess what your new \u0026ldquo;Rule\u0026rdquo; should be moving forward? Going back to my earlier example of your CRM Developer, AKA Former Application Developer, person, you can anticipate that they will have a very good knowledge of what CRM is capable from an SDK point of view. What may be lacking is a good understanding and knowledge of application as a whole, and specific features (such as SLA\u0026rsquo;s, Business Process Flows etc.). I would invite these people, and in fact anyone who works extensively with CRM, to focus their learning efforts towards the functional side of the application, so that they can ensure they have a good grasp of what CRM is capable of doing with, potentially, just a few mouse clicks. And, finally, to always think twice and consider alternative solutions before deciding on the best option - for everyone involved.\n","date":"2016-07-17T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/why-crm-developers-should-use-business-rules-more/","title":"Why CRM Developers Should Use Business Rules More"},{"content":"I was having a chat with a colleague recently about the new Interactive Service Hub (ISH) in CRM, in particular the new entity forms that are used as part of this. One of the questions that came up was \u0026ldquo;From a form scripting point of view, do ISH forms let you utilise all of the same references, properties and methods available as part of a Main form?\u0026rdquo;. As is generally the case in these situations, we can turn to our good friends TechNet and MSDN for answers. Specifically, we are able to find out the following on MSDN:\nAll the form scripting events and methods supported by the CRM mobile clients (phone and tablet) are supported in the interactive service hub.\nIf your organisation is fortunate enough to be using CRM 2016 Update 1 or CRM 2016 Service Pack 1, then you can also utilise the following methods/events as well:\nAll the events and methods for sub-grids, except the ViewSelector methods. More information: Grid objects and methods (client-side reference) getId method in the Xrm.Page.data.entity namespace close, getCurrentControl, and Form Notification methods in the Xrm.Page.ui namespace DisplayState, Label, and setFocus methods for a tab (group of sections in a page) All the navigation item methods: More information: Xrm.Page.ui.navigation item (client-side reference) openEntityForm and openQuickCreate in the Xrm.Utility namespace setFocus and setShowTime methods for controls addCustomFilter and addCustomView for lookup controls getInitialUrl and getObject for only IFRAME controls; getSrc and setSrc for both web resource and IFRAME controls All the events and methods for the knowledge base search control. 
More information: Knowledge base search control (client-side reference) Source: https://msdn.microsoft.com/en-us/library/mt671135.aspx#Anchor_1\nSo what does this mean in practical terms and how should this factor into your decision to utilise Interactive Forms over standard CRM forms? Here are a few things to consider, as well as my own thoughts on the best way to proceed:\nIf you are on CRM 2016, then get yourself Update 1/Service Pack 1 ASAP There are a number of important new methods that can be used for interactive forms as part of this update. For example, you get much better options when it comes to working with IFRAME controls, interacting with the currently loaded record you are working with (via getID), loading a new record form (via openEntityForm or openQuickCreate) and setting focus to a particular field on a form (via setFocus). This goes above and beyond what is currently supported via the CRM tablet apps, and it is good to see that this CRM update has added these in. Having access to these methods may help to decide whether you can safely migrate or start using ISH forms within your organisation. And, as we have seen previously, the update process to SP1 for CRM 2016 is relatively straightforward, so why wouldn\u0026rsquo;t you look at upgrading?\nBe sure you familiarise yourself with Interactive Form debugging Anyone who works with CRM form scripting will be familiar with the trials and tribulations of browser debugging your scripts via Developer Tools on your browser of choice: open up your script library, set your breakpoints and then perform the actions needed to trigger your script. Try this with the new Interactive Forms, and you will notice that your custom library is not loaded onto the form. What the fudge?!?\nBefore panicking too much, the good news is that scripts can still be debugged, but you just have to go down a slightly different route. The CRM Team have written an excellent blog post that covers not just debugging for interactive forms, but for all forms as well. The first two options suggested seem to be the most straight-forward way of debugging interactive forms, but you can\u0026rsquo;t help but laugh at the fact that one of the suggested options is to use functionality within Google Chrome! 🙂 Joking aside, using Dynamic JScript in the manner outlined looks to be an effective way of setting breakpoints in a familiar manner. I may have to look at trying this at some point in the future.\nYou cannot programmatically switch to a different entity form in the Interactive Service Hub It is sometimes useful to change the currently loaded form that a user is presented with depending on certain conditions. For example, if a user enters certain data within a Lead form to indicate the record should be treated as high priority, you may want a new form to be opened that contains additional fields, subgrids etc. that need to be completed in order to progress the record accordingly. Within the Xrm.Page.Ui reference, we have two methods that help us achieve this objective: formSelector.getCurrentItem, which returns the GUID for the currently loaded form, and formSelector.items, which returns a list of GUID\u0026rsquo;s for all forms that the current user has access to. The bad news is that, because these two methods are not supported in the mobile app, you will also be unable to use them within the ISH. 
This is likely due to the fact that, similar to Mobile Forms, only one form is presented to the user when using the ISH, which is dictated by the form order and the user\u0026rsquo;s security permissions.\nISH forms do not contain a Ribbon This may be one of the major reasons preventing an en masse adoption of ISH forms in the near future. One of the benefits of using the default CRM forms is the ability to modify the CRM ribbon in order to add, remove or modify button behaviours across the application. This is great if you decide that you don\u0026rsquo;t want your users to have access to the Delete button, need to modify the order of the buttons to match the ones that are used the most, or need to set up a custom button that performs a web service call to an external system. Again, similar to the mobile application, there is no ribbon anywhere on the ISH, which means you may be missing out on key functionality by going down this route.\nISH can only currently be used with a limited list of entities If you are hoping to get your CRM set up to use ISH for your Sales related entities, then don\u0026rsquo;t get too carried away at this stage. You are currently limited to using ISH with a specific list of CRM entities. The full list, courtesy of our good friends again, is as follows:\nAccount Contact Case All Activity Entities (System/Custom) Social Profile Queue Item Knowledge Article Custom Entities Expect this to change in future, as one of the things that I have previously highlighted as part of Update 1/Service Pack 1 was the ability to add Feedback onto any entity within CRM, including custom ones. It would make sense for the Sales side of CRM to get some love and attention, to include Leads, Opportunities etc. as part of ISH (so we may not even call it this if that happens!).\nConclusions or Wot I Think Whilst some of the headline benefits of using ISH are clear - the visual experience, the ability to drill down to individual records and efficient record filtering - there are some crucial technical limitations with the feature that need to be factored in before you decide to migrate across to ISH completely. I would say that if your organisation has not been extensively customised to utilise form level scripting and other types of bespoke development work, then you can perhaps get away with making the jump. If it has, you should hold off and evaluate your existing customisations first to see a) whether they are supported by ISH or b) whether there is some way to drop certain form scripting, ribbon customisations etc. in favour of using something default within CRM (for example, a Business Rule instead of a JScript function). It could be that you can ditch a whole load of code that is not required as part of this exercise and instead utilise base functionality within CRM, something which is always preferable.\nOne thing I have been wondering about is the eventual aim with the ISH: is it intended that this will eventually replace the existing CRM user experience or will it continue to just be an alternative way of using CRM? 
It will be interesting to see how the feature is developed over the next couple of releases of CRM; one of the key indicators of this replacing the \u0026ldquo;old style\u0026rdquo; CRM forms is if we start to see all form scripting options made available in ISH and the introduction of the Ribbon.\nIn the meantime, make sure you do check out ISH at some stage, as the potential of setting up a visually attractive reporting and day-to-day application tool is huge. Just don\u0026rsquo;t let your Sales team see it yet, as you will end up disappointing them!\n","date":"2016-07-10T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/form-scripting-interactive-service-hub-and-other-considerations/","title":"Form Scripting, Interactive Service Hub and Other Considerations"},{"content":"I was interested to read on TechNet that it is now supported to perform a Multiple-Server role installation of Dynamics CRM 2016 on Windows Server 2012 R2 instances that are configured in Core installation mode:\nWith the exception of the Microsoft Dynamics CRM Help Server and Microsoft Dynamics CRM Reporting Extensions roles, you can install any Microsoft Dynamics CRM 2016 Server server role on a Server Core installation of Windows Server.\nSource: https://technet.microsoft.com/en-us/library/hh699671.aspx\nAt this stage you may be asking, \u0026ldquo;What is a Core installation of Windows Server 2012 R2?\u0026rdquo; or \u0026ldquo;Why would it be preferable to install CRM on a Core Server?\u0026rdquo;. Starting from Windows Server 2008, server admins can choose to install Windows Server without a GUI interface and without most of the common features that you would expect from a Windows Desktop environment. All you are able to see and interact with if you choose to log into a Windows Server core is the following:\nI would imagine that most novice service admins (like me!) would start panicking at this stage\u0026hellip;\nThe idea behind Core installations is that the server would be administered remotely from a server/desktop environment (using Server Manager, Powershell etc.), with it being really unlikely that you would need to logon or remotely connect to the server via remote desktop connection. As a Core installation is missing pretty much everything you would expect from a standard Windows working environment, it is potentially ideal for scenarios where you have limited resources within your virtualised server environment. Alternatively, if you are intending to deploy a resource-intensive application to the server (like Dynamics CRM!), ensuring that the application has access to as many system resources as possible could be desirable. More information about Core installations, including their supported roles/features, can be found on this helpful MSDN article.\nSo lets go through the process of setting up a Windows Server 2012 R2 Core Installation and then performing a CRM installation on the machine for all roles, with the exception of the ones highlighted above. We\u0026rsquo;ll take a look at some of the options available to us as part of a silent install and evaluate to see what the benefits are for an organisation to deploy On-Premise CRM in this manner:\nTo start off, we need to get our Windows Server 2012 R2 instance setup. The installation of this is really straight-forward and simple, so will not be covered in-depth. 
The only thing you need to note is that Server Core Installation option needs to be specified, as opposed to Server with a GUI, as you go through the required steps: Once your server is setup and you have logged in for the first time, you will need to get the server joined to the domain that will be used for your Dynamics CRM instance. Fortunately, this can be rather straightforwardly done in Server Core by using the SConfig command on the prompt window. Type in the following in the command prompt window and hit return: sconfig The command prompt window will turn blue and change to resemble the following (image courtesy of Technet):\nAt this point, type 1 and hit enter, following the instructions in order get the Server connected to your target domain. A restart will be required as part of this. An (optional) step before restarting is to also rename the Computer to something more description e.g. CRM-SERVER.\nOnce your server is on the Domain, you can disconnect from the Core server for now. In order to get underway with the installation, there is some prep work that needs completing first, which will require having the following files downloaded: Microsoft Dynamics CRM Installation executable (.exe) A configuration .xml file for the install. This is what the installer will refer to during the installation process for all of the required options. For this example, we will be using the following .xml file: \u0026lt;CRMSetup\u0026gt; \u0026lt;Server\u0026gt; \u0026lt;Patch update=\u0026#34;true\u0026#34;\u0026gt;\u0026lt;/Patch\u0026gt; \u0026lt;LicenseKey\u0026gt;XXXXX-XXXXX-XXXXX-XXXXX-XXXXX\u0026lt;/LicenseKey\u0026gt; \u0026lt;SqlServer\u0026gt;MySQLInstance\u0026lt;/SqlServer\u0026gt; \u0026lt;Database create=\u0026#34;true\u0026#34;/\u0026gt; \u0026lt;Reporting URL=\u0026#34;http://MySSRSServer/MyReportServer\u0026#34;/\u0026gt; \u0026lt;OrganizationCollation\u0026gt;Latin1_General_CI_AI\u0026lt;/OrganizationCollation\u0026gt; \u0026lt;basecurrency isocurrencycode=\u0026#34;GBP\u0026#34; currencyname=\u0026#34;GB Pound Sterling\u0026#34; currencysymbol=\u0026#34;£\u0026#34; currencyprecision=\u0026#34;2\u0026#34;/\u0026gt; \u0026lt;Organization\u0026gt;My CRM Organisation\u0026lt;/Organization\u0026gt; \u0026lt;OrganizationUniqueName\u0026gt;MyCRMOrganisation\u0026lt;/OrganizationUniqueName\u0026gt; \u0026lt;OU\u0026gt;DC=MyDomain,DC=local\u0026lt;/OU\u0026gt; \u0026lt;WebsiteUrl create=\u0026#34;true\u0026#34; port=\u0026#34;5555\u0026#34;\u0026gt; \u0026lt;/WebsiteUrl\u0026gt; \u0026lt;InstallDir\u0026gt;c:\\Program Files\\Microsoft Dynamics CRM\u0026lt;/InstallDir\u0026gt; \u0026lt;CrmServiceAccount type=\u0026#34;DomainUser\u0026#34;\u0026gt; \u0026lt;ServiceAccountLogin\u0026gt;MyDomain\\CRM-CRMSERVERr-A\u0026lt;/ServiceAccountLogin\u0026gt; \u0026lt;ServiceAccountPassword\u0026gt;password\u0026lt;/ServiceAccountPassword\u0026gt; \u0026lt;/CrmServiceAccount\u0026gt; \u0026lt;SandboxServiceAccount type=\u0026#34;DomainUser\u0026#34;\u0026gt; \u0026lt;ServiceAccountLogin\u0026gt;MyDomain\\CRM-CRMSERVER-SP\u0026lt;/ServiceAccountLogin\u0026gt; \u0026lt;ServiceAccountPassword\u0026gt;password\u0026lt;/ServiceAccountPassword\u0026gt; \u0026lt;/SandboxServiceAccount\u0026gt; \u0026lt;DeploymentServiceAccount type=\u0026#34;DomainUser\u0026#34;\u0026gt; \u0026lt;ServiceAccountLogin\u0026gt;MyDomain\\CRM-CRMSERVER-DW\u0026lt;/ServiceAccountLogin\u0026gt; \u0026lt;ServiceAccountPassword\u0026gt;password\u0026lt;/ServiceAccountPassword\u0026gt; \u0026lt;/DeploymentServiceAccount\u0026gt; 
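\u0026lt;!-- The remaining service accounts below follow the same pattern. Every ServiceAccountLogin and ServiceAccountPassword in this file is a placeholder; the accounts must already exist in the target domain with the appropriate permissions, otherwise the silent install will fail. --\u0026gt;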
\u0026lt;AsyncServiceAccount type=\u0026#34;DomainUser\u0026#34;\u0026gt; \u0026lt;ServiceAccountLogin\u0026gt;MyDomain\\CRM-CRMSERVER-AP\u0026lt;/ServiceAccountLogin\u0026gt; \u0026lt;ServiceAccountPassword\u0026gt;password\u0026lt;/ServiceAccountPassword\u0026gt; \u0026lt;/AsyncServiceAccount\u0026gt; \u0026lt;VSSWriterServiceAccount type=\u0026#34;DomainUser\u0026#34;\u0026gt; \u0026lt;ServiceAccountLogin\u0026gt;MyDomain\\CRM-CRMSERVER-VSSW\u0026lt;/ServiceAccountLogin\u0026gt; \u0026lt;ServiceAccountPassword\u0026gt;password\u0026lt;/ServiceAccountPassword\u0026gt; \u0026lt;/VSSWriterServiceAccount\u0026gt; \u0026lt;MonitoringServiceAccount type=\u0026#34;DomainUser\u0026#34;\u0026gt; \u0026lt;ServiceAccountLogin\u0026gt;MyDomain\\CRM-CRMSERVER-M\u0026lt;/ServiceAccountLogin\u0026gt; \u0026lt;ServiceAccountPassword\u0026gt;password\u0026lt;/ServiceAccountPassword\u0026gt; \u0026lt;/MonitoringServiceAccount\u0026gt; \u0026lt;SQM optin=\u0026#34;false\u0026#34;/\u0026gt; \u0026lt;muoptin optin=\u0026#34;true\u0026#34;/\u0026gt; \u0026lt;/Server\u0026gt; \u0026lt;/CRMSetup\u0026gt; Both files, once ready, should look something like this:\nThe next step, presumably the trickiest, is actually getting the installation files onto our Core server. As we saw, on the Core installation we have limited command line functionality, making it impractical to simply \u0026ldquo;copy \u0026amp; paste\u0026rdquo; the files across. Fortunately, Core installations are configured so that the root drive can be accessed over a network connection; meaning that we can drop our files anywhere on the C:\\ drive and access them via the cd command on the prompt window. To begin with, we will navigate to our computer using Windows Explorer on our other machine, replacing the JG-CRM with the name of your Core server: We can then navigate through and create a temporary folder on the drive root where we can move across all of our required files:\nFinally, because the CRM2016-Server-ENU-amd64.exe file is a self-extracting executable, we need to first extract all of the files into a temporary location on our non-Core server; then, ensure that all of these files are copied across to our Core Server, along with our config. xml file:\nWith everything copied across, we can begin the installation. Going back onto our Server Core, we first need to navigate to the location of our installation folder using the cd command: Next, we can run the CRM setup executable (SetupServer.exe), specifying the following required parameters:\n/Q - Quiet Mode installation\n/config C:\\CRMInstall\\CRMServerInstall_Config.xml - The file path to the config XML file that was copied across earlier\nOnce we hit return on the above, the command will be accepted and you will immediately be given control back to the command prompt window: At this stage, you may assume that something has gone wrong. But don\u0026rsquo;t panic - this is just a result of specifying a quiet mode installation. CRM will begin installing in the background and stop if any errors are encountered. 
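Pulling the above together, the full sequence entered at the command prompt on the Core server would look something like the below (a sketch only - the C:\\CRMInstall folder and the config file name simply match the example used earlier, so substitute your own paths as required):
cd C:\\CRMInstall
SetupServer.exe /Q /config C:\\CRMInstall\\CRMServerInstall_Config.xml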
Progress can be monitored by reviewing the CRM setup log files from our other server, by navigating to the currently logged-in user\u0026rsquo;s AppData\\Roaming\\Microsoft\\MSCRM\\Logs folder:\nAssuming your installation has completed successfully, the following Server Roles will now be available on your Server Core: Web Application Server Organization Web Service Discovery Web Service Asynchronous Processing Service Sandbox Processing Service Deployment Web Service The Help Server role will not have been installed, so we must run the setup for this on another, non-Server Core server. The Reporting Extensions role needs to be installed on the server that hosts your SSRS instance, so it could not be installed on the Server Core server used above. I have also deliberately left out the Deployment Tools role as part of the above, which I would strongly recommend is installed on another Windows Server instance (for ease of access more than anything). The process is the same as part of any standard CRM installation and, therefore, will not be covered as part of this post.\nThoughts \u0026amp; Summary\nSome of the more obvious benefits of having a Server Core CRM deployment have already been hinted at. For example, I think it is definitely useful to be in a position where you can ensure that your CRM deployment is not competing with other applications for server resources; a Server Core installation allows you to achieve this aim quickly and easily, in a way that is fully supported by Microsoft. Installing CRM in this manner also presents some interesting opportunities for managed hosting providers, as the manner of deployment and installation (via a quiet mode installation) could easily be automated in order to enable re-sellers to quickly roll out hosted CRM instances to their customers.\nTaking a look at the other side of the coin, installing CRM via this route definitely requires some patience and perseverance. I encountered several issues during installation which cancelled the install entirely, due to incorrect Service Account details, lack of account permissions etc. It will therefore take a number of attempts before you can identify the common problems that need addressing before commencing the silent installation - issues which would ideally be handled via PowerShell script automation or similar. I would also question why it is not possible to do a Full Install (minus Reporting Extensions) on a Server Core instance. My assumption would be that the Help Server \u0026amp; Deployment Tools rely on Windows Server features that are stripped out during a Core installation. If your ultimate aim is to minimise hardware costs by utilising Server Core, then having to factor in an additional Windows Server on top of this seems excessive. You would then also begin to question why you would not just use a standard Windows Server instance to start with. Finally, there doesn\u0026rsquo;t appear to be any (obvious) means of specifying individual server roles to install as part of the silent installation. This rather defeats the whole purpose of using Server Core to scale out the workload of your CRM server across multiple Windows Servers.\nIn conclusion, should you be opting for a CRM Server Core installation in the near future? I would say \u0026ldquo;not yet\u0026rdquo; for the simple reason that you cannot select specific Server Roles to install as part of installing CRM in this manner.
In my mind, this eliminates one of the most obvious benefits of marrying together CRM and Server Core. Perhaps in future, when/if you can specify individual Server Roles in the config file (or if someone can correct me in the comments below!), this will become something that can be looked at more seriously. For now though, you can save yourself the hassle.\n","date":"2016-07-03T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/installing-dynamics-crm-2016-on-windows-server-2012-r2-core/","title":"Installing Dynamics CRM 2016 on Windows Server 2012 R2 Core"},{"content":"It is sometimes desirable to grant Send on Behalf permissions in Exchange for users who are accessing another mailbox in order to reply to email messages. Some typical scenarios for this may include a personal assistant who manages a company directors mailbox, a group of users who manage a central support mailbox or for ad-hoc scenarios, such as when a colleague is out of the office. Send on Behalf permissions provide the best level of courtesy when responding to an email by letting the person know that someone else is answering an email addressed to an individual, and I would say it is generally the more recommended approach compared to granting Send As permission.\nWithin Office 365/Exchange Online, we can very easily grant Send on Behalf permissions for a standard user mailbox (i.e. a user that has been granted an Exchange license on the Office 365 portal) via the user interface; just go into the mailbox in question, navigate to mailbox delegation and then simply add in the users who need the required permissions:\nThen, give it twenty minutes or so and then the user can then send as this user from the From field in Outlook successfully - nice and easy, just the way we like it 🙂\nBut what happens if we wanted to grant the very same permissions for a shared mailbox? Say that we had an IT support help desk with a shared mailbox that several different users need Send on Behalf permissions for. Within the Office 365 GUI interface, we have options only to grant Send As and Full Access permissions:\nSo, in order to accomplish our objective in this instance, we need to look at going down the PowerShell route. Microsoft enables administrators to connect to their Office 365 tenant via a PowerShell command. This will then let you perform pretty much everything that you can achieve via the user interface, as well as a few additional commands not available by default - as you may have already guessed, granting Send on Behalf permissions for a shared mailbox is one of those things.\nMicrosoft have devoted a number of TechNet articles to the subject, and the one found here is an excellent starting point to get you up and running. The salient points from this article are summarised below:\nEnsure that the latest versions .NET Framework and Windows PowerShell are installed on your 64 bit Windows machine. These can be added via the Turn Windows features On or Off screen, which can be accessed via the search box on your Windows Start Menu or in Control Panel -\u0026gt; Programs and Features -\u0026gt; Turn Windows features On or Off, Download and install the Microsoft Online Services Sign-In Assistant Download and run the Azure Active Directory Module for Windows PowerShell Once you\u0026rsquo;ve got everything setup, open up PowerShell and run the following script, altering where appropriate to suit your requirements/environment:\n# Remove result limits due to console truncation $FormatEnumerationLimit=-1 # Connect to Office 365. 
When prompted, log in with your MSO credentials Set-ExecutionPolicy Unrestricted -Force Import-Module MSOnline $O365Cred = Get-Credential $O365Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell -Credential $O365Cred -Authentication Basic -AllowRedirection Import-PSSession $O365Session -AllowClobber Connect-MsolService -Credential $O365Cred # Add additional users to the Send on Behalf permissions for the mailbox. The add= value is a comma-separated list; each email address should be enclosed in double quotes Set-Mailbox \u0026#39;MySharedMailbox\u0026#39; -GrantSendOnBehalfTo @{add=\u0026#34;john.smith@domain.com\u0026#34;} # Confirm that the user has been successfully added to the Send on Behalf permissions for the mailbox Get-Mailbox \u0026#39;MySharedMailbox\u0026#39; | ft Name,grantsendonbehalfto -wrap # Display an exit prompt (to keep the window open in order to view the above) Read-Host -Prompt \u0026#34;Press Enter to exit\u0026#34; The nice thing about this code snippet is that you can grant multiple users Send on Behalf permissions at the same time, which is really handy.\nBased on my experience, the above has been a pretty regular request that has come through via support in the past. I am unsure whether this is common across different organisations or not; if it is, then I am really surprised that the Office 365 interface has not been updated accordingly. The important thing is that we can ultimately do this in Office 365, just as you would expect via an on-premise Exchange Server. As such, organisations can be assured that if they are planning a migration onto Office 365 in the near future, they won\u0026rsquo;t be losing out feature-wise as a result of moving to the platform. And, finally, it is always good to learn about something new, like PowerShell, so we can also say that we\u0026rsquo;ve broadened our knowledge by completing the above 😉\n","date":"2016-06-26T00:00:00Z","image":"/images/Microsoft365-FI.png","permalink":"/grant-send-on-behalf-permissions-for-shared-mailbox-exchange-online/","title":"Grant Send On Behalf Permissions for Shared Mailbox (Exchange Online)"},{"content":"\nWhen I first took a look at some of the additions I was looking forward to as part of the CRM 2016 Spring Wave, I made reference to the new Email Signature feature. At the time, there did not appear to be any way of accessing this via the GUI interface within CRM; this is despite the fact there were, clearly, new system entities in the system corresponding to Email Signatures. There appears to have been some small update or change since my original post however, as it is now available within Online/On-Premise 8.1 CRM instances. 🙂 How you take advantage of the new feature depends on what version of CRM you are running:\nIf you have CRM Online, then you will need to schedule in your Update 1 update via the CRM Online portal. Email notifications have already been sent out regarding this, and 2016 organizations should have already had provisional dates booked in for the update. More information regarding Online updates can be found in this handy and informative TechNet article. For CRM 2016 On-Premise customers, you can download the Service Pack update here. As discussed as part of a previous post, there is really no good reason not to update, thanks to how simple it is. Once you\u0026rsquo;ve finished updating, you are good to go. To then set up an Email Signature for your user account, you will need to do the following:\nNavigate to the Email Signature window within CRM.
This can be accessed in either 1 of 2 ways:\nThe first is via the Set Personal Options screen, on the Email Signatures tab: The second is via the Sitemap Area, in Settings -\u0026gt; Templates -\u0026gt; Email Signatures: Regardless of how you have got there, press the New button to open the New Email Signature window:\nGive your signature a name and then populate the text area with your desired signature. You can make use of the rich text formatting in order to style your signature. Or, alternatively, you can copy \u0026amp; paste your signature from another application (Word, Outlook etc.): Once you are happy with your signature, press Save. At this point, the signature will now be available whenever you create a new email record. However, in order to make the signature appear automatically whenever you draft a new email, you will need to press the Set as Default button: If you need to revert this at any point, then you can use the Remove Default button, which replaces the above button:\nPress Save and Close to finish setting up your signature. It will now be visible within the Email Signature subgrid view: Now, when you navigate to create a new Email record, your newly created signature will be visible on the email: If, for whatever reason, you need to select a different Email Signature, then press the Insert Signature button, which will then prompt you to select a new Email Signature to use: I am really glad that this feature has finally been added to CRM, however\u0026hellip;\nThere appear to be three glaring issues, that really need to be addressed in order to make Email Signatures work better:\nEmail Signatures are only configurable on a per-user basis. What that translates to is that if one user creates an email signature and sets it as their default, another user can log in and see this, but cannot apply it to themselves or set it as their default; if the second user wanted an email signature, they would need to create one manually. The implications for this should be fairly obvious, and I find it somewhat confusing that there is not way to setup a common template that can be then be applied and customised individually for each user on CRM. There is no option, unlike Email Templates, to insert dynamic Data Field Values. So you cannot, for example, populate an e-mail signature based on information from the Job Title field on the currently logged in User account. This makes the feature impractical as a central Email Signature management tool; instead, you would have to go through the potentially rather tedious task of setting up Email Signatures for every user account on a system. Not so bad if you have a dozen or so users on your CRM instance, but if you have hundreds of users\u0026hellip;you get the point. Whilst recently studying for the CRM 2016 Service exam, I was really impressed to see that the Knowledge Article feature had been given a face lift in line with the new Interactive Service Dashboard. In particular, the text editing functionality has been improved significantly, with a range of new text editing options - many of which are not included as part of creating an Email Signature. 
Below is an image highlighting each of the new text editing features not available on Email Signatures, but available as part of the enhanced Knowledge Article functionality: As you can see, there are a number options as part of the above which would be incredibly useful from a Email Signature creation point of view - Insert Image, Font Colour and View Source (perhaps the most crucial, if your organisation uses HTML signatures). I wonder why, therefore, the new Email Signature feature was not modeled in the same vein as the above.\nConclusions\nPreviously, in an attempt to replicate email signature functionality, the recommended approach was to setup an Email Template. This is beneficial when it comes to larger CRM deployments, as the dynamic data fields functionality can be utilised when creating a common template. The introduction of the new Email Signature functionality does not, in my view, mean there should be a change to this approach. I think if your CRM deployment contains a small amount of users and you have a very simplistic, existing email signature, then you can perhaps get away with using this new feature without causing yourself too many problems. Until the above issues are addressed however, I would not recommend migrating away from what you are using currently to provide email signature functionality within your CRM. This is a real shame, as I was hoping the introduction of this feature would resolve some of the headaches that I have encountered previously working with complex email signatures in CRM. Fingers cross we see Email Signatures get a bit more love and attention as part of the next major release of CRM.\n","date":"2016-06-19T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/how-to-setup-email-signatures-dynamics-crm-2016-update-1sp-1/","title":"How to Setup Email Signatures (Dynamics CRM 2016 Update 1/SP 1)"},{"content":"The current talk around CRM at the moment is all about the Spring Wave AKA the CRM 2016 Update 1. Unlike Dynamics CRM 2015 Update 1, released at the same time last year, On-Premise customers can also take advantage of the latest update now by downloading Service Pack 1. Now it\u0026rsquo;s worth pointing out that some features, such as Guided Help and Field Service Management, are not made available to On-Premise customers at this time. Nevertheless, I think it\u0026rsquo;s a really positive step forward that On-Premise customers get most, if not all, of the latest updates as part of the Spring Wave, at the same time as Online customers. I\u0026rsquo;ve already talked previously about some of the great things to expect as part of the Spring Wave, so I would strongly recommend that organisations look at scheduling in their on-premise upgrade sooner rather than later. The main reason is the simplicity involved behind the actual upgrade process, which is the focus for this week\u0026rsquo;s blog post:\nHow to Download Service Pack 1 If you have enrolled your Dynamics CRM installation into Windows Update, then currently SP1 does not appear when your run Check for Updates on your CRM Server (this will change starting from Q3, according this knowledgebase article). Therefore, you will need to download the update manually via the following link:\nhttps://www.microsoft.com/en-us/download/details.aspx?id=52662\nThere are a number of install files available as part of the update. 
On first glance, it can be difficult to determine which update file relates to which component of your CRM installation, so I have provided a summary of this below:\nCRM2016-Client-KB3154952-ENU-Amd64.exe - Outlook Client (64 Bit) CRM2016-Client-KB3154952-ENU-I386.exe - Outlook Client (32 Bit) CRM2016-Mui-KB3154952-ENU-Amd64.exe - Outlook Client (64 Bit) Language Pack - English CRM2016-Mui-KB3154952-ENU-I386.exe - Outlook Client (32 Bit) Language Pack - English CRM2016-Router-KB3154952-ENU-Amd64.exe - Email Router (64 Bit) CRM2016-Router-KB3154952-ENU-I386.exe - Email Router (32 Bit) CRM2016-Server-KB3154952-ENU-Amd64.exe - Server Update CRM2016-Srs-KB3154952-ENU-Amd64.exe - Reporting Extensions There is also a final file - CRM2016-Tools-KB3154952-ENU-amd64.exe - which I believe may be some kind of utility to perform database upgrades. I have been unable to get this working successfully on my end and, from the looks it, the server update performs the database upgrade as part of the update. If anyone knows what this is for and how it works, let me know in the comments below.\nInstalling the Update The update is really simple - this is backed up by the fact that it can be completed in as little as 6 button presses!\nAfter running the self-extracting installer, you will be greeted with the first screen of the setup process:\nAs is always the case, accept the license agreement:\nThen, confirm that you are ready to begin the installation:\nThe installation process shouldn\u0026rsquo;t take long (about 10 -15 minutes when I performed it). Once complete, you\u0026rsquo;ll then be able to view the installation log file and specify whether or not you want to restart the server immediately:\nAs part of the update, your organisations and databases will be automatically updated to the latest version. So, no need to manually perform this yourself after the update.\nReporting Extensions Update The only thing to point out with this is that this will need to be installed on the same machine as your CRM\u0026rsquo;s SSRS instance. Depending on your On-Premise deployment, your SSRS instance may be installed on a different server. The actual process of installing the update is identical to the above. Fortunately, however, a server reboot is not required 🙂\nOutlook Client Update Updating the Outlook client is also very similar and straightforward. Any existing connections will still work after the update. You will also need to install the update(s) for any currently installed language pack(s) immediately after this update is complete. In most cases, this will just be the one update for your CRM organization\u0026rsquo;s base language. Depending on the size and complexity of your CRM deployment, you may need to plan carefully to ensure a successful roll-out; you may be happy to note that the existing CRM client should still work with a 8.1 CRM instance, so you can always choose to defer this part of the update entirely.\nThe Testing Conundrum As with any application update, it is always prudent to ensure that you have performed some kind of testing. This can be invaluable in identifying issues that can then be resolved in good time, without causing disruption to colleagues/end-users of the application.\nThe difficulty in this particular case is that the update appears to automatically update all organizations on a CRM instance in one fell swoop. So you cannot, for example, setup a copy of your primary CRM instance that you can perform a test upgrade on. 
You would therefore have to deploy an entirely separate CRM instance which you can perform the necessary testing on. You can use a trial key to cover the initial licensing hurdle, but you will still need to allocate hardware and put time aside to get your temporary testing environment setup.\nGiven that this release is not a \u0026ldquo;major\u0026rdquo; release of the application, I would argue softly that you can probably throw caution to the wind and perform the update with a minimal amount of testing, particularly if your CRM instance has not been significantly customised/extended. There does not appear to be anything majorly changed under the hood of CRM for this release that could cause problems (unlike the 2015 Spring Wave release for CRM Online). In order to best inform your decision on this, you should consider the following factors:\nAre you able to schedule your CRM update over a weekend or extended out of hours slot, and have resources in place to perform any post-update testing? If the answer is yes, then you may be able to get away with not performing any pre-update testing. What is the maximum amount of time your CRM instance can be taken down for during normal working hours? Does your CRM utilise the Email Router, CRM for Outlook and/or multiple language packs? If yes, then performing upgrade testing of these components may be beneficial. Does your business have a spare Windows Server instance that can be used for a test upgrade, without incurring additional cost to the business? If yes, then a major impediment has been eliminated, meaning that you should look at performing a test update. Has anyone performed an On-Premise upgrade to SP1 yet? Let me know your experience and comments below!\n","date":"2016-06-12T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/installing-dynamics-crm-2016-sp1-on-premise/","title":"Installing Dynamics CRM 2016 SP1 (On-Premise)"},{"content":"As part of a recent project I have been involved in, one of the requirements was to facilitate a bulk data import process via a single import spreadsheet, which would then create several different CRM Entity records at once. I was already aware of the Bulk Data Import feature within CRM, and the ability to create pre-defined data maps via the GUI interface; what I wasn\u0026rsquo;t aware of was that there is also a means of creating the very same data maps via the SDK or an .xml import. This was a pleasant surprise to say the least and, from the looks of it online, there is very little in the way of resource available for what is, arguably, a very powerful out of the box feature within Dynamics CRM. Expect a future blog post from me that dives a bit deeper into this subject, as I think the capabilities of this tool could fit a variety of differing requirements; if only the instructions within the SDK were a bit more clearer when it comes to working with the raw data map .xml!\nWhilst attempting to perform a proof of concept test, I uploaded a data map .xml into CRM, which contained references to four different entities. When I then attempted to run the data import using a test file, I encountered a strange issue which meant I could not proceed with the import. This occurred when CRM attempted to map the record types specified as part of the data map:\nCan you spot the two issues?\nThe first, more obvious, problem is the yellow triangle on the 3rd entity option and the fact that the Record Type is empty. Whereas the 3 other options will display clearly the Display Names of the entities (e.g. 
Account, Lead), the Display Name of the problematic entity is not visible at all.\nThe more eagle-eyed readers may also have spotted the second, more serious issue; the Next button is grayed out, meaning we have no way to move to the next step of the data import. With no means of figuring out, exactly, what may be causing the problem based on the above screen, you can potentially expect a long-haul investigation to commence.\nAs it turns out, the issue was both simple and frustrating in equal measure: because the third entity in the list above had the exact same Display Name value as another entity within the system, CRM got confused and was unable to display and map to the correct entity. So, for example, let\u0026rsquo;s say you create a custom entity to record information relating to houses, called \u0026ldquo;Property\u0026rdquo;, with a Name of \u0026ldquo;new_property\u0026rdquo;. Without perhaps realising, CRM already has a system entity called \u0026ldquo;Property\u0026rdquo;, with a Name of \u0026ldquo;dynamicproperty\u0026rdquo;. It doesn\u0026rsquo;t matter that the Names of the entities are completely different; so long as the Display Names are the same, you will encounter the issue highlighted above. So, after renaming the entity concerned with a unique Display Name (don\u0026rsquo;t forget to publish!), I was able to proceed successfully through the data import wizard.\nSo why is this frustrating? If, like me, you have worked with CRM for some time, your first assumption would be that the system would always rely on the Name of objects when it comes to determining unique entities, attributes etc. The above appears to be the exception to the rule, and is frustrating for the fact that it throws out of the window any assumed experience when it comes to working under the hood of CRM. Perhaps it was (fairly) assumed that, because logical names must always be unique for entities, the data map feature could just use the Display Name as opposed to the Name when attempting to map record types. I would expect that CRM is performing some kind of query behind the scenes at the above stage of the Data Import wizard, and that the Display Name field value is included as a parameter for all entities included in the data map. In summary, I would suggest that this is a minor bug, but something that would be difficult to encounter as part of regular CRM use.\nHopefully the above post will help prevent anyone else from spending 6+ hours figuring out just what in the hell is going on. 🙂 As a final side note, what this problem and its solution demonstrate is an excellent best practice lesson: be sure to provide distinct Display Names for your custom entities.\n","date":"2016-06-05T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/the-curious-case-of-the-data-import-issue-dynamics-crm/","title":"The Curious Case of the Data Import Issue (Dynamics CRM)"},{"content":"In anticipation of the Spring Wave release of CRM (which I am really excited about), I have been doing some research into ADXstudio and CRM Portals. As you may already know, Microsoft bought ADXstudio last year and aims to tightly integrate the product under new branding, as an add-on for existing CRM Online customers and a volume license product for on-premise CRM deployments.
CRM Portals are shortly going to become more and more relevant in the months ahead, to the point of which they will be crucial for any serious CRM developer/consultant; my principal aim of the above is to get a flavour relating to the products background and Microsoft\u0026rsquo;s long term direction resulting from the acquisition.\nWhilst reading the Microsoft Acquisition FAQ on the Adxstudio website, I stumbled upon this rather interesting Q\u0026amp;A:\n9. Are there plans for an Adxstudio certification program?\nAdxstudio certification will be a module that CRM Partners will certify on as part as an all-inclusive Dynamics Service Certification (Target launch July 2016)\nAlthough vague in nature, there is enough detail there for us to assume that there will be some changes to the existing exams currently covering the CRM product this year - either modifications to an existing exam, an exam overhaul/replacement or even a brand new exam altogether. Here\u0026rsquo;s why I think it is incredibly likely to expect at least 1 new CRM exam before the end of the summer, and why this is, arguably, a good thing:\nI don\u0026rsquo;t speak with much authority on this first point unfortunately, but taking a cursory glance over the ADXStudio product suggests that there is a lot to learn and cover in order to implement and maintain a portal for your CRM instance. I therefore think it would prove difficult to incorporate the entire lifecycle of CRM Portals as part of an existing exam; indeed, the trend seems to suggest a move towards more specialised exams, given that the MB2-704 exam was, effectively, split into 2 new exams as part of the CRM 2016 release. Having a singular exam that does not distract itself with the breadth of other features within CRM would seem to be the best approach to ensure that professionals fully understand how to use the product. Having a dedicated exam, plus an accompanying Microsoft Learning course, for CRM Portals makes sense as it not necessarily a foregone conclusion that everything will currently be using ADXstudio as part of their CRM deployment. Granted, it has been incredibly popular amongst CRM Partners across the globe. But it is still likely that most CRM professionals would have only at least heard of ADXstudio previously, having no direct experience using the product. By putting in place an exam and Microsoft Learning course for the product, Microsoft will enable those people to dive into the product quickly and effectively and, by association, drive sales of the product in the years ahead. Microsoft have recently announced a price hike for all Microsoft Certification exams, which is expected to take effect for any exam booked globally after June 30th. It might just be a coincidence, but I can see the sense of holding off on releasing any significant new exams until the price hike takes effect. It\u0026rsquo;s just good business after all 🙂 To the chagrin of some within the CRM Community, there has been no new updated exam to replace the MB2-701 Extending Microsoft Dynamics CRM 2013 (although, strangely, updated Microsoft curriculum was released to Microsoft training providers for CRM 2015). This is something which I have been hotly anticipating myself for the past 6 months, given the number of changes involved as part of 2015\u0026rsquo;s Update 1. One would hope that, in line with Spring Wave release, an Extending Microsoft Dynamics CRM 2016 exam hits in the near future, so that CRM developers can demonstrate their competencies in the latest release(s) of CRM. 
Perhaps I am being overly hopeful in anticipating a new exam; it could just be that the existing MB2-714 exam is simply updated to incorporate new content relating to CRM Portal. I think this could be a damaging oversight though, and could have long-term implications in regards to driving adoption of the product. It is incredibly important (one would assume) that Microsoft is able to record as many CRM Portal subscription add-on/volume license purchases, as quickly as possible, for the new product; putting out an olive branch to those within the CRM community who have not yet taken the dive with ADXStudio is an important consideration that can only help achieve the assumed aims of Microsoft\u0026rsquo;s acquisition last year.\n","date":"2016-05-29T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/crm-portal-exam-imminent/","title":"CRM Portal Exam Imminent?"},{"content":"I recently took the plunge and bought an annual Visual Studio Professional subscription. I was already aware that organisations could purchase these subscriptions through a volume license agreement, and also knew about some of the benefits available as part of a subscription. As I begin to immerse myself more and more with Dynamics CRM and how the platform integrates with an increasing array of different Microsoft products, I thought having a subscription would be a good way to keep continually up to date with what is happening in the world of Microsoft. I\u0026rsquo;ve spent a couple of weeks now utilising the various benefits on offer as part of a subscription, and I am now fairly convinced that anyone who considers themselves to be a serious Microsoft professional should have a subscription, either on an individual basis or through the company they work with. Here\u0026rsquo;s just a few of the things that I have found beneficial about having a subscription, and why each one makes an important difference:\nAzure Credits Azure, I hope, needs no introduction for those who are currently working within the cloud space. I have been pushing myself towards fully understanding everything that Azure can offer, as the Dynamics CRM product is increasingly starting to offer various types of add-on/integrations with the platform (in particular, machine learning, which is something I am very interested in at the moment). The nice thing about having an (annual) Visual Studio Professional subscription is that you get £35 ($50) worth of Azure credits each month that you can use however you wish. The amount may not seem to be much, but with this you could very easily pay for a development virtual machine, a 250GB SQL database or multiple Basic 2GB SQL databases. Or alternatively, you can use the credit to help subsidise some of the more expensive of the previous options to help create your own personal Azure workbench, website etc. It\u0026rsquo;s looking more and more likely that CRM professionals will need to have a solid grasp of the Azure platform, what it can offer and how it relates to CRM in the months ahead; having these free credits gives you an excellent excuse to dive straight in.\nFree Dev Software Perhaps one of the major selling points of a subscription is access to development use only versions of Windows, SQL Server and other Microsoft products. 
Thanks to the license keys and software on offer, I have been able to successfully setup up my own development server \u0026ldquo;farm\u0026rdquo; through the use of Hyper-V - in the process, learning more about the setup and administration of Windows Server 2012 R2, Active Directory and the many other server roles that are required as part of a domain deployment. This is invaluable experience that is almost impossible to replicate via book reading, watching videos online etc. and I would encourage both individuals and organisations to make the appropriate investment in a dedicated development/test environment as an absolute minimum requirement if you are intending to offer some kind of IT provision. One drawback is that the really good software (*coughDynamicsCRMServercough*) is sadly not available as part of the Professional subscription. To get your hands on this, then you would need to look at purchasing an Enterprise subscription, which currently knocks in at a whopping £1,832.09 ($2,999). This is clearly a deliberate choice on behalf of Microsoft, given that you have the likes of enterprise-grade applications like Exchange and SharePoint up for grabs.\nTraining Courses One of the surprise benefits I found of having a subscription is access to the Imagine Academy e-learning site. Imagine Academy is geared towards usage by higher education providers as a supplement to any academic courses they may be offering instruction in and gives you access to online versions of most (not all, by the looks of it) of the official courses on offer by Microsoft for Windows, SQL Server, Dynamics and Azure. The courses are a mixture of module-based videos, with some multiple choice knowledge checks and interactive demonstrations (depending on the course you are working through). Whilst I will be quick to admit that these courses are no real substitute for their paid versions within a classroom/lab environment, I have been able to successfully use the courses on offer as a useful revision tool to recently pass the MB2-714 - Microsoft Dynamics CRM 2016 Customer Service exam, something which I was dreading as a result of the splitting out of these exams as part CRM 2016.\nXamarin Along with some of the other high profile acquisitions made within the last 12 months by Microsoft, it was announced earlier this year that Xamarin had now joined this list as well. As Microsoft begin to take a much more agnostic approach when it comes to which platforms their key applications can run on, the acquisition of Xamarin cements this further by encouraging C# developers to begin to modify their applications so that they can be run on iOS, OS X and Android, without the hassle or effort you would typically expect. Whilst Xamarin is available without the need to purchase a Visual Studio subscription, what you do get is access to some of the \u0026ldquo;getting started\u0026rdquo; resources as part of Xamarin University, in order to help developers get to grips with everything the platform has to offer. What\u0026rsquo;s also good with Xamarin is that you can download it for OS X and write C# code from there; something which I am still trying to get my head around and not something you would have expected to say 10 years ago! 
I am really interested by the potential that Xamarin has in helping to increase application portability, using a programming language that I have found to be incredibly versatile to help deploy apps onto my preferred mobile operating system, iOS.\nDoes anyone else have a list of what they think are the best benefits of a subscription, or general feedback on their Visual Studio subscription? Let me know in the comments below!\n","date":"2016-05-22T00:00:00Z","image":"/images/Microsoft-FI.png","permalink":"/why-all-microsoft-professionals-should-have-a-visual-studio-professional-msdn-subscription/","title":"Why all Microsoft professionals should have a Visual Studio Professional MSDN Subscription"},{"content":"The introduction of Excel Online within Dynamics CRM was one of those big, exciting moments within the applications history. For Excel heads globally, it provides a familiar interface in which CRM data can be consumed and modified, without need to take data completely off CRM in the process. It is also a feature that can easily be used by CRM Administrators in order to perform quick changes to CRM data. It\u0026rsquo;s also one of the benefits of using CRM Online over On-Premise, and probably something I should of included as part of my previous analysis on the subject.\nAs great as the feature is, like anything with CRM, it is subject to occasional issues in practice; particularly if you use bespoke security roles as opposed to the \u0026ldquo;out of the box\u0026rdquo; ones provided by Microsoft. We encountered an issue recently where one of our colleagues had a problem importing modified data from Excel Online back into CRM. Our colleague had no problem opening the data in Excel Online, with the problem only surfacing when they clicked the Save Changes to CRM button. The rather lovely looking error message looked something like this:\nUnhandled Exception: System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=8.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]: Principal user (Id=0c6ff908-a6c9-e511-8144-c4346bac5e0c, type=8) is missing prvReadImportFile privilege (Id=fe46d775-ca5c-4a09-af93-99a133455306)Detail:\n-2147220960\nPrincipal user (Id=0c6ff908-a6c9-e511-8144-c4346bac5e0c, type=8) is missing prvReadImportFile privilege (Id=fe46d775-ca5c-4a09-af93-99a133455306)\nWhen approaching any type of error message for the first time, it can be quite daunting figuring out what it is saying. Fortunately, in this case, there is only one line we really need to be concerned about, which is the \u0026hellip;. To translate to plain English, this line:\n\u0026lt;Message\u0026gt;Principal user (Id=0c6ff908-a6c9-e511-8144-c4346bac5e0c, type=8) is missing prvReadImportFile privilege (Id=fe46d775-ca5c-4a09-af93-99a133455306)\u0026lt;/Message\u0026gt; Means:\nI cannot complete this action for Joe Bloggs, because they are missing the Import Source File Read privilege!\n(Note: To help translate the above, I made use of the Security role UI to privilege mapping table on MSDN - a handy link to have in your browser favourites)\nGiving the user just this privilege did not resolve the issue, producing a completely different error message in the process. 
Rather than spend an inordinate amount of time replicating the action over and over again, we did a quick Google search to see if we could find a list of the minimum permissions required in order to complete the action. We were directed towards this forum post, with an answer from CRM MVP Jason Lattimer on what permissions were required in order to resolve the error message:\nRequired permissions:\n* Data Import (all) * Data Map (all) * Import Source File (all) * Web Wizard (all) * Web Wizard Access Privilege (all) * Wizard Page (all)\nProblem solved, you\u0026rsquo;d think? Well, unfortunately, in this case not - although at first we thought that things were working fine, as no error message cropped up. When we then monitored the data import job in the background, however, it was stuck at Parsing. At this point, we were really beginning to struggle to think of how to resolve the problem. It was then that we rather desperately took a look at other, successfully completed, System Jobs to see if there was anything obvious we could observe. We noticed that successfully completed Data Import jobs had 3 System Job Tasks associated with them, whereas our stuck job had only 1. At this stage, we asked: could we be missing privileges on the System Job entity? And, lo and behold, when we took a look at the user\u0026rsquo;s security role, there were no privileges configured for System Jobs. After a bit more trial and error, adding permissions one by one onto this role, we saw that the Data Import job ran successfully!\nSo just to confirm for those who may encounter the same problem in future, the full list of permissions required to get Excel Online Data Import working successfully comprises the ones highlighted above, plus the following additional privileges:\nCustomization Tab System Job Create: Business Unit Level Read: Business Unit Level Write: Business Unit Level Append: Business Unit Level Append To: Business Unit Level Assign: Business Unit Level e.g.\nWhat this problem (and its solution) demonstrates, I think, is the best way to approach day-to-day problems that may crop up within CRM:\nIt is reasonable to assume from the outset that the problem is due to a lack of security permissions; try not to over-complicate matters early on by assuming it could be something completely different. For example, if the task or action that you are trying to perform in CRM can be performed using an account/security role with greater permissions, then this will tell you straight away what the problem is. Having good \u0026ldquo;under the hood\u0026rdquo; knowledge of CRM is always helpful in a scenario like this, but this may not always be possible for those of you who work with CRM sparingly. To help you in this scenario, we can refer to some of the fantastic resources put online by the CRM community. Ben Hosking has a great blog post on the importance of thinking in entities when it comes to working within CRM, something which I think applies well in this particular example. By understanding that the System Job is a system entity, we can logically assume that it has its own set of required permissions. In diagnosing the issue in this case, we were able to refer to some of the previous System Job records in the system. As part of this, we observed that a System Job that completed successfully had 3 job records related to it. This immediately told us that there was something wrong with the user\u0026rsquo;s access to the entity in question (going back to the above, System Job is, after all, a system entity).
Sometimes, being able to understand the difference between an action, when it works and doesn\u0026rsquo;t work, can give you the information you need to make the logical next step jump on what to investigate further. And, last but not least, access to and the ability to use a search engine is always very helpful 🙂 ","date":"2016-05-15T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dynamics-crm-data-import-job-stuck-at-parsing/","title":"Dynamics CRM Data Import Job Stuck at Parsing"},{"content":"\nIt was a very nice post-bank holiday treat in the UK to be treated to news on Tuesday morning that the CRM 2016 Spring Wave has now been released, and can now be accessed as part of a Sandbox Reset or a new CRM Online Trial. 😀 Microsoft will be targeting organisation updates on Office 365 tenants starting from the 1st week of June, and it\u0026rsquo;s a better time then ever to upgrade to CRM 2016 if you are still using 2013/2015 within your Online environment.\nSo what can we look forward to as part of the new release? I have taken a look at what\u0026rsquo;s new (both obvious and under the hood), and have highlighted the 5 new features that I am most looking forward to as part of the new release:\nCharacteristics I\u0026rsquo;ve been getting myself more and more familiar with the service side of CRM (partly for exam preparation, and also because the new interactive service hub looks really cool!) and I have been impressed with the breadth of options that are available within CRM for service management, scheduling and resource management. One thing that is lacking though is some easy method of recording detailed information regarding a particular resource. For example, if you have a Network Engineer that you need to schedule out for a site visit and you need to ensure that he has a particular certification or qualification in order to complete the job, there is no (easy) way to record this against the resource in CRM.\nI am therefore pleased to see that new entities called Characteristics \u0026amp; Bookable Resource Characteristics have been added as part of this CRM release, with the following descriptions:\nCharacteristic - Represents the skills, education, and certifications of resources.\nBookable Resource Characteristic - Associates resources with their characteristics and specifies the proficiency level of a resource for that characteristic.\nThe entity has fields that records a Characteristic Type and Rating Value, which would seem to be perfect for scenarios similar to the example provided above.\nUnfortunately, after digging around in a trial version of the Spring Wave update, I cannot seem to find any obvious means of accessing these entities via the user interface. I assumed that you would be able to add this information from the User form, but no dice. Perhaps this entity has been added in preparation for the next big release of CRM, but there is nothing (in theory) stopping system customisers from exposing this entity manually themselves and start using it. If anyone else manages to find this within CRM, please let me know in the comments below!\nEmail Signatures Buried within the What\u0026rsquo;s New announcement page for the release, is the following small, but potentially significant, new feature announcement:\nSave time on your email correspondence and add a professional touch by adding an email signature. Use a default signature for everyday replies and new email messages, and then create other signatures for those special cases. 
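Even though these entities are not yet surfaced anywhere obvious in the user interface (as noted below), it should still be possible to poke at them directly via the Web API endpoint that shipped with CRM 2016. The snippet below is only a sketch and rests on a few assumptions - namely that the entity\u0026rsquo;s logical name is characteristic, that the default pluralised entity set name (characteristics) applies, and that you are querying an On-Premise organisation where Windows authentication will be accepted; the server and organisation names are, of course, made up:
# Pull back a handful of Characteristic records via the CRM 2016 Web API (hypothetical URL and entity set name)
$apiUrl = 'https://crm.mydomain.local/MyCRMOrganisation/api/data/v8.0/characteristics?$top=5'
$response = Invoke-RestMethod -Uri $apiUrl -Method Get -UseDefaultCredentials
$response.value | Format-List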
You can also assign a default signature to a queue. When you change the From field from a user to a queue, the default signature changes automatically.\nThis has been one of my major headaches in attempting to maintain consistent email signatures between our Outlook and CRM clients. The problem is compounded somewhat by the fact that the Email Template functionality does not offer an HTML editor. You therefore have to jump through a number of different hoops in order to replicate this within CRM, and even then it doesn\u0026rsquo;t look 100% perfect when compared to our Outlook signatures. If there is therefore a better way of managing this in CRM moving forward, then I am very interested in finding out more.\nOne problem I\u0026rsquo;ve noticed straight away though - similar to Characteristics, there doesn\u0026rsquo;t appear to be any way to access this through the CRM interface! There is a new entity in the system called Email Signature though, which can be accessed via Advanced Find, but it looks as if it\u0026rsquo;s not something that can be used yet. A real shame, as I was looking forward to seeing this in action.\nGuided Help Eagle-eyed CRM users will immediately spot, when first logging into the new CRM update, that the help button looks distinctly different in the top right corner of the screen:\nThe new release introduces the Guided Help feature, which aims to offer more detailed, context-sensitive information for users who need help within CRM. Clicking the above button will pop open a sidebar on the left of the screen that changes depending on where you are within CRM:\nThen, by clicking one of the green links that are listed (again, these change depending on where you are within CRM), you will be given a tour of how to complete a specific action, such as creating a Lead record:\nI am really excited to see how this feature develops over time, and hope that eventually you will have the option to create your own guided tours that can then be tailored specifically to individual CRM systems. This feature is also a great example of how Microsoft is leveraging some of the benefits and features of the Azure platform to deliver better services to their online customers.\nFeedback Within previous versions of CRM, there was no out of the box way of recording post-sale information in regards to Products that are set up on the system. I would say that pretty much all businesses are not just concerned with how well their products sell, but also want to drill down further and get feedback directly from their customers on a Product that has been sold to them.\nNow, in the Spring Wave release, the existing Feedback entity (previously used for Knowledgebase Articles) has been extended for use by other entities. To enable it for your system/custom entities, all you need to do is tick the Feedback option within the Entity Definition page:\nJust to advise you, once it has been enabled for an entity, it cannot be disabled. Once enabled, you can then access it from the entity record page:\nSo now, system customisers can modify their CRM systems in order to enable Product feedback and feedback for any type of record.
For example, you could configure Feedback for the Competitors entity as well, in order to allow sales users to capture feedback when speaking to potential customers about a supplier they are currently using.\nCRM Portals One of Microsoft\u0026rsquo;s most high-profile acquisitions last year was ADX Studio, a company that provides customer portal solutions for CRM - the great benefit being that you don\u0026rsquo;t need to be an ASP.NET whiz in order to set up highly functional, attractive portal systems that feed data into and out of CRM. Starting from the 2016 Spring Wave, Microsoft are starting to more tightly integrate the product with CRM, to the extent that portals will now be manageable from within CRM itself and integrated as part of the Office 365 subscription plans, effectively becoming an add-on product that can be purchased from the portal. Organisations that are considering using ADX Studio may be advised to wait and see what happens with this, as it is likely that Microsoft may introduce price breaks on the product in order to entice CRM Online users to try the product. In the meantime, trial ADXStudio accounts are still available on their website for those who are curious. I am hoping to start getting to grips with the product in the months ahead, so expect a blog post or two on the subject in future.\nWhat are you most excited about as part of the Spring Wave release? Let me know in the comments below.\n","date":"2016-05-08T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/whats-new-in-dynamics-crm-2016-spring-wave/","title":"What's New in Dynamics CRM 2016 Spring Wave"},{"content":"When I first started looking more closely at some of the new features within Dynamics CRM 2016, my initial thought was that this was a release for the fans (i.e. CRM Administrators, Customisers \u0026amp; Developers). Putting aside some of the big, headline-grabbing features such as the Interactive Service Hub and Word Templates, there looks to be a lot of subtle changes underneath the hood which those who work with CRM on a daily basis will be jumping for joy at.\nOne of the fundamental concepts that needs to be grasped as part of any CRM development work is Solutions. This includes understanding the differences between managed/unmanaged solution files, how to export/import solutions between different environments and how managed/unmanaged customisations are handled by CRM. There are many debates and discussions online regarding the subject, but the general rule of thumb is to use unmanaged solutions for the majority of your customisations, and only use managed solutions when you are an ISV developing a bespoke solution that performs a specific function and/or links in with a separate application.\nOne of my personal bugbears regarding Solutions is the time and effort involved as part of doing an update. If, for example, you have just one Solution file for your entire business\u0026rsquo;s CRM customisations, then over time pushing out a solution update into your Production environment can take up to 30 minutes or more to complete - even if your update only contains, for example, a field name change!
This can make pushing out hot-fixes or urgent updates more time-consuming and lead to delays in addressing problems within a live production system.\nWhilst doing some digging around within CRM 2016, I was very excited to see the following 2 new buttons on the Solutions page:\nCould it be that we now have a better and more efficient way of pushing out small parts of our CRM customisations and urgent hot-fixes? Before we take a look at how these buttons work in practice, it is useful to first explain what each one does in more detail:\nClone a Patch Clone a Patch creates a patch of your main solution, which can contain a selection of CRM customisations that you wish to export from the system. A solution can have multiple patch solutions. One thing to point out is that you must manually add the components you wish to include as part of your patch solution into the Patch solution file. When, for example, you add in the Entity which you wish to include in the patch, you can specify individual entity forms, views, charts etc. that you want to include, something which I really like:\nWhilst there is at least one patch of a solution present in your CRM system, you will not be able to edit the main solution file. CRM presents you with a yellow banner message to inform you of this when you enter the CRM Solution in question:\nIn order for the solution to become available for customisation again, you will need to either a) delete all patch solution files that derive from the main solution or b) use the Clone Solution button, which will be explained in more detail next.\nClone Solution Clone Solution enables you to quickly combine a solution and all of its descendant patches back into one solution file. Say, for example, you have 1 Main Solution file and 3 Patch Solutions derived from the Main Solution. Clicking the Clone Solution button will combine all of these together again as part of 1 solution file. As a result, this will then allow you to customise the main solution file again without any issues.\nSo how does this work in practice then? Let\u0026rsquo;s find out:\nTo start off with, here is a custom entity within a new solution file. Nothing has been customised for this entity at this stage:\nThis is then exported out of the system as a solution file and imported into our target environment. Next, let\u0026rsquo;s say we want to add a newly created field to our entity, as well as making some other changes within the main solution. These are at an incomplete stage, so we just want to export our new field only as a patch solution for now.\nOn the Solutions page, we select our Main solution file and press the Clone a Patch button. We are greeted with a screen similar to the below where we can specify a few optional details, before then confirming:\nNext, we then have to go into our Patch Solution and add in the elements that we wish to push out as part of a patch. In this case, we want to add a new field to our Test Entity. We therefore go into the Patch Solution file and go to Add Existing\u0026hellip; in order to add in our Test Entity. You will be greeted with the Entity Assets window above and, in this example, we want to include all assets so we just press Finish to add this in.\nNow we want to add in our new field:\nAs above, this is then imported into our target CRM environment, as you would do normally for any other solution file. It even looks the same!\nNow, let\u0026rsquo;s say we want to pull together all of our other changes as part of a traditional solution update.
We also want to rename our Testing Field as follows:\nWe select our Solution file and click on the Clone Solution button. At this stage, we can modify the display name of the Solution and also specify a new version:\nWhen we click Save, the patch solution file will vanish from our solution file, replaced by our one solution file which can now be edited and exported accordingly.\nAnd that\u0026rsquo;s it! It takes a while to get your head around, particularly if you have been used to working with Solutions in previous versions of CRM, and I think it will take a while before this starts to become widely used (to be honest, I have not yet used it as part of some of the recent solution updates I have had to do 😳). Still, it is good to know that for those who are just coming into CRM Administration/Development, you can get to grips with this feature straight away without having to concern yourself too greatly with the \u0026ldquo;old\u0026rdquo; method. Just don\u0026rsquo;t be surprised if you hear from CRM veterans something like \u0026ldquo;Back in my day, we just had Solution updates - and we were lucky to have them!\u0026rdquo;\nLet me know in the comments below if you have started to use Solution Patches yourself within your CRM 2016 environment.\n","date":"2016-05-01T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/a-better-solution-dynamics-crm-2016-changes-to-managedunmanaged-solutions/","title":"A Better Solution: Dynamics CRM 2016 Changes to Managed/Unmanaged Solutions"},{"content":"This is the second part of my 2-part series, continuing our evaluation of the new Word Templates feature versus the traditional CRM/SSRS Reports route. Word Templates were recently introduced as part of CRM 2016, and are one of the big new features that has got me really excited about the future of CRM. What I am keen to discover is if they can be utilised as effectively as .rdl CRM Reports to produce high quality and professional looking documents.\nLast week we took a look at the process and steps involved in setting up a Report. So now, let\u0026rsquo;s make a start and go through the step-by-step process of setting up a Word Template document on a CRM 2016 instance:\nFrom a setup point of view, there is much less that is required in order to start working with Word Templates: CRM Online 2016: If you don\u0026rsquo;t have access to a CRM Online 2016 instance, you can either reset a Sandbox Instance or start a free 30 day trial. My understanding is that Word Templates have been introduced as part of 2016 On-Premise CRM, but I\u0026rsquo;m unable to confirm this. Microsoft Word 2013/2016 Log into your CRM instance and navigate to a supported record type. For this example, we are going to use Lead: On the Form Ribbon, click on the ellipse to expand the button Options and select \u0026lsquo;Word Templates\u0026rsquo;: You will then be greeted with the \u0026lsquo;Create template from CRM data\u0026rsquo; window, enabling you to specify the template type you want to create (Excel or Word), confirm the entity data you wish to use and choose whether to upload an existing template or create one from scratch. We\u0026rsquo;ll click on Word Template and then press \u0026lsquo;Select Entity\u0026rsquo; to proceed: Finally, CRM will display an informational window which gives the user a quick summary of the different related record types and, therefore, what additional fields can be displayed on your Word Template. 
This can be quite useful for novice CRM users or for those who are unfamiliar with how a particular CRM system has been customised. When you are ready to continue, press \u0026lsquo;Download Template\u0026rsquo; and then save the document to your local computer: Once downloaded, open the template. You\u0026rsquo;ll be greeted with a blank Word document, similar to the below: Don\u0026rsquo;t panic though! The document has everything we need to make a start, but first we need to ensure that the Developer tab is visible. To switch this on, you will need to:\nGo to File -\u0026gt; Options to open up the Word Options window: Go to Customize Ribbon and make sure that the Developer Tab check-box is ticked. Once this is done, press OK: On the Developer Tab, you should see a button called \u0026lsquo;XML Mapping Pane\u0026rsquo;. Click this button to open up a new pane to the right of the screen: Under the \u0026lsquo;Custom XML Part\u0026rsquo; dropdown, you should see an option similar to this (may be different depending on the entity that you are building the template for): urn:microsoft-crm/document-template/lead/4/\nOnce selected, you should see all of your entity fields appear below:\nWith the XML Mapping configured correctly, our CRM data fields can be moved onto our Word Document. To copy across each field onto the Word Document, all you need to do is right click the field, select \u0026lsquo;Insert Content Control\u0026rsquo; and then \u0026lsquo;Plain Text\u0026rsquo;. Your field will be added onto your empty Word Document into the cursor area: Now we can start to build our report! Here\u0026rsquo;s one I made earlier, with the help of some of the existing templates that Word provides for Letters: With our document completed, we can now save it and upload it into CRM. To do this, we first need to navigate back to the \u0026lsquo;Create template from CRM data\u0026rsquo; from step 4) and, this time, select \u0026lsquo;Upload\u0026rsquo; instead of \u0026lsquo;Select Entity\u0026rsquo; to be greeted with the document upload window:\nOnce we have uploaded our document, we can then select our newly uploaded document from the Word Templates button on our Lead form to download the document, populated with our specific record information: Conclusions\nSo is it time to ditch .rdl Reports in favour of Word Templates then? I would certainly say so for instances where you just want to create documents which require very little data manipulation and where the key focus is around presentation of the document. Microsoft Word is certainly a much more accessible tool than SSRS when it comes to quickly creating documents that look visually appealing. That\u0026rsquo;s not to say that .rdl Reports will not still have a role moving forward, particularly when requirements are a little more complex. For example:\nYou need to use a customised FetchXML report to return data that is filtered a certain way or is, for example, returning multiple \u0026lt;link-entity\u0026gt; fields. You are wanting to develop a report that a user needs to be able to filter at report run-time. You need to leverage some of the advanced functionality made available via SSRS Expressions. I therefore do not foresee a massive exodus towards Word Templates in the near future. It is more likely instead that Word/Excel Templates become the \u0026ldquo;preferred\u0026rdquo; report building tool, whereas .rdl Reports instead are used for \u0026ldquo;advanced\u0026rdquo; scenarios. 
I certainly am looking forward to using Word (and indeed Excel) templates moving forward and, as part of this, ensuring that some of our CRM Super Users receive training on how to use the feature as well. Giving users the power to create their own reports, using the tools they know and use every day, is very exciting!\nOne observation I had in regard to Word Templates is that Word would occasionally hang on my computer for approx. 15 seconds when moving some of the fields from CRM around the document. I am guessing this delay might be caused by the fact the document is attempting to connect back to your CRM instance on regular intervals. Apart from that, there were really no issues in terms of usability and setup - everything was really straightforward, quick and familiar. These are most certainly the key experiences that Microsoft are aiming for as part of this new feature, and I am reasonably confident that any teething problems will be addressed swiftly so as to encourage as many people as possible to start using this feature moving forward.\n","date":"2016-04-24T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/a-new-template-for-success-word-templates-vs-crm-reports-part-2/","title":"A New Template for Success? Word Templates vs. CRM Reports (Part 2)"},{"content":"For those who have done a lot of work previously creating bespoke document templates within CRM, the only effective way in which you would traditionally do this within CRM was via a Report. For the uninitiated, reports are .rdl files that are created within CRM (for very basic reports) or via SQL Server Data Tools (for more complex/bespoke reports).\nDynamics CRM 2016 has potentially flipped this approach on its head with the introduction of Word Templates. Now, you can use Microsoft Word to develop and customise a template that can then be populated with the information you need from a CRM record. Given that Word is a far more accessible and familiar tool for many people, this new feature could be a game changer and major boon to CRM Administrators and Developers.\nHaving worked myself with SSRS a lot previously to create .rdl reports (and developed quite a fondness for it as a result), I am really interested in seeing whether it is more efficient and easier to use Word Templates compared to a .rdl report. So let\u0026rsquo;s find out by creating a very basic custom introduction letter that a business could use to generate for a Lead record. We\u0026rsquo;ll attempt both methods to see the steps, effort and ease of use involved for each, and then decide which tool is the winner. Given the number steps involved, this will be split across two separate posts, with the first post focusing on SSRS,\nThe steps below assume that you have not previously authored any custom SSRS reports on your system and that you have a FetchXML query ready to return data from CRM. 
We\u0026rsquo;ll be using the following basic query to return Lead data that should work with any CRM instance:\n\u0026lt;fetch version=\u0026#34;1.0\u0026#34; output-format=\u0026#34;xml-platform\u0026#34; mapping=\u0026#34;logical\u0026#34; distinct=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;entity name=\u0026#34;lead\u0026#34;\u0026gt; \u0026lt;attribute name=\u0026#34;fullname\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;companyname\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;telephone1\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;leadid\u0026#34; /\u0026gt; \u0026lt;order attribute=\u0026#34;fullname\u0026#34; descending=\u0026#34;false\u0026#34; /\u0026gt; \u0026lt;/entity\u0026gt; \u0026lt;/fetch\u0026gt; Please note the enableprefiltering=\u0026ldquo;1\u0026rdquo; option above, if you are using your own custom FetchXML query, then this line we will need to be added to the \u0026lt;entity\u0026gt; node. Otherwise, your report will not upload correctly later on:\nFirst things first, you will need to download all of the software you need in order to build the reports. Technet has a great article that goes over what you need, the salient bits of which are as follows: SQL Server Data Tools in Visual Studio – This is a free download. If you already have an existing version of Professional, Ultimate or Premium Edition Visual Studio, then double check your computer first as you may already have SSDT installed. In which case, you can skip this step. Microsoft Dynamics CRM 2016 Report Authoring Extensions Both downloads are fairly small and shouldn\u0026rsquo;t take long to install. During the Report Authoring Extension Setup, you may be prompted to install Microsoft Online Services Sign-in Assistant, press Yes if this the case:\nOnce installed, open up SSDT using Start -\u0026gt; Search or in Program Files -\u0026gt; Microsoft SQL Server 2012: Once open, go to \u0026lsquo;File\u0026rsquo; -\u0026gt; \u0026lsquo;New\u0026rsquo; -\u0026gt; \u0026lsquo;Project…\u0026rsquo; You\u0026rsquo;ll see a Window similar to below: Select the \u0026lsquo;Report Services Project Template\u0026rsquo;, give your Project a name and then press \u0026lsquo;OK\u0026rsquo;. An empty solution file will be created, with the following folders visible on the right under Solution Explorer:\nTypically, at this point you would create your Shared Data Sources/Datasets. Unfortunately, CRM reports do not support these so we need to create our report first. Right click the Reports folder and go to \u0026lsquo;Add\u0026rsquo; -\u0026gt; \u0026lsquo;New Item…\u0026rsquo;. Select Report, give it a logical name, and press \u0026lsquo;Add\u0026rsquo;: The report will open for you within VS:\nNow we can start to create our data sources 🙂\nOn the left hand side, under \u0026lsquo;Report Data\u0026rsquo;, right click on the Data Sources folder and select \u0026lsquo;Add Data Source…\u0026rsquo; You\u0026rsquo;ll then need to enter your CRM instance settings: Tick the box where it says Embedded Connection On the dropdown list for Type, select \u0026lsquo;Microsoft Dynamics CRM Fetch\u0026rsquo; Under Connection String, enter the URL for your CRM instance. There are also two additional parameters that can be specified, but only really useful if your URL points to multiple CRM organizations and/or if you have Active Directory Federation setup. 
Under Credentials, if you are connecting to an On-Premise CRM instance, select \u0026lsquo;Use Windows Authentication (integrated security)\u0026rsquo;; if your are connecting to CRM Online, then select \u0026lsquo;Use this user name and password\u0026rsquo; and enter your CRM Online login details.We\u0026rsquo;ll test these connection string settings in a few moments Now that we have our Data Sources, we can create our Datasets (too many Data\u0026rsquo;s, eh?). Right click on Datasets and select \u0026lsquo;Add Dataset…\u0026rsquo;: Enter the following settings:\nEnter a name for your Dataset, ideally something descriptive in terms of what data the report is returning Tick \u0026lsquo;Use a dataset embedded in my report.\u0026rsquo; Under Data Source, select your newly created CRM Data Source Ensure that under Query Type, Text is selected In Query, copy + paste or (for bonus points!) manually type your FetchXML query Click Refresh Fields. After a few moments, you should then be able to click on \u0026lsquo;Fields\u0026rsquo; on the left pane and see all of the fields from CRM. This means its working! If you get an error message, then double check your connection string details Now the fun part – time to build the report! I will probably do a future blog post on some of the cool things you can do with SSRS. Suffice it to say, for the purposes of this post, we are just going to create a very basic report that displays some field data in a tablix. The report will be run from an individual record, thereby pulling in its data fields. First of all, we will add a tablix to the report. This is as simple as right clicking on the report area, selecting Insert -\u0026gt; Table. The tablix will then appear with a dotted line on the report area: Because there is only one dataset configured for the report, the tablix will automatically associate itself with this. We can therefore start to add in the field by click on the top left a column (where the small table icon appears) and then selecting each individual data field we want to add. SSRS will also add on the appropriate header text for each field you put onto the tablix: For this report, we will add on the fullname, companyname and telephone1 fields. After adding on some customised text to make the report look like a letter, the report should look something like this:\nVery basic I know! But the report will illustrate well what .rdl reports can do within CRM.\nNow that the report is finished, we can upload it onto CRM. To do this, we will need to locate the .rdl file for the report first. Press \u0026ldquo;Build\u0026rdquo; on your Visual Studio solution (in order to ensure that you have the most up to date version saved to disk) and then open up your Project solution folder in Windows Explorer to find your newly created report: Moving across now into CRM, we need to either go into our target Solution or alternatively customise the system via the Default Solution (not recommended for development/Production environments!). On the left-hand bar, we will see a section called Reports. Click on it, and then on New in the centre area to open a pop-up window, which lets us start adding a report into CRM: It\u0026rsquo;s useful at this stage to explain what some of the different fields/options mean:\nReport Type: There are three options here. The first is Report Wizard Report, which takes you through a wizard in CRM to enable you to create a basic report utilising Advanced Find-like filter criteria. 
You can even make a copy of an existing report and modify it, though you will be limited in your customisation options. The second option is Link to Webpage and enables you to specify a URL to a report that exists outside of CRM (e.g. SAP Crystal Report). The final option, and the one we will be using today, is Existing File and lets you upload a .rdl file into CRM. Parent Report: Similar to what you can do within SSRS Server, you can setup Parent/Child Reports to link together similar reports (e.g. you could have a Parent Account report, that then has a Child Report which shows all of the Contact Details for the Account) Categories: These are grouping options to indicate the type of report you are adding. Related Record Types: Here you need to specify which CRM entities the report can be run from. Display In: Indicates where the report will be visible from. You can select one or many of the options. Reports Area will display the report from the Sitemap Report area, Forms for related record types will make the report available from the Form of the entities you have specified in Related Record Types and, finally, Lists for related record types will do the same, but from the Entity View page instead. Because we want our report to run on Lead forms only, we need to ensure that the Related Record Types contains Lead, and to ensure our Display In options are configured accordingly. Finally, we need to upload a report by selecting Choose File and then populate the other details accordingly. Your pop-up window should look similar to the below when you are ready to save: Save and the Publish your changes. Assuming no problems, you should now see your Report on the Run Report button on your Lead form: As you can see, creating a CRM .rdl report for the first time can be quite time consuming and then, depending on the complexity of the report you are trying to create, could take even longer on top of that. I am therefore really interested in finding out as part of next week\u0026rsquo;s blog post what the process is like for Word Templates, and whether it is a quicker and more effective means of creating reports.\n","date":"2016-04-17T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/a-new-template-for-success-word-templates-vs-crm-reports-part-1/","title":"A New Template for Success? Word Templates vs. CRM Reports (Part 1)"},{"content":"A slightly shorter blog post this week, as I have spent the majority of the weekend upgrading our CRM Online instance to 2016 🙂 Due to the size of our current deployment and, because some of the new features such as Word Templates and Solution Patches will help make a real difference for our end-users/CRM Administrators, we decided to take the plunge early on. 
As with any upgrade, these can be potentially fraught with many risks for larger deployments, so I will very quickly caveat this by saying that you should ensure that your business \u0026amp; CRM team are ready to make the jump and that you have put a solid plan in place in order to ensure the upgrade proceeds smoothly.\nMicrosoft have already published detailed articles that go through the upgrade process in detail, so today\u0026rsquo;s post is going to provide a more practical description of what to expect, both during and after the deployment.\nE-mail Notifications As the TechNet article above states:\nYou\u0026rsquo;ll receive reminders 90, 30, 15, and 7 days before the update begins\nThese will be sent to all CRM Administrators for your instance from the e-mail address crmonl@microsoft.com, so make sure you check your spam filter settings. The e-mail will look something like this:\nWhat happens when the upgrade starts You\u0026rsquo;ll get no e-mail notification at the exact moment when the upgrade starts, so you will need to ensure that your users have logged off your CRM instance at least 10-15 minutes before the upgrade starts, just to be on the safe side. One observation on this point is that there is no (easy) way for CRM Online Administrators to ensure that all of their logged-in users have left the application. I am really hoping that they make Administration Mode available to Production instances in the near future, as this will help greatly for scenarios like this or if you are, for example, planning on doing a Solution update during a specific time period.\nIf you attempt to log in to your CRM instance, you\u0026rsquo;ll be greeted with a screen similar to the below:\nWhilst the upgrade was being carried out, I noticed that the Status moved from \u0026ldquo;Disabled\u0026rdquo; to \u0026ldquo;Pending\u0026rdquo; during the upgrade process. You can therefore refresh the page in order to gain a brief indicator of how the upgrade is going.\nHow Long the Upgrade Takes In our case, it took a little over an hour for the upgrade to complete, which was great! In this instance, we were upgrading from CRM 2015 Update 1, so I assume this was a factor in ensuring the upgrade completed so quickly. I would assume that, if the update process is similar to how you would upgrade On-Premise CRM 2013 to CRM 2016 (as an example), then each version update is applied in sequential order (CRM 2013 -\u0026gt; CRM 2015 -\u0026gt; CRM 2016), therefore adding to the amount of time it takes to finish the upgrade.\nAs soon as the upgrade is finished, all CRM Administrators will get the following e-mail:\nWhat to Expect after the upgrade Again, this section is going to be more specific to scenarios where you are upgrading from CRM 2015 Update 1, though I would be interested in finding out if these behaviours differ in any other upgrade scenario:\nThis is perhaps more applicable before the upgrade even begins, but if you are making the jump from a much earlier version of CRM, then you may encounter serious problems with any form-level JScript. CRM has made some quite fundamental changes in recent versions in regard to the supported methods that should be used. Your CRM Administrators/Developers should have read and fully understood the What\u0026rsquo;s New for developers guide and, as part of any upgrade plan, some kind of test upgrade in a Sandbox/Trial CRM instance needs to be completed.
This should be pretty obvious, but doing this ensures that you can identify and fix these kinds of issues long before the upgrade begins. This is a strange one, but the upgrade reverted our CRM Theme back to the system default one. Fixing this is literally just a case of re-publishing your desired Theme from Customizations, but it is rather curious that this even happens in the first place. All Personal/System Settings remain as they were before the upgrade. Any waiting or in-progress workflows will remain processing in the background. If your organization is using the CRM for Outlook Client, then you may be pleased to hear that the 2015 Client is compatible and works successfully with CRM 2016. Following the upgrade, when your users first open Outlook again, they will be greeted with the following screen: Once this has completed, CRM for Outlook will then operate as normal.\nIn our upgrade, however, we did encounter a few cases where this did not always work. For example, the pop-up box above did not appear at all and, whenever we tried to navigate to an Entity list in Outlook, Outlook would hang like so:\nI suspect though that the problems in our case were down to something to do with our organisation\u0026rsquo;s infrastructure, as opposed to solely the CRM Upgrade itself. We managed to resolve the above by simply re-creating the connection to the CRM Organization. After that, everything worked perfectly 🙂 I would recommend that you look at upgrading to the latest Client eventually though (like we are), as it is my understanding that it includes a number of performance tweaks.\nFinal Thoughts I am really looking forward to working with CRM 2016 more closely in the weeks and months ahead. The release feels very familiar, but also comes packed with a number of small, but significant, improvements and new features. These are specifically designed to give end users more flexibility in how they use CRM, and to help make life easier for CRM Administrators/Developers.\nIs anyone else planning to upgrade, or have you already upgraded, to CRM 2016? Let me know in the comments below.\n","date":"2016-04-10T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/upgrading-to-dynamics-crm-2016-observations-experience/","title":"Upgrading to Dynamics CRM 2016: Observations \u0026 Experience"},{"content":"When working with CRM extensively across multiple environments, you do start to get into a fairly regular rhythm when it comes to completing key tasks again\u0026hellip;and again\u0026hellip;and again. One of these tasks is the process of rolling out a Solution update. To briefly summarise, this is where you export an updated version of your solution from your development environment and then import the solution file on top of a previous version in a different instance. CRM will then go through the solution import file, detect and then apply any changes which have been made. The original solution will be overwritten as part of this and you will be informed at the end of the import of any warnings or errors that were encountered during import. Warnings will generally not cause the solution import to fail, whereas errors will stop the import completely.\nLike with anything, Solution updates can sometimes fall over for a whole host of different reasons. They can fail altogether, or sometimes just hang and become unresponsive. If you are running On-Premise CRM, then you can interrogate the SQL database on the instance to see how your solution import is (or is not) progressing.
Ben Hosking (whose blog is mandatory reading for anyone who works closely with CRM) has written a really useful post which contains the SQL query you need to use on your organization database in order to identify any problematic job imports. The good thing with this approach is, if the import has errored, the Data column contains the raw XML file that you are able to download via the CRM GUI using the \u0026lsquo;Download Log File\u0026rsquo; button below when a solution import proceeds as you would expect normally:\nYou can therefore very quickly drill-down to see if the solution import has failed due to a customisation issue. A common reason for failure may be due to duplicate logical name(s) for attributes, relationships etc.\nIf you are using CRM Online, then the first assumption (which I admittedly made) may be that there is no way in which to access the above information. Fortunately, there is an entity called \u0026lsquo;importjob\u0026rsquo; that is made available for access via FetchXML\nSo, for example, to return all solution imports that have gone through your CRM, you would use the following FetchXML query:\n\u0026lt;fetch version=\u0026#34;1.0\u0026#34; output-format=\u0026#34;xml-platform\u0026#34; mapping=\u0026#34;logical\u0026#34; distinct=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;entity name=\u0026#34;importjob\u0026#34; \u0026gt; \u0026lt;attribute name=\u0026#34;completedon\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;solutionname\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;progress\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;startedon\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;completedon\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;createdby\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;data\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;organizationid\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;createdon\u0026#34; /\u0026gt; \u0026lt;/entity\u0026gt; \u0026lt;/fetch\u0026gt; If you wanted to just return solution imports that are either stuck at 0% or have not yet successfully completed:\n\u0026lt;fetch version=\u0026#34;1.0\u0026#34; output-format=\u0026#34;xml-platform\u0026#34; mapping=\u0026#34;logical\u0026#34; distinct=\u0026#34;false\u0026#34; \u0026gt; \u0026lt;entity name=\u0026#34;importjob\u0026#34; \u0026gt; \u0026lt;attribute name=\u0026#34;completedon\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;solutionname\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;progress\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;startedon\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;completedon\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;createdby\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;data\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;organizationid\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;createdon\u0026#34; /\u0026gt; \u0026lt;filter type=\u0026#34;or\u0026#34; \u0026gt; \u0026lt;condition attribute=\u0026#34;progress\u0026#34; operator=\u0026#34;eq\u0026#34; value=\u0026#34;0\u0026#34; /\u0026gt; \u0026lt;condition attribute=\u0026#34;completedon\u0026#34; operator=\u0026#34;null\u0026#34; /\u0026gt; \u0026lt;/filter\u0026gt; \u0026lt;/entity\u0026gt; \u0026lt;/fetch\u0026gt; Sometimes your import may fail due to a problem with CRM itself. 
In these instances, the best course of action depends, once again, on your CRM deployment:\nFor On-Premise users, the old IT adage applies here: try restarting the CRM and SQL Database Server instance. You may need to locate the active process in SQL Management Studio that is performing the solution import and kill the process first. A database instance reset should automatically cancel this and prevent it from running again on instance startup, but it\u0026rsquo;s better to be safe than sorry. The only recourse for Online users is to log a support request with Microsoft via the Office 365 portal. It is best to provide as much evidence as possible up-front and be advised that the Microsoft support engineer may ask you to demonstrate the problem again, which might prove difficult to perform during normal working hours if the problem is happening on your Production instance. I was glad to discover that there is a halfway method of being able to interrogate possible platform issues yourself on CRM Online, but this particular example illustrates one of the drawbacks of CRM Online: little or no direct access to the organization database and/or instance-level services. I would hope that in time Microsoft may develop the platform further in order to start exposing these elements to organization administrators. I would imagine that the vast majority of support queries that go through on the Office 365 portal would relate to requests that could be safely performed by CRM Administrators or Partners, leading to a cost and efficiency saving for Microsoft should this be implemented.\n","date":"2016-04-03T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/checking-crm-solution-import-progress-on-premiseonline/","title":"Checking CRM Solution Import Progress (On-Premise/Online)"},{"content":"For those businesses or individuals who are currently considering Dynamics CRM, one of the decisions that you will ultimately need to make is regarding whether you intend to use CRM Online or CRM On-Premise. For those whose first reaction to the previous statement is \u0026ldquo;Say what?\u0026rdquo;, here\u0026rsquo;s a brief breakdown of the two different options:\nCRM Online: An instance of CRM that is accessible via Office 365. CRM On-Premise: An installation of CRM on your own server(s). The word on the street these days is all about cloud computing and services, and how all organisations should have most, if not all, of their infrastructure within a hybrid public/private cloud configuration. However, it could be that you are required to host your CRM within a specific location due to regulatory or contractual requirements for your business. Or that you already have existing costs allocated towards server infrastructure that must be used as part of your CRM project. The list of potential reasons is endless, which is why CRM On-Premise exists in the first place and is still an essential requirement for many organisations.\nSo here\u0026rsquo;s a breakdown of some of the factors to consider, and my recommendation on the best approach to go for:\nIf you are already using Exchange Online with other Office 365 services, then CRM Online is the way to go One of the potential headaches when it comes to configuring CRM for first use is around e-mail synchronisation, something which I have hinted at in a previous blog post.
If you already use Exchange Online, then the setup steps involved are greatly reduced, as CRM will automatically detect your Exchange Online profile and settings if it is on the same tenant as your CRM Online instance. If not, then you\u0026rsquo;re going to have to look at other options such as the E-mail Router in order to link up your On-Premise Exchange Server, SMTP or POP3 e-mail system. These can be fiddly to set up and maintain, as you will require a dedicated machine that hosts the E-mail Router software and you may potentially have to liaise with other third-party e-mail system providers in order to troubleshoot any issues.\nSometimes it\u0026rsquo;s nice having the latest new product without paying extra for it For On-Premise CRM, you would need to factor in any future upgrade plans as part of your initial cost-investment into the system. Given the increased frequency of CRM releases, this could start adding up to big bucks after the first year or two. As an On-Premise customer, you will also miss out on any major updates in between versions, such as Update 1 last year for CRM Online. This was quite a fundamental and significant update, in my opinion, that helped to make CRM even easier to use. With CRM Online, you are always guaranteed to get the latest updates and thereby take advantage of some of the latest and best features available within CRM. The trade-off with this is that you must upgrade to the latest version of CRM eventually. You\u0026rsquo;ll be offered a date and time for the upgrade and can delay it, but you can\u0026rsquo;t stay in the past forever! This could present issues if, for example, you have written bespoke customisations that are no longer supported or have been deprecated. Be sure you have read and fully understand how updates work in CRM Online before making your decision.\nEvaluate your internal resources first It could be, for example, that your organisation is moving from an internal application system that uses SQL Server as the backend database system and that you have invested heavily in T-SQL administration/development training for several team members. The great thing about On-Premise CRM is that this skillset will not be lost, as you will need to maintain and manage your organisation database(s). And, if you\u0026rsquo;re really nice, you can also let them write beautifully bespoke SSRS reports directly against your databases and let them do all sorts of other fun data integration pieces using SQL Server.\nThe flip side of this should be obvious, but if you and your organisation don\u0026rsquo;t know your SELECTs from your WHEREs, then CRM Online could be the best choice, as you don\u0026rsquo;t have to worry about managing and maintaining a SQL database - Microsoft handles all of this for you. You can instead focus your and your team\u0026rsquo;s attention on learning potentially more relevant things relating to CRM (Online) directly.\nDo you trust Microsoft? It\u0026rsquo;s a serious question nowadays. In the world of cloud computing, can you say with 100% certainty that the organisation(s) where you are hosting or storing some of your business\u0026rsquo; key data and applications will a) ensure your data is kept securely and b) be able to offer satisfactory guarantees in relation to service availability? Whilst (touching wood) Office 365/CRM Online outages have been few and far between, the risk is still ever-present.
You will therefore need to evaluate the maximum amount of outage time that is acceptable for your business and put in place procedures to ensure that your business can keep working (for example, nightly backups of your CRM Data so that you can still access your Data via a spreadsheet/database export). The benefit of having an On-Premise CRM system is that you will more than likely have control over your server machines, as well as all the data that is stored on them, and can ensure your infrastructure is built to satisfy any concerns around outages or system failures.\nNice words for your Finance Team: Operational Expenditure is better than Capital Expenditure! Deciding to go with CRM Online could significantly simplify your organisation\u0026rsquo;s visibility over your month-by-month costs. If done correctly, you could even make the bold claim that you have successfully eliminated all capital expenditure (i.e. upfront software costs) relating to CRM systems within your business. Based on experience, most finance departments are happier knowing they have to pay X amount over a 12-month period, as opposed to being hit by large and unwieldy costs in a sporadic and uncertain way. So if you want to make BFFs within your finance/accounts team, then CRM Online is the fastest and best way to achieve this.\nLegacy Systems or that annoying finance system that\u0026rsquo;s 15 years old, but runs 20% of our business work and cannot be replaced Without the traditional database access that On-Premise provides, it may prove difficult integrating your CRM system with any legacy system, particularly if it\u0026rsquo;s a non-SQL database. That\u0026rsquo;s not to say that it\u0026rsquo;s not possible to find a solution using CRM Online, but you may have to expend significantly more resources setting up staging environments with your CRM Online/Legacy System, or look at writing customised code that performs the tasks that you need in order to \u0026ldquo;glue\u0026rdquo; both of the systems together.\nMake sure your technical team understand the limitations of CRM Online clearly, and that their feedback is factored in as part of the decision-making process I\u0026rsquo;ve already mentioned the most obvious limitation for CRM Online in the form of not having access to the CRM SQL Database. But there are other limitations too. For example, you are unable to directly query all of the information contained within the audit data entity. There is a very good (although outdated) article on TechNet which gives a flavour of some of the limitations within CRM Online. It is very important that, as part of any scoping exercise, your technical team is fully aware of the limitations of CRM Online, so that any potential difficulties around integration or data access can be mitigated from the outset.\nConclusions - or Wot I Think If you are a small-to-medium business already using Office 365 or planning to move across in the near future, then CRM Online is the obvious and best choice to ensure the most streamlined user experience and ease of management and setup.
If, however, you are a much larger organisation or are required to operate under specific compliance or regulatory requirements in respect to your business applications/data, then these are the types of scenarios where On-Premise CRM is pretty much an absolute requirement.\n","date":"2016-03-27T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/to-online-or-not-online-factors-to-consider-when-comparing-crm-onlineon-premise/","title":"To Online or Not Online: Factors to Consider when Comparing CRM Online/On-Premise"},{"content":"One of the great things about developing bespoke solutions for CRM is the ability to make changes to the sitemap navigation. For the uninitiated, the Sitemap is this area within CRM:\nThe areas and individual buttons can be modified to suit most requirements for organizations, to include links to custom entities, external applications or to an internal HTML/Silverlight web resource. As a result, CRM can be made to look highly bespoke and unique, as if it is a completely different CRM system altogether from the default setup.\nWe recently had a requirement to create a sitemap area button that would open a specific record. The record in question is one that will be updated frequently, so colleagues within the business require quick and easy access to it from the Sitemap area. We already know that this possible for opening specific entity views, as we have used this a number of times previously (for example, change the default view that opens when you click the \u0026lsquo;Accounts\u0026rsquo; button to \u0026ldquo;Accounts I Follow\u0026rdquo;). MSDN provides a great outline of how you go about doing this:\nTo display a list of entity records within the application for a SubArea set the Entity attribute value. This displays the default view for that entity and provides the correct title and icon.\nHowever, if you want to have a SubArea element that uses a specific initial default view, use the following Url pattern.\nUrl=\u0026quot;/_root/homepage.aspx?etn=\u0026amp;viewid=%7b%7d\u0026quot;\nSource: https://msdn.microsoft.com/en-gb/library/gg328483.aspx#BKMK_DisplayViewInApplicationUsingSiteMap\nThe question at this stage then is can we adapt the above method in order to open an entity record instead? Let\u0026rsquo;s give at a go, using our trusty XrmToolbox Sitemap Editor tool. The steps below assume that you already know how to use this tool and to make amends to the sitemap area.\nOn the above article page, we are told that in order to open an entity record via a URL, we need to provide the following query parameters:\netn: The logical name of the entity pagetype: In this instance, should be set to \u0026ldquo;entityrecord\u0026rdquo; id: The GUID for the CRM record to open. The best way to obtain this is to export the record to Excel and unhide all the columns; the GUID is then the value in the A column; you will need to change this to Upper Case via an Excel =UPPER function: Then, in order to ensure that the GUID is accepted correctly in the URL, we need to surround it with curly braces. As these character types are not accepted as part of a URL string, we need to provide the following substitute character strings:\n{ = %7B\n} = %7D\ne.g. {E16EE6D6-56B4-E511-80E2-2C59E541BD38} -\u0026gt; %7BE16EE6D6-56B4-E511-80E2-2C59E541BD38%7D\nSo let the trial and error begin! The most simple way of getting this to work would be to change the SubArea URL value to the full CRM instance URL. 
So, for example, our CRM Online URL would look something like this:\nhttps://mycrminstance.crm.dynamics.com/main.aspx?etn=test_mycustomentity\u0026pagetype=entityrecord\u0026id=%7BE16EE6D6-56B4-E511-80E2-2C59E541BD38%7D\nBut if we have separate development/production environments, then this is impractical as the link will not work when moving our solution between environments. Our preferred setup therefore is to look at using a relative URL path that works across different environments.\nWhat happens if we try adapting the URL example for views instead? So, in which case, our URL would be:\n/_root/homepage.aspx?etn=test_mycustomentity\u0026amp;pagetype=entityrecord\u0026amp;id=%7BE16EE6D6-56B4-E511-80E2-2C59E541BD38%7D\nThat\u0026rsquo;s a no then! The next step then is to take a closer look at the example URL\u0026rsquo;s provided and making some guesswork in regards to how relative URL\u0026rsquo;s function. If we assume then that our full URL is:\nhttps://mycrminstance.crm.dynamics.com/main.aspx?etn=test_mycustomentity\u0026pagetype=entityrecord\u0026id=%7BE16EE6D6-56B4-E511-80E2-2C59E541BD38%7D\nThen our relative URL would be:\n/main.aspx?etn=test_mycustomentity\u0026amp;pagetype=entityrecord\u0026amp;id=%7BE16EE6D6-56B4-E511-80E2-2C59E541BD38%7D\nAnd guess what? It works! One comment to make on this though is that the record opens in a brand new window within Internet Explorer \u0026amp; Google Chrome, so I would therefore presume that this is the case across all browsers. There are some additional query string parameters that can be specified in the URL to make this look more like a quick-edit \u0026ldquo;pop out\u0026rdquo; window:\ncmdbar: Setting this to \u0026ldquo;false\u0026rdquo; will hide the ribbon on the form navbar: Setting this to \u0026ldquo;off\u0026rdquo; will hide the sitemap navigation bar Our URL string and record window would therefore look this:\n/main.aspx?etn=test_mycustomentity\u0026amp;pagetype=entityrecord\u0026amp;id=%7BE16EE6D6-56B4-E511-80E2-2C59E541BD38%7D\u0026amp;cmdbar=false\u0026amp;navbar=off\nThe user can then make their changes to records, save and then close the window. Suffice it to say, it is good to know that it is possible to do this within CRM and that the trial and error steps involved were fairly minimal.\n","date":"2016-03-20T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/opening-individual-dynamics-crm-records-from-the-sitemap-area/","title":"Opening Individual Dynamics CRM Records from the Sitemap Area"},{"content":"The Scenario: You are running CRM Online in conjunction with some legacy database/application systems. These systems are setup with a SQL Server Reporting Services instance that is looking to either an SQL Server, OLE DB etc. database.\nThe Problem: You need to make data from your legacy systems visible within your CRM. The information needs to be displayed on the Entity Form and show specific information from the legacy database that relates to the CRM record.\nAdmittedly, the above is perhaps somewhat unlikely situation to find yourself in, but one which I recently had to try and address. I suppose the most straightforward resolution to the above is to just say \u0026ldquo;Get rid of the legacy system!\u0026rdquo;. 
Unfortunately, the suggestion didn\u0026rsquo;t go down to well when I voiced it myself\u0026hellip;\nSo at this point the next best answer looked to be try and utilise what we have within the existing infrastructure: an all singing, all-dancing SSRS and SQL Server database instance.\nWhat if we were to try uploading an .rdl file that includes a FetchXML and our SQL/OLE DB database data source into CRM? Whenever you try to perform this, you will get this error message:\nRats! So there is no way in which we can include a non-fetch XML Data Source to our separate SSRS report instance. So is there anything else within CRM that can be utilised to help in this situation? Let\u0026rsquo;s first take a quick look at the following nifty little feature within CRM, courtesy of our good friend MSDN:\nYou can use an IFRAME to display the contents from another website in a form, for example, in an ASP.NET page. Displaying an entity form within an IFrame embedded in another entity form is not supported.\nUse the getValue method on the attributes that contain the data that you want to pass to the other website, and compose a string of the query string arguments the other page will be able to use. Then use a Field OnChange event, IFRAME OnReadyStateComplete event, or Tab TabStateChange event and the setSrc method to append your parameters to the src property of the IFRAME or web resource.\nYou may want to change the target of the IFRAME based on such considerations as the data in the form or whether the user is working offline. You can set the target of the IFRAME dynamically.\nSource: https://msdn.microsoft.com/en-gb/library/gg328034.aspx\nHaving worked extensively with SSRS in the past, I am also aware that you can use an SSRS URL string in order to specify properties about how the report is rendered, its size and - most crucially - what the value of required parameters should be. The friend that keeps on giving has a great article that goes through everything that you can do with an SSRS report URL and also how to use Parameters as part of your URL. So in theory therefore, we can place an IFRAME on our form and then use JScript to access form-level field values and modify the IFRAME URL accordingly.\nHere are the steps involved:\nGo into Form Editor and add a new IFRAME to the form, specifying the following settings:\nName: The Logical name of the control, this will be required as part of the JScript code used later, so make a note of it. URL: As this is a mandatory field, you can specify any value here as it will change when the form is loaded by the user. This is not practical as we don\u0026rsquo;t want this to be displayed if, for example, the field that we are passing to the URL has no value in it. Our JScript code will sort this out in a few moments Label: This can be anything, and defaults to whatever is entered into the Name field Restrict cross-frame scripting, where supported: Untick this option Ensure that \u0026lsquo;Visible by default\u0026rsquo; is ticked Your settings should look something like this:\nCreate or modify an existing JScript Library for the form, adding in the following function (after modifying the values accordingly): function onLoad_LoadSSRSReport() { //First get the page type (Create, Update etc.) var pageType = Xrm.Page.ui.getFormType(); //Then, only proceed if the Form Type DOES NOT equal create, can be changed depending on requirements. 
Full list of form types can be found here: //https://msdn.microsoft.com/en-us/library/gg327828.aspx#BKMK_getFormType if (pageType != \u0026#34;1\u0026#34;) { //Get the value that you want to parameterise, in this case we are on the Account entity and need the Account Name var accountName = Xrm.Page.getAttribute(\u0026#34;name\u0026#34;).getValue(); //In order to \u0026#34;accept\u0026#34; the parameter into the URL, spaces need to be replaced with \u0026#34;+\u0026#34; icons accountName = accountName.replace(/ /g, \u0026#34;+\u0026#34;); //Now, get the the name of the IFRAME we want to update var iFrame = Xrm.Page.ui.controls.get(\u0026#34;IFRAME_myssrsreport\u0026#34;); //Then, specify the Report Server URL and Report Name. var reportURL = \u0026#34;https://myssrsserver/ReportServer?/My+Reports/My+Parameterised+Report\u0026amp;MyParameter=\u0026#34;; //Now combine the report url and parameter together into a full URL string var paramaterizedReportURL = reportURL + accountName; //Finally, if there is no value in the Account Name field, hide the IFRAME; otherwise, update the URL of the IFRAME accordingly. if (accountName == null) { iFrame.setVisible(false); } else { iFrame.setSrc(paramaterizedReportURL); } } } Add the function to the OnLoad event handler on the form. Now, when the form loads, it will update the IFRAME with the new URL with our required parameter. And there we go, we now have our separate SSRS instance report working within CRM! A few things to point out though:\nIf the report parameter supplied does not load any matching records, then SSRS will display a standard message to this effect. You would need to modify the report settings in order to display a custom message here, if desired. It is recommended that you have https:// binding setup on your report instance and supply this to as part of the setSrc method. http:// binding works, but you may need to change settings on your Web Browser in order to support mixed mode content. Full instructions on how to set this up can be found here. This may be stating the obvious here, but if your SSRS instance is not internet-facing, then you will get an error message in your IFRAME if you are not working from the same network as your SSRS instance. Fortunately, SSRS can be configued for an Internet deployment. The steps outlined in 1) can also be used to specify a non-parameterised SSRS report within an IFRAME dashboard too. I would recommend using the following SSRS system parameters as part of the URL though: rs:ClearSession=true rc:Toolbar=false e.g.\nhttps://myssrsserver/ReportServer/Pages/ReportViewer.aspx?%2fMy+Reports%2fMy+Non+Parameterised+Report\u0026amp;rs:ClearSession=true\u0026amp;rc:Toolbar=false\nOne of the most challenging things about any system migration is ensuring that information from other business systems can be made available, and it is good to know that CRM has supported approaches that can help to bridge the gap.\n","date":"2016-03-13T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/how-to-embed-a-non-crm-ssrs-report-into-a-crm-form/","title":"How to embed a non-CRM SSRS Report into a CRM Form"},{"content":"Working with Dynamics CRM can present some interesting challenges. What you tend to find is that you can pretty much say \u0026ldquo;Yes!\u0026rdquo; when it comes to doing most things you would expect from a CRM/database system, but there is a learning curve involved in figuring out the best approach to take. 
Often, as well, you may over-complicate matters and overlook a much easier solution to achieve what you need.\nTake, for example, modifying the FetchXML queries in a Public View that you have created programmatically. Let\u0026rsquo;s say you\u0026rsquo;ve created your own view within CRM using the following C# code snippet (adapted from the SDK sample):\nSavedQuery sq = new SavedQuery { Name = \u0026#34;My New View\u0026#34;, Description = \u0026#34;My view created in C# for the Account entity\u0026#34;, ReturnedTypeCode = \u0026#34;account\u0026#34;, FetchXml = fetchXml, LayoutXml = layoutXml, QueryType = 0 }; _customViewId = _serviceProxy.Create(sq); Console.WriteLine(\u0026#34;A new view with the name {0} was created.\u0026#34;, sq.Name); A few things to point out first with the above:\nIn order for this code to work, you would need to declare System.String values for fetchXml and layoutXml, as well as first connecting to CRM using the OrganizationServiceProxy (_serviceProxy). As well as specifying the FetchXML query you would like to use, you also have to specify a LayoutXML as a parameter in order to. Although Microsoft do have dedicated articles on MSDN that goes over the schema for this, there is a potential learning curve involved here for those who are unfamiliar with working with XML. ReturnedTypeCode is your entity logical name, which will need changing depending on the entity you are attempting to query Be sure to add in the appropriate namespace references, otherwise this code will not work. The code example above is all very well and good if you are just wanting to create a brand new view. But what happens if you need to change it in the future? We can modify the base properties of a view (Name, Description etc.) as well as the column layout via the CRM GUI, but when we attempt to modify the filter criteria (i.e. the FetchXML query), we will notice that the option is not available to use:\nThe next logical step would therefore be to look at creating some C# code that would take the existing view and modify the fetchXML query property. Unfortunately, Microsoft have not provided code examples on how this can be done, although it is in theory possible via the many methods at your disposal through the SDK.\nRather then spend days and potentially weeks writing a bespoke piece of code to do the job, it was then that I realised that I was being a little dense (as tends to happen) and that the Solution was sitting right in front of me. See what I did there?\nWhilst the general rule of thumb is \u0026ldquo;DON\u0026rsquo;T DO IT!!\u0026rdquo; when it comes to modifying an exported solution file, it is possible to do and pretty much anything within a solution file can be changed or modified to suit a particular requirement. And, as luck would have it, modifying Public Views (either ones created by yourself or system ones) is a supported task that you can perform on the solution file:\nDefinitions of views for entities are included in the customizations.xml file and may be manually edited. The view editor in the application is the most commonly used tool for this purpose. Editing customizations.xml is an alternative method\nSource: https://msdn.microsoft.com/en-gb/library/gg328486.aspx\nSo, in order to modify a custom Public Views FetchXML query, all you would need to do is:\nCreate a temporary, unmanaged solution file containing the entity with the custom Public View you want to change. 
Export as an unmanaged solution, unzip and open the customizations.xml file either in Notepad, Visual Studio or the XML editor program of your choice Use Ctrl + F to locate the savedquery node of the view you wish to change. It should look like this: \u0026lt;savedquery\u0026gt; \u0026lt;IsCustomizable\u0026gt;1\u0026lt;/IsCustomizable\u0026gt; \u0026lt;CanBeDeleted\u0026gt;1\u0026lt;/CanBeDeleted\u0026gt; \u0026lt;isquickfindquery\u0026gt;0\u0026lt;/isquickfindquery\u0026gt; \u0026lt;isprivate\u0026gt;0\u0026lt;/isprivate\u0026gt; \u0026lt;isdefault\u0026gt;0\u0026lt;/isdefault\u0026gt; \u0026lt;returnedtypecode\u0026gt;1\u0026lt;/returnedtypecode\u0026gt; \u0026lt;savedqueryid\u0026gt;{8e736028-47c7-e511-8107-3863bb345ac8}\u0026lt;/savedqueryid\u0026gt; \u0026lt;layoutxml\u0026gt; \u0026lt;grid name=\u0026#34;resultset\u0026#34; object=\u0026#34;1\u0026#34; jump=\u0026#34;firstname\u0026#34; select=\u0026#34;1\u0026#34; preview=\u0026#34;1\u0026#34; icon=\u0026#34;1\u0026#34;\u0026gt; \u0026lt;row name=\u0026#34;result\u0026#34; id=\u0026#34;accountid\u0026#34;\u0026gt; \u0026lt;cell name=\u0026#34;name\u0026#34; width=\u0026#34;150\u0026#34; /\u0026gt; \u0026lt;cell name=\u0026#34;statecode\u0026#34; width=\u0026#34;150\u0026#34; /\u0026gt; \u0026lt;cell name=\u0026#34;statuscode\u0026#34; width=\u0026#34;150\u0026#34; /\u0026gt; \u0026lt;cell name=\u0026#34;ownerid\u0026#34; width=\u0026#34;150\u0026#34; /\u0026gt; \u0026lt;cell name=\u0026#34;createdon\u0026#34; width=\u0026#34;150\u0026#34; /\u0026gt; \u0026lt;/row\u0026gt; \u0026lt;/grid\u0026gt; \u0026lt;/layoutxml\u0026gt; \u0026lt;querytype\u0026gt;0\u0026lt;/querytype\u0026gt; \u0026lt;fetchxml\u0026gt; \u0026lt;fetch version=\u0026#39;1.0\u0026#39; output-format=\u0026#39;xml-platform\u0026#39; mapping=\u0026#39;logical\u0026#39; distinct=\u0026#39;true\u0026#39;\u0026gt; \u0026lt;entity name=\u0026#39;account\u0026#39;\u0026gt; \u0026lt;attribute name=\u0026#39;createdon\u0026#39; /\u0026gt; \u0026lt;attribute name=\u0026#39;statuscode\u0026#39; /\u0026gt; \u0026lt;attribute name=\u0026#39;ownerid\u0026#39; /\u0026gt; \u0026lt;attribute name=\u0026#39;name\u0026#39; /\u0026gt; \u0026lt;attribute name=\u0026#39;statecode\u0026#39; /\u0026gt; \u0026lt;attribute name=\u0026#39;accountid\u0026#39; /\u0026gt; \u0026lt;order attribute=\u0026#39;name\u0026#39; descending=\u0026#39;false\u0026#39; /\u0026gt; \u0026lt;link-entity name=\u0026#39;email\u0026#39; from=\u0026#39;regardingobjectid\u0026#39; to=\u0026#39;accountid\u0026#39; alias=\u0026#39;ab\u0026#39; link-type=\u0026#39;outer\u0026#39;\u0026gt; \u0026lt;attribute name=\u0026#39;regardingobjectid\u0026#39; /\u0026gt; \u0026lt;/link-entity\u0026gt; \u0026lt;link-entity name=\u0026#39;lead\u0026#39; from=\u0026#39;parentaccountid\u0026#39; to=\u0026#39;accountid\u0026#39; alias=\u0026#39;al\u0026#39; link-type=\u0026#39;outer\u0026#39;\u0026gt; \u0026lt;link-entity name=\u0026#39;email\u0026#39; from=\u0026#39;regardingobjectid\u0026#39; to=\u0026#39;leadid\u0026#39; alias=\u0026#39;lp\u0026#39; link-type=\u0026#39;outer\u0026#39;\u0026gt; \u0026lt;attribute name=\u0026#39;regardingobjectid\u0026#39; /\u0026gt; \u0026lt;/link-entity\u0026gt; \u0026lt;/link-entity\u0026gt; \u0026lt;filter type=\u0026#39;and\u0026#39;\u0026gt; \u0026lt;condition entityname=\u0026#39;ab\u0026#39; attribute=\u0026#39;regardingobjectid\u0026#39; operator=\u0026#39;null\u0026#39; /\u0026gt; \u0026lt;condition entityname=\u0026#39;lp\u0026#39; attribute=\u0026#39;regardingobjectid\u0026#39; 
operator=\u0026#39;null\u0026#39; /\u0026gt; \u0026lt;filter type=\u0026#39;or\u0026#39;\u0026gt; \u0026lt;condition entityname=\u0026#39;ab\u0026#39; attribute=\u0026#39;createdon\u0026#39; operator=\u0026#39;olderthan-x-weeks\u0026#39; value=\u0026#39;1\u0026#39; /\u0026gt; \u0026lt;condition entityname=\u0026#39;ab\u0026#39; attribute=\u0026#39;createdon\u0026#39; operator=\u0026#39;null\u0026#39; /\u0026gt; \u0026lt;/filter\u0026gt; \u0026lt;/filter\u0026gt; \u0026lt;/entity\u0026gt; \u0026lt;/fetch\u0026gt; \u0026lt;/fetchxml\u0026gt; \u0026lt;IntroducedVersion\u0026gt;1.0\u0026lt;/IntroducedVersion\u0026gt; \u0026lt;LocalizedNames\u0026gt; \u0026lt;LocalizedName description=\u0026#34;My New View\u0026#34; languagecode=\u0026#34;1033\u0026#34; /\u0026gt; \u0026lt;/LocalizedNames\u0026gt; \u0026lt;Descriptions\u0026gt; \u0026lt;Description description=\u0026#34;My view created in C# for the Account entity\u0026#34; languagecode=\u0026#34;1033\u0026#34; /\u0026gt; \u0026lt;/Descriptions\u0026gt; \u0026lt;/savedquery\u0026gt; Modify the FetchXML query within the fetchxml node to your updated query. (Optional) If your updated query is simply making changes to the filter criteria, you can skip this step. Otherwise, if you have new fields that you would like to be displayed as part of the changes, you will also need to modify the layoutxml node so that it contains your new fields. Save the changes back into the solution file and then import the solution back into CRM. Test your view by opening it within the application and confirm everything looks OK. I\u0026rsquo;m sure you\u0026rsquo;ll agree that this is definitely a much easier and simpler way to make changes to your view. Just be careful when working within the solution file that you don\u0026rsquo;t accidentally delete/overwrite something!\n","date":"2016-03-06T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/modifying-systemcustom-views-fetchxml-query-in-dynamics-crm/","title":"Modifying System/Custom Views FetchXML Query in Dynamics CRM"},{"content":"One of our company directors has a big thing for Americanisms and American spellings over the \u0026ldquo;proper\u0026rdquo; English spellings/pronunciations. So it came as no surprise that when we first showed him CRM, he immediately pointed out the default names on the Address field composite box:\nSpecifically, he didn\u0026rsquo;t like \u0026ldquo;State/Province\u0026rdquo; or \u0026ldquo;Zip/Postal Code\u0026rdquo; (for those of you who are not aware, \u0026ldquo;State/Province\u0026rdquo; is the equivalent of a County over here and we generally refer to a \u0026ldquo;Zip/Postal Code\u0026rdquo; as a Postcode).\nBeing good and sympathetic fellow Englanders, our team went away to investigate. Changing the display name of each individual address field didn\u0026rsquo;t work, as you may have expected. It turns out that the composite address fields can instead be accessed and changed using the Xrm.Page.getControl JScript method to return each individual field and then set the label accordingly. But how do we find out the name of each control to access? Microsoft have a very informative article on MSDN that goes through Composite Controls and how they operate:\nFrom the article:\nYou can access the individual constituent controls displayed in the flyout by name. These controls use the following naming convention: \u0026lt;composite control name\u0026gt;_compositionLinkControl_\u0026lt;constituent attribute name\u0026gt;.
To access just the address_line1 control in the address1_composite control you would use: Xrm.Page.getControl(\u0026ldquo;address1_composite_compositionLinkControl_address1_line1\u0026rdquo;).\nSource: https://msdn.microsoft.com/en-gb/library/dn481581.aspx\nAll we need therefore is the attribute logical name for each of the constituent fields on the control, which we can then add to the getControl method!\nSo after a quick 5-10 minutes of coding, we now have the following nice little JScript function that fires OnLoad for all entity forms that use the Composite Address control for the Address 1 and Address 2 fields:\nfunction changeAddressLabels() { Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); Xrm.Page.getControl(\u0026#34;address1_composite_compositionLinkControl_address1_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line1\u0026#34;).setLabel(\u0026#34;Address 1\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line2\u0026#34;).setLabel(\u0026#34;Address 2\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_line3\u0026#34;).setLabel(\u0026#34;Address 3\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_city\u0026#34;).setLabel(\u0026#34;Town\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_stateorprovince\u0026#34;).setLabel(\u0026#34;County\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_postalcode\u0026#34;).setLabel(\u0026#34;Postal Code\u0026#34;); Xrm.Page.getControl(\u0026#34;address2_composite_compositionLinkControl_address2_country\u0026#34;).setLabel(\u0026#34;Country\u0026#34;); } A victory for our team and for England – cup of tea, anyone?\nDoes anyone else have any experience doing interesting things with Composite Controls? Please leave a comment below if you have.\n","date":"2016-02-28T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/renaming-crm-composite-address-fields-to-the-proper-way/","title":"Renaming CRM Composite Address Fields to the \"Proper\" Way"},{"content":"Microsoft Dynamics CRM comes with a number of out-of-the-box Security Roles that can be used to give users the correct permissions. Whilst this is helpful, they generally won\u0026rsquo;t be a good fit for most organisations, and a custom security role will be required in order to get the correct mix of permissions. These can either be created from scratch or based on one of the system defaults.
Regardless of how you go about it, the dreaded risk of permissions errors is ever present, and it can be very difficult at times to figure out which CRM feature relates to which security permission; it doesn\u0026rsquo;t help either that some of the system entity logical names are entirely different from their display names!\nA good case in point is Server-Side Synchronisation, a brilliant feature that takes a lot of the headache out of setting up your colleagues\u0026rsquo; e-mail addresses on CRM. But, if you decide to create your own custom security role in Dynamics CRM 2015 or earlier, you may end up running into this very frustrating error message when attempting to test and enable your users\u0026rsquo; mailbox:\nWell, at least we\u0026rsquo;ve got an error message - what does our best friend Google say? Rather annoyingly, there isn\u0026rsquo;t much that comes back search-wise, not even an official page from Microsoft that provides a list of the permissions that are needed in order to use this feature.\nA (not so quick) support case with Microsoft in order to find out just what permissions I need to increase/add onto my role will likely result in an answer similar to this:\n\u0026ldquo;In order to resolve the issue, make a copy of an existing security role and then reduce the privileges accordingly, as there are some hidden privileges within these roles that affect this feature.\u0026rdquo;\n\u0026ldquo;Hidden permissions\u0026rdquo; you say? That smells suspicious and is something that I have never come across in my work with CRM (though I am of course happy to stand corrected). Also, what if, in reducing the permissions to suit my business\u0026rsquo;s requirements, I accidentally remove the privileges that are needed for this to work? Looks like I\u0026rsquo;m going to have to find out which privileges are needed the hard way.\nSo, after some trial and error, I can now provide a complete list of all the permissions that you need to have on your security role in order for Server-Side Sync to work successfully. Please note the below assumes that you already have a separate security role set up that gives relevant permissions on the Appointment, Contacts and Activities entities within CRM:\nIncoming/Outgoing E-mail\nEmail Server Profile: Organization level Read\nMailbox: User level Create, Read \u0026amp; Write\nAppointments, Contacts and Tasks\nOrganization: Organization level Read\nSync to Outlook: Full privileges\nWith all of these privileges assigned, our test and enable of the mailbox works successfully:\nHopefully this helps someone who has spent countless hours pulling their hair out on how to get this working.\nFor those of you that are upgrading to CRM 2016 in the near future, there\u0026rsquo;s some good news relating to this: an extra button has been added to the error message that lets you expand it and view the system privilege name that is missing:\nSo, based on the above message of \u0026ldquo;prvReadOrganization privilege\u0026rdquo;, we know that we need to give Read privilege on the Organization entity! This is definitely a big help and a welcome new feature to have, as you can then go through and gradually add the missing permissions until everything is working. It\u0026rsquo;s little things like this that are making me more and more excited about upgrading to 2016 in the near future.
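As a final aside, whether you are on 2015 or 2016, you can take some of the guesswork out of comparing roles by asking the SDK exactly which privileges a given security role contains. The snippet below is a rough, untested sketch rather than anything official - it assumes you already have a connected OrganizationServiceProxy (referred to here as _serviceProxy), the GUID of the role you want to inspect (roleId), and references to the Microsoft.Xrm.Sdk, Microsoft.Xrm.Sdk.Query and Microsoft.Crm.Sdk.Messages namespaces:\nvar privilegesRequest = new RetrieveRolePrivilegesRoleRequest { RoleId = roleId }; var privilegesResponse = (RetrieveRolePrivilegesRoleResponse)_serviceProxy.Execute(privilegesRequest); foreach (RolePrivilege rolePrivilege in privilegesResponse.RolePrivileges) { //Retrieve the schema name of each privilege (e.g. prvReadOrganization) from the privilege entity Entity privilege = _serviceProxy.Retrieve(\u0026#34;privilege\u0026#34;, rolePrivilege.PrivilegeId, new ColumnSet(\u0026#34;name\u0026#34;)); Console.WriteLine(\u0026#34;{0} - {1}\u0026#34;, privilege[\u0026#34;name\u0026#34;], rolePrivilege.Depth); } Dumping out the privileges for a copied system role that works, and again for your custom role, should highlight the differences fairly quickly.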
Does anyone else have any tips or advice on how to get certain features working within CRM and what privileges are needed? Please use the comments below to share.\n","date":"2016-02-21T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/server-side-sync-permissions-for-custom-security-roles/","title":"Server-Side Sync Permissions for Custom Security Roles"},{"content":"First of all, I cannot take credit for the solution in this post. My thanks to the awesome guys over at iTG Technologies for researching and finding the fix for this.\nIf your organisation is on Office 365 and is still currently running Office 2013, then you may have noticed that the following message has suddenly popped up on your Office programs recently:\nThe bad news is that this message turns up of its own accord. This is not so much a problem if you are a home/single user, but it could cause huge issues for enterprise IT departments that are holding off on upgrading because of add-in compatibility, applications that have not yet been tested, or problems with the computer environment itself (e.g. computers not meeting the minimum/recommended specifications).\nThe good news is that this message can be disabled either via Group Policy or by making the following change within the registry on the affected computers:\nHKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\office\\15.0\\common\\officeupdate\nAdd the following value under the officeupdate subkey:\n\u0026ldquo;enableautomaticupgrade\u0026rdquo;=dword:00000000\nSource: https://support.microsoft.com/en-us/kb/3097292
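If you only have a couple of machines to sort out and a Group Policy feels like overkill, the same value can also be written from code. The snippet below is a rough sketch only, not something taken from the KB article - it needs to be run from an elevated (administrator) process, and if you run it as a 32-bit process on a 64-bit machine it is worth double-checking where the value actually ends up:\nusing Microsoft.Win32; //Writes enableautomaticupgrade as a DWORD value of 0 under the Office 2013 (15.0) policies key, creating the key if it does not already exist Registry.SetValue(@\u0026#34;HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\office\\15.0\\common\\officeupdate\u0026#34;, \u0026#34;enableautomaticupgrade\u0026#34;, 0, RegistryValueKind.DWord); For anything beyond a handful of machines, though, a Group Policy or a distributed .reg file remains the more sensible route.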
Whilst it is generally better to ensure you are on the latest versions of software, Office 2016 is still very early in its overall life-cycle, and it makes prudent sense to delay rolling it out across organisations for a few more months at least.\nHas anyone already upgraded or rolled out Office 2016 across their organisation yet? I would be interested in hearing your views/comments on how it is working for you.\n","date":"2016-02-16T00:00:00Z","image":"/images/Microsoft365-FI.png","permalink":"/how-to-disable-update-to-office-2016-in-office-2013-products/","title":"How to Disable Update to Office 2016 in Office 2013 Products"},{"content":"Microsoft has recently released four new exams for Dynamics CRM 2016. The first thing to point out is that these exams do not replace/supersede the exams for 2015, which are likely to stay around for at least another 2-3 years. Therefore, passing these exams should be a definite goal for anyone who works extensively with CRM or is planning on upgrading to 2016 soon. The four new exams are as follows:\nMB2-710: Microsoft Dynamics CRM Online Deployment\nMB2-712: Microsoft Dynamics 2016 Customization and Configuration\nMB2-713: Microsoft Dynamics 2016 Sales\nMB2-714: Microsoft Dynamics 2016 Customer Service\nThere are some interesting things to point out here. Microsoft has chosen not to provide an updated exam for MB2-708: Microsoft Dynamics CRM Installation, which focuses on On-Premise CRM installation and configuration. I\u0026rsquo;ve not yet been through a Dynamics CRM 2016 Server installation, so I would assume that not much has changed on this side of things. This would appear to match up with the look and feel of CRM 2016, which at first glance is very much the same as CRM 2015 Update 1. No significant changes presumably means no new exam.\nOn first glance, the new CRM Online Deployment exam looks precisely the same as its predecessor - even down to the title! Look more closely, however, and you will see there are some major differences in the learning objectives. This bit in particular stuck out for me:\nManage related services (10% - 15%)\nDescribe related services Identify related online services; integrate Microsoft Social Engagement with Microsoft Dynamics CRM Online; manage campaigns with Microsoft Dynamics Marketing Integrate Yammer and SharePoint Online Describe Yammer and SharePoint Online; identify SharePoint Online integration types; describe the integration process Integrate OneNote, Skype, Skype for Business, Office 365 Groups, and OneDrive for Business\nCompare Dynamics CRM Notes and OneNote; identify storage location for OneNote notebooks; configure OneNote integration; integrate Skype and Skype for Business; identify limitations for Skype and Skype for Business; describe Office 365 Groups; identify requirements for Office 365 Groups; integrate Office 365 Groups with Microsoft Dynamics CRM Online\nAs well as some of the expected platforms, such as Yammer and SharePoint, the relatively recent Office 365 Groups also makes an appearance. Suffice to say, integration with other Microsoft applications plays a much more significant role this time around, fitting in with the general strategy behind CRM Online/Office 365. That\u0026rsquo;s great news if you are familiar with the Office 365 platform, but it may present a challenge for those who primarily have CRM On-Premise experience and are looking to make the jump to CRM Online. Fortunately, Microsoft offers free trials of both Office 365 and CRM Online, so you can quite easily spin up a CRM Online instance alongside Exchange Online, SharePoint etc. to get up to speed.\nLastly, and something I\u0026rsquo;m disappointed about: there is no updated version of MB2-701: Extending Microsoft Dynamics CRM 2013, AKA the CRM Developers exam. The name of the exam tells you everything you need to know - it\u0026rsquo;s for CRM 2013! Whaaaaat!?! Unfortunately, CRM developers don\u0026rsquo;t seem to be getting much love these days… 🙁 Admittedly, from a developer\u0026rsquo;s point of view, there hasn\u0026rsquo;t been too much that has changed; however, CRM 2015 Update 1 rather sneakily altered some of the supported methods for form-level JScript, which has caused some issues with code that was written pre-Update 1. On the balance of things, therefore, an updated Developers exam would be great to see, and I was very much looking forward to seeing one as part of the newly refreshed exam list. It looks like I will have to attempt to sit MB2-701 at some point this year instead.\nI\u0026rsquo;ve already been able to sit and successfully pass MB2-712, so I\u0026rsquo;ve got my fingers crossed that I will be able to pass all three of the other exams too. Better start hitting the books soon!\nDo you have any advice or tips for these exams?\nDue to Microsoft\u0026rsquo;s NDA relating to each of the exams, I cannot disclose any details regarding the exams and their contents (as a result, any discussion in the comments that is in breach of this will be promptly removed). My best advice is to really focus on the learning objectives for each exam and, ideally, have a CRM instance open while you are revising so you can get some practice as you learn new concepts. PartnerSource has some great resources to download and test your knowledge on, or you could look at attending one of the many training courses out there from Microsoft Training Providers.
Those of you who have passed previous exams shouldn\u0026rsquo;t have any major trouble, but it\u0026rsquo;s always good to refresh your memory or go over things within CRM that you don\u0026rsquo;t use that often.\nGood luck to anyone who is sitting these exams – particularly if it\u0026rsquo;s your first time sitting them. Let me know how you get on in the comments below.\n","date":"2016-02-14T00:00:00Z","image":"/images/D365-FI.jpg","permalink":"/dynamics-crm-2016-exams/","title":"Dynamics CRM 2016 Exams"}]