This may be somewhat obvious from some of my more recent blog posts, but I am a huge fan of Application Insights at the moment. The best analogy I can use for it is that it’s a web developer’s Swiss Army knife, one that really puts you in the driving seat of understanding and maintaining your web application for the long term. The analytic tools available on the “knife” – error reporting, user trends, server performance and more – provide a treasure trove of information that can help to shape successful software deployments and ensure that developers properly understand the impact of their work within a production environment. If you are developing web applications utilising the Azure stack, there really is no good reason not to use it; and, even if you are not, you can still utilise the tool wherever your web application is hosted.

Recently, whilst fiddling with the tool, I was surprised to find another feature which I had overlooked – Availability testing. In simple terms, this enables you to configure proactive monitoring of your web application to make sure it is responding to requests. This can be particularly useful if you have availability targets or SLAs that your web application needs to adhere to. The feature is also tightly integrated with the other performance metrics gathered by the service, making it the ideal choice for satisfying any data analytics requirements. I wasn’t aware of the feature until I implemented it recently, so what I wanted to do in this week’s blog post is demonstrate how straightforward it is to set up Availability testing within Application Insights.

Setting up your first Availability Test

Configuring your first test is, ultimately, a simple process; however, there are a couple of settings that may require further explanation. The remainder of this section will go through the steps involved from start to finish. Be sure to have provisioned an Application Insights resource before starting and confirm that your target website is readily accessible over the internet.

Once your Application Insights resource is provisioned, navigate to the Availability tab on the left-hand side of the resource blade:

This will then expand the Availability test summary window, which should be blank as indicated below. Click on the Add Test button to begin creating the test:

There are a couple of basic information fields that need populating at this juncture:

  • Test name: This shouldn’t need explaining 🙂 Keep in mind the importance of having a unique name for the test, particularly if you are planning on utilising the reporting capabilities discussed later on in this post.
  • Test type: You have two options at your disposal here – a simple URL ping test, which contacts the URL you specify and captures the response code from the server, or a Multi-step web test, a more powerful test that carries out a set of predefined steps when crawling through your website. Visual Studio Enterprise is required to put these tests together and there are pricing implications when using it. You can find out more about this option via the Microsoft Docs website. For this example, we are going to use the URL ping test.
  • URL: This is the exact URL that will be contacted as part of the test. This will either be the root azurewebsites.net domain or your assigned custom domain.

Your Create test window should resemble the below after configuring the above:

The next step is to define the location where your test will be triggered from. If you are hosting your application in an existing Azure region, then it is recommended to select the next nearest geographical region for testing. Multiple regions can also be selected. In the example below, UK West is selected due to the website in question existing in the UK South region:

Next is to define your success/fail conditions for the test. This will vary depending on the type of test you are conducting but, for URL ping tests, you can define a timeout period for the test and also determine whether a specific HTTP response code is returned each time the test is executed. A 200 response code is the standard for indicating that a web page has loaded successfully, so this is the one that will be used for this example. You can also define this further by checking for specific keywords in the response text:
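
To make the pass/fail logic concrete, the checks behind a URL ping test can be sketched out as below. This is a simplified illustration only, not the actual Application Insights implementation – the function and parameter names are my own:

```python
def evaluate_ping_test(status_code, response_seconds, body,
                       expected_code=200, timeout_seconds=120,
                       required_keyword=None):
    """Mimic the success criteria of a URL ping test (illustrative only)."""
    if response_seconds > timeout_seconds:
        return False  # Test timed out
    if status_code != expected_code:
        return False  # Unexpected HTTP response code
    if required_keyword is not None and required_keyword not in body:
        return False  # Content match failed
    return True

# A healthy response with the expected keyword passes...
print(evaluate_ping_test(200, 1.5, "<html>Welcome</html>", required_keyword="Welcome"))
# ...whereas a 403 returned by a deny rule fails.
print(evaluate_ping_test(403, 1.5, "Forbidden"))
```

In essence, every configured condition must hold for the test run to be recorded as a success; a single failed condition marks the whole run as failed.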

Finally, you can designate the alert settings for when a failure is detected. This can range from an email to a specified list of contacts (more on this shortly) through to a webhook notification to an external HTTP/HTTPS endpoint. Example settings are utilised in the below screenshot that should be suitable for most scenarios:
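
To illustrate how the alert threshold behaves (a hypothetical helper of my own, not part of any Azure SDK), an alert of this kind typically fires once a minimum number of test locations report a failure:

```python
def should_alert(location_results, failed_location_threshold=3):
    """Fire an alert once the number of failing locations meets the threshold."""
    failed = sum(1 for success in location_results.values() if not success)
    return failed >= failed_location_threshold

# Three of the four locations below are failing, meeting the default threshold.
results = {"UK West": False, "North Europe": False, "West US": False, "East Asia": True}
print(should_alert(results))
```

Requiring failures from multiple locations before alerting helps to filter out transient network blips affecting a single test region.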

With everything configured, your Create test window should resemble the below if done correctly:

Press OK to create your test, which will begin executing immediately. You will need to wait some time for results to start appearing. For this example, we are attempting to query a website that has a specific deny rule for any non-allowed host, including the Availability test server. This is reflected in the results below, which indicate a 100% failure rate for the selected period:

Failed Test Notifications

As part of the example above, the test is configured to send email notifications out to Azure subscription owners when a fault with the application is detected. An example of how this looks can be seen below:

Whilst the above is useful for application developers/administrators to receive proactive notifications relating to their web applications, it doesn’t provide much from a reporting standpoint. Fortunately, this is where one of the feature’s major benefits comes into the equation.

Analysing Previous Availability Tests

Similar to other statistical information gathered by Application Insights, the results of each individual test are logged within Query Analytics as part of the availabilityResults schema. The following query can be used within Query Analytics to return key information for failed tests within a 24 hour period:

availabilityResults
| where success == 0 and timestamp > ago(24h)
| project timestamp, id, name, location, success, message, duration, appId, appName
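
Since results like these often feed into SLA reporting, it is worth remembering that the headline availability figure is simply successful test runs divided by total test runs. A quick Python sketch (with made-up result data) shows the calculation:

```python
def availability_percentage(results):
    """Calculate availability as the proportion of successful test runs."""
    if not results:
        return 0.0
    return 100.0 * sum(results) / len(results)

# 1 = success, 0 = failure; nine passes out of ten runs gives 90% availability.
sample_results = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
print(availability_percentage(sample_results))  # 90.0
```
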

If you also want to utilise the Continuous Export feature, as discussed previously on the blog, the great news is that this information is fully exportable to your destination of choice. A SQL database, a Power BI dashboard, an Azure Event Hub…your options are almost limitless 🙂 Below is the query that you can use to extract the key information from your Continuous Export stream:

SELECT 
    availabilityFlat.ArrayValue.testTimestamp AS TestTimestamp,
    availabilityFlat.ArrayValue.testRunId AS TestRunID,
    availabilityFlat.ArrayValue.testName AS TestName,
    availabilityFlat.ArrayValue.runLocation AS RunLocation,
    availabilityFlat.ArrayValue.result AS TestResult,
    availabilityFlat.ArrayValue.message AS Message,
    availabilityFlat.ArrayValue.durationMetric.value AS Duration,
    AR.EventProcessedUTCTime AS AvailabilityResultEventProcessedUTCTime
INTO AvailabilityResultsOutput
FROM AvailabilityResults AS AR
CROSS APPLY GetElements(AR.[availability]) AS availabilityFlat

Conclusions or Wot I Think

The best things in life are those that just keep on giving. My journey with Application Insights to date very much mirrors this. The number of business challenges and scenarios that I have been able to throw at it and walk away from with an effective and easy-to-implement solution grows every month. For me, it is slowly becoming the de facto tool to have deployed alongside any new web application. The ability to extend the tool further so that it is not just providing ongoing benefit but also proactive capabilities, via the Availability feature, is the area where I feel the tool thrives the most, both now and in the future. Anything that can take the headache out of diagnosing enterprise-grade web applications, whilst simultaneously driving insight into how a website experience can be improved for end users, wins in my book, and Application Insights continually proves itself in this regard.

I started working with Azure Application Insights towards the end of last year, during a major project for an enterprise organisation. At the time, our principal requirement was to ensure that we could effectively track the usage patterns of a website – the pages visited, the amount of time spent on each page etc. – all information types that, traditionally, you may turn to tools such as Google Analytics to generate. What swung the decision to go for Application Insights was the added value that the service provides from a developer standpoint, particularly if you are already heavily invested within .NET or Azure. The ability to view detailed information regarding errors on a web page, to automatically export this information out into Team Foundation Server/Team Services for further investigation, and the very extendable and customisable manner in which data can be accessed or exported were all major benefits for our particular scenario. It’s a decision that I have never looked back on and, even today, the product is still finding very neat and clever ways of surprising me 🙂

One of the other features that make Application Insights a more expandable solution than Google Analytics is the ability to extend the amount of information that is scraped from your website page requests or visitors. These custom properties will then be viewable within Application Insights Analytics, as well as being exportable (as we will discover shortly). For example, if you are interested in determining the previous URL that a web user was on before visiting a page within a C# MVC website, create a new class file within your project called ReferrerTelemetryInitializer and add the following sample code:

using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

namespace MyNamespace.MyProject.MyProjectName.MVC.Utilities
{
    public class ReferrerTelemetryInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            // Only stamp the custom property onto request telemetry items.
            if (telemetry is RequestTelemetry)
            {
                // HttpContext.Current can be null outside of the request pipeline,
                // so guard each step of the navigation.
                string referrer = HttpContext.Current?.Request?.UrlReferrer?.ToString();
                telemetry.Context.Properties["Referrer"] = referrer;
            }
        }
    }
}

Be sure to add all required assembly references, rename the namespace value accordingly and register the initializer with the Application Insights SDK (for example, within the TelemetryInitializers section of your ApplicationInsights.config file) before deploying out. After this is done, the following query can then be executed within Application Insights Analytics to access the Referrer information:

requests
| extend referrer = tostring(customDimensions.Referrer)
| project id, session_Id, user_Id, referrer

The extend portion of the query is required because Application Insights groups all custom property fields together into a key/value array. If you are working with other custom property fields, then you would just replace the value after the customDimensions. portion of the query with the property name declared within your code.

If you have very short-term and simplistic requirements for monitoring your website performance data, then the above solution will likely be sufficient for your purposes. But maybe you require data to be stored beyond the default 90-day retention limit or you have a requirement to incorporate the information as part of an existing database or reporting application. This is where the Continuous Export feature becomes really handy, by allowing you to continually stream all data that passes through the service to an Azure Storage Account. From there, you can look at configuring a Stream Analytics Job to parse the data and pass it through to your required location. Microsoft very handily provides two guides on how to get this data into a SQL database and also into a Stream Analytics Dashboard within Power BI.

What I like about Stream Analytics the most is that it utilises a SQL-like language when interacting with data. For recovering T-SQL addicts like myself, this can help overcome a major learning barrier when first getting to grips with the service. I would still urge some caution, however, and recommend you have the online documentation for the Stream Analytics Query Language to hand, as there are a few things that may catch you out. A good example of this is that whilst the language supports data type conversions via CAST operations, exactly like T-SQL, you are restricted to a limited list of data types as part of the conversion.

SQL developers may also encounter a major barrier when working with custom property data derived from Application Insights. Given the nature of how these are stored, there is specific Stream Analytics syntax that has to be utilised to access individual property values. We’ll take a closer look now to see just how this is done, continuing our earlier example utilising the Referrer field.

First of all, make sure you have configured an Input for your Request data from within Stream Analytics. The settings should look similar to the image below if done correctly:

The full Path pattern comprises the name of your Application Insights resource, followed by a GUID value (which can be found in the name of the Storage Account container utilised for Continuous Export) and, finally, the path pattern that determines the folder name and the date/time variables. It should look similar to this:

myapplicationinsightsaccount_16ef6b1f59944166bc37198a41c3fcf1/Requests/{date}/{time}

With your input configured correctly, you can now navigate to the Query tab and utilise the following query to return the same information that we accessed above within Application Insights:

SELECT 
    requestflat.ArrayValue.id AS RequestID,
    r.context.session.id AS SessionID,
    r.context.[user].anonId AS AnonymousID,
    GetRecordPropertyValue(GetArrayElement(r.[context].[custom].[dimensions], 5), 'Referrer') AS ReferralURL
INTO [RequestsOutput]
FROM [Requests] AS r
CROSS APPLY GetElements(r.[request]) AS requestflat
Now, it’s worth noting that the GetArrayElement function relies on a positional value within the array to return the data correctly (and the index is zero-based). For the example provided above, the Referrer field always occupies the same position within the array. It may therefore be necessary to inspect the values within the context.custom.dimensions object to determine the position of your required field; in the above query, you could add the field r.context.custom.dimensions to your SELECT clause to facilitate any further interrogation.
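
The fragility of positional access is easier to see outside of Stream Analytics. A short Python sketch (using a made-up dimensions array that mirrors the exported key/value shape) shows why searching by key is the safer mental model:

```python
def get_dimension(dimensions, key):
    """Search the key/value array for a property, regardless of its position."""
    for pair in dimensions:
        if key in pair:
            return pair[key]
    return None  # Property not present on this request

# Exported custom dimensions arrive as an array of single-entry objects,
# and the position of any given property within it is not guaranteed.
dimensions = [
    {"PageTheme": "Dark"},
    {"Referrer": "https://www.bing.com/"},
]
print(get_dimension(dimensions, "Referrer"))
```

A lookup by key like this never breaks when another custom property is added or removed, whereas a fixed GetArrayElement index silently starts returning the wrong field.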

Application Insights, in and of itself, arguably provides feature-rich and expansive capabilities as a standalone web analytics tool. It is certainly a lot more “open” with respect to how you can access the data – a welcome design decision that is at the heart of a lot of Microsoft products these days. When you start to combine Application Insights with Stream Analytics, a whole world of additional reporting capabilities and long-term analysis is immediately unlocked for your web applications. Stream Analytics is also incredibly helpful for those who come from a much more grounded background working with databases; using the tool, you can quickly interact with and convert data into the required format using a familiar language. It would be good to see, as part of some of the examples available online, tutorials on how to straightforwardly access Custom Dimensions properties, so as to make this task simpler to achieve. But this does not detract from how impressive I think both tools are – both as standalone products and combined together.