Archives For Logging


With the latest update of the Semantic Logging Application Block – Out-of-process Service NuGet Package, the user under which the Windows Service is executed has changed. In hopes of saving you countless hours of debugging, I am sharing the configuration that should be used on Azure Cloud Services.

When you start the Out-of-Process service, be sure to specify the LocalSystem account.

SemanticLogging-svc.exe -s -a=LocalSystem

Failure to execute under the right account will prevent the Out-of-Process service from logging Events to Azure Table Storage.

What is SLAB?

The Semantic Logging Application Block (SLAB) provides a set of destinations (sinks) to persist application events published using a subclass of the EventSource class from the System.Diagnostics.Tracing namespace. Sinks include Azure Table Storage, SQL Server databases, flat files, console, and rolling files with several formats, and you can extend the block by creating your own custom formatters and sinks. The console sink is part of this NuGet package; the other sinks mentioned above are available as separate NuGet packages. For the sinks that can store structured data, the block preserves the full structure of the event payload in order to facilitate analyzing or processing the logged data.
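As a minimal sketch of how the pieces fit together (the event source name, event, and message below are placeholders of my own, not part of the package), an EventSource subclass publishes typed events and an in-process ObservableEventListener routes them to the console sink:

using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

// A strongly typed event source; the name and event are placeholders.
[EventSource(Name = "MyCompany-MyService")]
public sealed class MyServiceEventSource : EventSource
{
    public static readonly MyServiceEventSource Log = new MyServiceEventSource();

    [Event(1, Level = EventLevel.Error, Message = "Failure: {0}")]
    public void Failure(string message)
    {
        if (IsEnabled()) WriteEvent(1, message);
    }
}

public static class Program
{
    public static void Main()
    {
        // Wire the event source to the console sink shipped with this package.
        var listener = new ObservableEventListener();
        listener.EnableEvents(MyServiceEventSource.Log, EventLevel.LogAlways, EventKeywords.All);
        listener.LogToConsole();

        MyServiceEventSource.Log.Failure("Something went horribly wrong.");
    }
}

The Azure Table Storage sink plugs into the same listener, which is exactly what the Out-of-Process service does on your behalf when it runs under the right account.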


HTTP/1.1 503 Service Unavailable

On Azure, we must design with cost of operation and service constraints in mind. I recently had an interesting event where my REST service, deployed on a small (A1) Cloud Service instance, started responding with HTTP status code 503 Service Unavailable.

The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. If known, the length of the delay MAY be indicated in a Retry-After header. If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response. [Source: HTTP Status Code Definitions]
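On the caller's side, that spec language translates into simple retry handling. Here is a hedged sketch (the endpoint, attempt count, and fallback delay are assumptions of mine) of a client that honors Retry-After before retrying:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class RetryAfterClient
{
    // Calls the service and, on 503, waits the server-suggested delay
    // before retrying. URL, attempt count, and fallback delay are placeholders.
    public static async Task<string> GetWithRetryAsync(string url, int maxAttempts = 3)
    {
        using (var client = new HttpClient())
        {
            for (var attempt = 1; attempt <= maxAttempts; attempt++)
            {
                var response = await client.GetAsync(url);

                if (response.StatusCode != HttpStatusCode.ServiceUnavailable)
                {
                    response.EnsureSuccessStatusCode(); // other failures throw
                    return await response.Content.ReadAsStringAsync();
                }

                // Honor Retry-After when present; otherwise treat it like a 500
                // and back off with a default delay.
                var delay = response.Headers.RetryAfter != null
                            && response.Headers.RetryAfter.Delta.HasValue
                    ? response.Headers.RetryAfter.Delta.Value
                    : TimeSpan.FromSeconds(5);

                await Task.Delay(delay);
            }

            throw new HttpRequestException("Service still unavailable after retries.");
        }
    }
}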

Faced with this interesting challenge, I started looking in the usual places: SQL Database metrics, Azure Storage metrics, application logs, and performance counters.

Throughout my investigation, I noticed that the service was hit 1.5 million times a day on average, and that the open socket count was quite high. Trying to make some sense out of the situation, I started identifying resource contentions.

Looking at the application logs didn’t yield much information about why the network requests were piling up, but it did hint at internal process slowdowns following peak loads.

Working on extracting logs from Azure Table Storage, I finally got a break: the service was generating 5 to 10 megabytes of application logs per minute. To put this into perspective, the service requires enough IO capacity to respond to consumer requests, to push performance counter data to Azure Storage, to persist application logs to Azure Storage, and to satisfy the application's need to interact with external resources.

Based on my observations, I came to the conclusion that my resource contention was around IO. Now, I rarely recommend scaling up, but in this case it made sense because the service moves lots of data around. One solution would have been to turn off telemetry, but doing so would have made me as comfortable as flying a jumbo jet with a blindfold on. In other words, this isn't something I want to consider, because I believe that telemetry is crucial to understanding how an application behaves on the cloud.

Just to be clear, scaling up worked in this situation, but it may not resolve all your issues. There are times when we need to tweak IIS configurations to make better use of resources and to handle large numbers of requests. Scaling up should remain the last option on your list.



Lessons Learned – On #Azure, Log Everything!


Log everything, I mean it! If it wasn’t logged, it never happened.

So you’re probably thinking “doesn’t logging tax an application’s performance?” Absolutely! Some logging frameworks can even bring down your application under load, so you really need to be careful about what you log. Within that constraint, log everything that can help you figure out what went horribly wrong. I used the past tense because, on the cloud, it’s normal to fail. Never build an application without thinking about Mean Time to Recovery; in other words, how will you recover, and how long will it take? Sometimes, when you start on a fresh project, you can only plan for the obvious cases. That means we have a lot to learn from a brand new application. To facilitate this learning process, I urge you to log anything that can help the DevOps engineer who will get a phone call late at night. Who knows… you might be the one supporting this application.
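One inexpensive habit that keeps logging from taxing the hot path (a generic sketch, not tied to any particular framework; the event source and payload here are placeholders) is to guard expensive payload construction behind an IsEnabled check:

using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-Diagnostics")]
public sealed class DiagnosticsEventSource : EventSource
{
    public static readonly DiagnosticsEventSource Log = new DiagnosticsEventSource();

    [Event(1, Level = EventLevel.Verbose)]
    public void RequestDetails(string details)
    {
        WriteEvent(1, details);
    }
}

public static class Example
{
    public static void HandleRequest(string[] headers)
    {
        // Guard expensive payload construction: the string.Join only runs
        // when a listener has actually subscribed at Verbose level.
        if (DiagnosticsEventSource.Log.IsEnabled(EventLevel.Verbose, EventKeywords.All))
            DiagnosticsEventSource.Log.RequestDetails(string.Join(";", headers));
    }
}

When no listener is attached, the call costs next to nothing, so you can afford to keep verbose events in the code for the night they are needed.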

Take a moment and think about your system. What would you need to know to identify what went wrong?

Running applications on the cloud without meaningful logs is like flying an airplane without windows or instruments. We know we’re going somewhere… well we think we’re going somewhere. But really where are we going? Is the engine on? Are we climbing or descending? How high are we?

Are these questions making you uneasy?

Let’s think about our applications: do we really know what’s going on? Sure, we have performance counters for CPU, network, memory, and disk utilization. But what kind of information does that really provide about our application? Knowing that a Virtual Machine (VM) is running is like being on an airplane without knowing where we’re going. Having meaningful logs provides us with the insights required to know where we’re going. Continue Reading…


Logging in Windows Azure can be done through Windows Azure Diagnostics. This solution collects a ton of detailed data that can be hard to parse through. I recently needed a close-to-real-time trace of what my Roles were doing. My current project has many instances, each running many independent services in parallel, which makes tracing through Windows Azure Diagnostics a challenge. Log4Net and Enterprise Library offer amazing tools to accomplish what I’m after, but they do so with so much detail and data that we often need to resort to parsing tools and third-party applications to extract meaningful information. I needed something quick, lightweight, and cheap to operate.

At first, I tried to follow what my instances were up to using the Windows Azure Compute Emulator. This wasn’t what I was looking for, because local environments don’t run exactly like the production or staging environments on the cloud. I spent a few minutes thinking about logging and the costs related to Windows Azure Storage transactions, and came up with the solution described below.
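The gist of the approach, as a simplified sketch of my own (the table name, entity shape, and partitioning scheme are illustrative, not the package's actual code): write small, pre-formatted log entities straight to table storage, keyed so the newest entries read back first:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// A small log record; partitioning by role instance keeps reads cheap.
public class LogEntry : TableEntity
{
    public LogEntry() { }

    public LogEntry(string instanceId, string message)
    {
        PartitionKey = instanceId;
        // Inverted ticks so the newest entries sort first when reading back.
        RowKey = (DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks).ToString("d19");
        Message = message;
    }

    public string Message { get; set; }
}

public static class TableLogger
{
    public static void Write(string connectionString, string instanceId, string message)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var table = account.CreateCloudTableClient().GetTableReference("ApplicationLogs");

        table.CreateIfNotExists();

        // One storage transaction per entry; batching entries per partition
        // is the kind of trade-off worth considering given transaction pricing.
        table.Execute(TableOperation.Insert(new LogEntry(instanceId, message)));
    }
}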

The code from this post is part of the Brisebois.WindowsAzure NuGet package.

To install Brisebois.WindowsAzure, run the following command in the Package Manager Console:

PM> Install-Package Brisebois.WindowsAzure

Get more details about the NuGet package.

A sample project containing the log viewer can be found in the GitHub repository “Windows Azure Logger”.

Continue Reading…