HTTP/1.1 503 Service Unavailable

On Azure we must design with the cost of operation and service constraints in mind. I recently had an interesting event where my REST service, deployed on a small (A1) cloud service instance, started responding with HTTP status code 503 Service Unavailable.

The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. If known, the length of the delay MAY be indicated in a Retry-After header. If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response. [Source: HTTP Status Code Definitions]
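For consumers of the service, that last sentence is the part worth honoring: wait out the Retry-After delay (or a sensible default) before retrying. Here is a minimal sketch with HttpClient, where the three-attempt limit and the 10-second fallback are arbitrary choices of mine:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static class ServiceUnavailableRetry
{
    // Issues a GET and, when the service answers 503, waits out the Retry-After
    // delay (or a default) before trying again, up to maxAttempts times.
    public static async Task<HttpResponseMessage> GetWithRetryAsync(string uri, int maxAttempts = 3)
    {
        using (var client = new HttpClient())
        {
            for (var attempt = 1; ; attempt++)
            {
                var response = await client.GetAsync(uri);

                if (response.StatusCode != HttpStatusCode.ServiceUnavailable || attempt == maxAttempts)
                    return response;

                // Honor Retry-After when the server provides a delta; otherwise back off 10 seconds.
                var delay = response.Headers.RetryAfter != null && response.Headers.RetryAfter.Delta.HasValue
                    ? response.Headers.RetryAfter.Delta.Value
                    : TimeSpan.FromSeconds(10);

                await Task.Delay(delay);
            }
        }
    }
}
```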

Faced with this interesting challenge, I started looking in the usual places: SQL Database metrics, Azure Storage metrics, application logs and performance counters.

Throughout my investigation, I noticed that the service was hit on average 1.5 million times a day and that the open socket count was quite high. Trying to make sense of the situation, I started identifying resource contentions.
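For context, numbers like the open socket count come from performance counters collected through Azure Diagnostics. Here is a minimal sketch of how such a counter can be scheduled for transfer to storage from a web role's OnStart, assuming the classic DiagnosticMonitor API and the default Diagnostics connection string setting:

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Sample established TCP connections to keep an eye on the open socket count.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\TCPv4\Connections Established",
            SampleRate = TimeSpan.FromSeconds(30)
        });

        // Ship the samples to storage every five minutes.
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}
```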

Looking at the application logs didn’t yield much information about why the network requests were piling up, but it did hint at internal process slowdowns following peak loads.

Working on extracting logs from Azure Table Storage, I finally got a break. I noticed that the service was generating 5 to 10 megabytes of application logs per minute. To put this into perspective, the service needs enough IO to respond to consumer requests, to push performance counter data to Azure Storage, to persist application logs to Azure Storage, and to satisfy the application's need to interact with external resources.

Based on my observations, I came to the conclusion that my resource contention was around IO. Now, I rarely recommend scaling up, but in this case it made sense because the service moves lots of data around. One solution would have been to turn off telemetry, but doing so would have made me as comfortable as flying a jumbo jet with a blindfold on. In other words, this isn't something I want to consider, because I believe telemetry is crucial to understanding how an application behaves on the cloud.

Just to be clear, scaling up worked in this situation, but it may not resolve all your issues. There are times when we need to tweak IIS configurations to make better use of resources and to handle large numbers of requests. Scaling up should remain the last option on your list.



A little while ago the Windows Azure team introduced the message pump pattern to the Service Bus SDK, and it got me thinking about how I could leverage it from any web-enabled client.

I wanted to create a long polling REST service that would allow me to extend the message pump pattern over a protected REST service. This is my proof of concept and should not be used in production without a little exploration.

A long poll means that a client makes a request to the server, and the server delays the response until an event occurs. Keep in mind that this is especially useful when you want to communicate events to clients through firewalls. Furthermore, it is a great way to reduce the number of requests made by clients. On the other hand, long poll requests eat up server resources, so you will need to load test your endpoints in order to identify their limits.
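To make the idea concrete, here is a minimal sketch of a long-polling Web API action that parks the request on a Service Bus queue receive. The queue name, connection string and 30-second wait are placeholders of mine, and this leaves out the authentication that makes the real service a protected one:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.ServiceBus.Messaging;

public class NotificationsController : ApiController
{
    // Placeholder connection string and queue name.
    static readonly QueueClient Client = QueueClient.CreateFromConnectionString(
        "Endpoint=sb://your-namespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...",
        "notifications");

    // GET api/notifications
    // Holds the request open for up to 30 seconds while waiting for a message.
    public async Task<HttpResponseMessage> Get()
    {
        var message = await Client.ReceiveAsync(TimeSpan.FromSeconds(30));

        if (message == null)
        {
            // Nothing happened before the timeout; the client simply polls again.
            return Request.CreateResponse(HttpStatusCode.NoContent);
        }

        var body = message.GetBody<string>();
        message.Complete();

        return Request.CreateResponse(HttpStatusCode.OK, body);
    }
}
```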

Continue Reading…


I found the problem of controlling access to specific resources, for a limited amount of time and a limited number of downloads, to be quite interesting. So I decided to design a solution that would take advantage of the services made available by the Windows Azure platform.

The Challenge

Create single-use URIs that expire after 15 minutes.

Why?

Companies that sell digital products often try to limit the number of downloads per purchase. Consequently, discouraging customers from sharing their download links becomes a priority. Creating public URIs constrained by a time to live and by a limited number of downloads helps accomplish this goal.

The Concept

Building on the Valet Key pattern explored in “Keep Your Privates Private!”, I used Shared Access Signatures to create a time to live for all public URIs. Then, to control access to the actual resources, I created a REST service using Web API, which tracks individual access to each URI using the Windows Azure Table Storage Service.
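Here is a minimal sketch of the time-to-live half of that design, using the Windows Azure Storage client library to issue a read-only Shared Access Signature that expires 15 minutes after creation. The container and blob names are placeholders, and the per-URI download tracking against Table Storage is not shown:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

static class SingleUseUriExample
{
    // Builds a public URI for a blob that stops working 15 minutes after creation.
    public static Uri CreateTimeLimitedUri(string connectionString, string containerName, string blobName)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference(containerName);
        var blob = container.GetBlockBlobReference(blobName);

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
        };

        // The SAS token is appended to the blob URI; whoever holds this URI can
        // read the blob until the signature expires.
        var sasToken = blob.GetSharedAccessSignature(policy);
        return new Uri(blob.Uri + sasToken);
    }
}
```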

Continue Reading…


A simple filter that brings caching options, similar to MVC’s "OutputCacheAttribute", to Web API ApiControllers. This code was inspired by Filip W’s aspnetwebapi-outputcache. I found his blog post and started playing around with his code. I quickly found myself asking for more flexibility and for a different behavior: I needed to cache my service responses until a specific moment in time. In my case, I need to cache until 17h45 every day.
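The interesting twist is turning "cache until 17h45" into a duration the cache headers can carry. Here is a minimal sketch of that calculation, with names of my own rather than the attribute's actual API:

```csharp
using System;

static class CacheWindow
{
    // Returns how long a response produced at 'now' may be cached
    // before the next daily 17h45 expiration point.
    public static TimeSpan UntilNextExpiration(DateTime now)
    {
        var expiresAt = now.Date.AddHours(17).AddMinutes(45);

        // Past today's 17h45? Then the response is valid until tomorrow's.
        if (expiresAt <= now)
            expiresAt = expiresAt.AddDays(1);

        return expiresAt - now;
    }
}
```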

Get the code @ https://github.com/brisebois/aspnetwebapi-outputcache

This code has been pulled into filipw/AspNetWebApi-OutputCache and is available through NuGet; run the following command in the Package Manager Console.

PM> Install-Package Strathweb.CacheOutput -Pre

Continue Reading…