Brownfield projects are living organisms and should be treated as such. There’s no use getting mad about what was done. What is, needed to be. What needed to be, was.
Software by nature isn’t flawless; we need to accept this and move on to the real challenge at hand: how can we maintain the delicate balance of production software while we solidify it?
Find the smallest modification from which it would benefit the most.
Just for fun, let’s imagine replacing an entire bridge without interrupting traffic. Sure, it’s a challenge, but we can get it done by taking a step back and devising a strategy. In reality, the bridge doesn’t need to be replaced all at once. We can replace it piece by piece with minimal interruption. Eventually, the new bridge will be in place.
Improving the quality of software isn’t that different from replacing a bridge. Let’s take a look at the following Web Site. It’s composed of a Windows Azure Web Site and a single Windows Azure SQL Database.
In this Web Site’s architecture, the Windows Azure SQL Database is the bottleneck, because it doesn’t scale like the Windows Azure Web Site. The first thing we want to do is optimize its performance, so we start by looking at the effectiveness of the database indexes and use this information to target our optimization efforts.
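To make this concrete, here is a minimal sketch of how we might prioritize index work. It ranks hypothetical missing-index statistics using the well-known SQL Server heuristic (query cost × expected improvement × usage); the field names and sample numbers are made up for the example, not pulled from a real database.

```python
# Illustrative only: rank hypothetical missing-index suggestions the way
# SQL Server's missing-index DMV statistics are commonly scored.

def index_impact(stat):
    # Classic heuristic: cost of the queries that would benefit,
    # times the expected improvement, times how often they ran.
    return (stat["avg_total_user_cost"]
            * (stat["avg_user_impact"] / 100.0)
            * (stat["user_seeks"] + stat["user_scans"]))

def prioritize(stats):
    # Highest estimated impact first; tackle these indexes before the rest.
    return sorted(stats, key=index_impact, reverse=True)

# Made-up sample data for the sketch.
candidates = [
    {"table": "Orders",    "avg_total_user_cost": 12.5,
     "avg_user_impact": 90.0, "user_seeks": 4000, "user_scans": 0},
    {"table": "Customers", "avg_total_user_cost": 1.2,
     "avg_user_impact": 40.0, "user_seeks": 150,  "user_scans": 10},
]
```

Ranking the candidates this way quickly surfaces which table deserves an index first, which is exactly the kind of targeting we want before touching anything else.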
In this Windows Azure Web Site solution, we get the biggest bang for our buck from tweaking the database indexes.
Now that the database is optimized, we can start looking at solidifying the solution. We start by putting the Windows Azure SQL Database on a diet. In other words, let’s extract the non-relational data from the database. This data probably doesn’t change very often and could be stored in Windows Azure Storage.
Deciding on the right Windows Azure Storage flavor comes down to considering the three Vs.
- Volume – how big is the data?
- Velocity – at what rate is the data coming into the system?
- Variety – is the data homogeneous?
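As a rough illustration of how the three Vs can drive the decision, here is a tiny heuristic. The thresholds and rules are my own assumptions for the sketch, not official guidance.

```python
# Rough, illustrative heuristic for picking a Windows Azure Storage flavor
# from the three Vs. Thresholds are assumptions, not official guidance.

def pick_storage(volume_gb, events_per_sec, homogeneous):
    if events_per_sec > 1000:
        return "Queue Storage"   # high velocity: buffer the stream first
    if homogeneous:
        return "Table Storage"   # structured, uniform entities
    return "Blob Storage"        # arbitrary, heterogeneous payloads
```

The point isn’t the exact thresholds; it’s that answering the three questions up front turns a vague “where should this data live?” into a mechanical decision.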
With the answers to these questions, we can start to devise a strategy. Moving data out of the database will probably introduce some complexity. But I’d like to point out that once the data is out of the database, we will probably have more flexibility for things like changing schemas or performing mass updates.
This Web Site feels more like a Cloud Solution because we’re taking advantage of services like Windows Azure Storage. Because the data in this service is replicated at least three times, it takes away lots of possible headaches.
To be fair, Windows Azure SQL Database does the exact same thing. But the Windows Azure Storage Services provide us with some extra scalability that allows us to handle much more load.
Each service provided by Windows Azure has its own set of performance targets. As of January 2014, Windows Azure SQL Database is capable of handling a maximum of 180 simultaneous connections before it starts to apply throttling.
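When the database does start throttling, the usual defense is to retry with exponential backoff. Here is a minimal sketch; `ThrottledError` and the `execute` callable are hypothetical stand-ins for whatever data-access layer the Web Site uses.

```python
import time

# Sketch of retry-with-exponential-backoff for a throttled database call.
# ThrottledError and `execute` are hypothetical stand-ins.

class ThrottledError(Exception):
    pass

def with_retries(execute, max_attempts=4, base_delay=0.1, sleep=time.sleep):
    for attempt in range(max_attempts):
        try:
            return execute()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            # Back off exponentially so the database can recover.
            sleep(base_delay * (2 ** attempt))
```

The `sleep` parameter is injectable so the policy can be tested without actually waiting; in production it simply defaults to `time.sleep`.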
We can optimize the Web Site’s architecture by relieving pressure from the database and other external services. This can be achieved by implementing a caching strategy. I strongly recommend looking at the Windows Azure Cache Service, because it can be accessed from anywhere.
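The strategy most solutions reach for here is cache-aside: try the cache first, and only fall through to the database on a miss. In this sketch a plain dict stands in for the Windows Azure Cache Service, and `load_from_db` is a hypothetical database call.

```python
# Cache-aside sketch: a plain dict stands in for the Windows Azure Cache
# Service, and load_from_db is a hypothetical (expensive) database call.

cache = {}

def load_from_db(key):
    # Placeholder for a real SQL Database query.
    return f"row-for-{key}"

def get(key):
    # 1. Try the cache first.
    if key in cache:
        return cache[key]
    # 2. On a miss, hit the database and populate the cache.
    value = load_from_db(key)
    cache[key] = value
    return value
```

Every repeated read is now served from the cache, which is exactly the pressure relief the database needs.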
To augment the Web Site’s availability, we can use the Windows Azure Traffic Manager. This service allows us to deploy the solution to multiple Windows Azure data centers and provides the ability to failover from one data center to another. This is extremely practical, because it allows the Web Site to remain online even if one of the data centers is unavailable.
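Conceptually, the failover behavior we get from the Traffic Manager looks like the following sketch: route to the highest-priority data center that is healthy. The endpoint names are made up for the example; the real service does this for us.

```python
# Illustrative failover selection, mimicking what Windows Azure Traffic
# Manager does for us. Endpoint names are made up.

def route(endpoints, healthy):
    # `endpoints` is ordered by priority; pick the first healthy one.
    for name in endpoints:
        if healthy.get(name):
            return name
    raise RuntimeError("no healthy data center available")
```

If the primary data center goes down, traffic silently shifts to the next one on the list, and the Web Site stays online.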
By going through this exercise we find the treasures hidden within our Windows Azure Solution. To name a few, we have augmented the availability, scalability, durability, flexibility and the overall performance of the Web Site. Yeah, I know most of these are buzzwords. So let me reflect on them for a brief moment.
- Availability – By using the Windows Azure Traffic Manager we can now shift traffic away from deployments that are experiencing issues. We can also exploit the auto-scaling functionality built into Windows Azure Web Sites without being afraid of crippling our SQL Database.
- Scalability – By distributing load over multiple Windows Azure services we’ve dramatically raised the amount of requests that can be processed by our Web Site. Furthermore, much of our data is now automatically scaled out by Windows Azure based on demand.
- Durability – By exploiting various Windows Azure services, our data is replicated. We can also exploit the fact that Windows Azure Blob Storage can be versioned. This is an amazing feature that we rarely think of using. Think about it, we can roll back to previous versions of the data and it’s built-in!
- Flexibility – By breaking our sole dependency on SQL Database we are free to duplicate data and change its representation. We can use patterns like the Materialized View to pre-process and pre-package data for our end users. Doing so will greatly reduce the overall pressure generated by our end users.
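The Materialized View idea from the list above can be sketched in a few lines: pre-compute a summary once, so reads don’t repeatedly aggregate the raw rows. Field names here are illustrative.

```python
# Materialized View sketch: pre-compute a per-customer order total from raw
# rows so reads never re-aggregate. Field names are illustrative.

def materialize_order_totals(orders):
    view = {}
    for order in orders:
        customer = order["customer"]
        view[customer] = view.get(customer, 0) + order["amount"]
    return view
```

The resulting view can be stored in Windows Azure Storage and regenerated whenever the source data changes, trading a little write-time work for much cheaper reads.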
Think of all the treasures hidden away in your Windows Azure solutions. Can you spot them? Take a look at these Cloud Design Patterns, they’re probably going to help you find more.