What is it?
Windows Azure Cache is a distributed, in-memory caching solution that helps you build scalable, responsive applications by providing very fast access to data. A cache created using the Cache Service is accessible from applications within Windows Azure running on Windows Azure Web Sites, Web & Worker Roles, and Virtual Machines.
Why is it Important?
On September 3rd, 2013, Microsoft announced a new Windows Azure Cache Service. This is quite interesting for projects that require a Highly Available (HA) cache. It allows you to create a cache cluster through the management portal, which means that all infrastructure maintenance and resource management is Microsoft’s responsibility.
Caching on Windows Azure can also be done using In-Role Cache, which uses RAM from Role instances to create a Memcached-like experience. Each Role instance contributes a certain amount of RAM to the cache cluster. The downside to this method is that it eats away at resources that could otherwise be used by the OS or by your Cloud Service application.
Another way to create cache clusters on Windows Azure is to deploy In-Role Cache on instances that are completely dedicated to caching. This setup is usually deployed on Medium-sized Virtual Machine instances and requires a minimum of 3 instances at all times.
Both of these options are viable and provide good solutions. But keep in mind that in both cases, you’re responsible for scaling resources in order to satisfy peak loads. Furthermore, any change to the cache cluster configuration requires a redeployment of the Cloud Service.
The new Windows Azure Cache Service, released as a preview, opens the door to interesting scenarios where it’s possible to adjust and administer the cache cluster through the Windows Azure Management Portal. Changing the size of your cache won’t require you to redeploy your Cloud Service. Furthermore, this service makes your cache available to other hosted applications like Windows Azure Web Sites and Virtual Machines. Essentially, this means that a Cloud Service and a Web Site can share the same cache. Imagine the possibilities!
Some might say that the pricing is a bit steep, and granted, it can be. But if you look at what it costs to build a Highly Available cache cluster using the available alternatives, in some circumstances you may come to the conclusion that the new Windows Azure Cache Service is actually cheaper.
To top it off, the Windows Azure team has made sure that you don’t need to make any code modifications in order to switch from In-Role Cache to the Windows Azure Cache Service. They’ve also made it easy to request access to the preview, so take it out for a spin and let me know what you think of this new service.
The new service differs from the old Windows Azure Shared Caching Service in several ways:
- No transaction limits. Customers now pay based on cache size only, not on transactions.
- Dedicated cache. The new service offers a dedicated cache for customers who need it.
- Better management. Cache provisioning and management is done through the new Windows Azure Management Portal rather than from the old Microsoft Silverlight–based portal.
- Better pricing. The new service offers better pricing at nearly every price point. Customers using large cache sizes and high transaction volumes gain a major cost savings.
Benefits of the Windows Azure Cache Service
- Ability to use the Cache Service from any app type (VM, Web Site, Mobile Service, Cloud Service)
- Each Cache Service instance is deployed within dedicated VMs that are separated/isolated from other customers – which means you get fast, predictable performance.
- There are no quotas or throttling behaviors with the Cache Service – you can access your dedicated Cache Service instances as much or as hard as you want.
- Each Cache Service instance you create can store (as of today’s preview) up to 150GB of in-memory data objects or content. You can dynamically increase or shrink the memory used by a Cache Service instance without having to redeploy your apps.
- Web Sites, VMs and Cloud Services can retrieve objects from the Cache Service on average in about 1ms end-to-end (including the network round-trip to the cache service and back). Items can be inserted into the cache in about 1.2ms end-to-end (meaning the Web Site/VM/Cloud Service persists the object in the remote Cache Service and gets the ACK back in about 1.2ms).
- Each Cache Service instance is run as a highly available service that is distributed across multiple servers. This means that your Cache Service will remain up and available even if a server on which it is running crashes or if one of the VM instances needs to be upgraded for patching.
- The VMs that the cache service instances run within are managed as a service by Windows Azure – which means we handle patching and service lifetime of the instances. This allows you to focus on building great apps without having to worry about managing infrastructure details.
- The new Cache Service supports the same .NET Cache API that we use today with the in-role cache option that we support with Cloud Services. So code you’ve already written against that API is compatible with the new managed Cache Service.
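To give you an idea of what that API looks like, here’s a minimal sketch using the `DataCacheFactory`/`DataCache` classes from `Microsoft.ApplicationServer.Caching`. It assumes you’ve referenced the caching client assemblies (e.g. via the Windows Azure Caching NuGet package) and that the cache endpoint and access key are configured in your app.config/web.config; the key and value used below are just placeholders.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class CacheSketch
{
    static void Main()
    {
        // Reads the cache client settings (endpoint, access key) from the
        // <dataCacheClients> configuration section.
        DataCacheFactory factory = new DataCacheFactory();

        // Every Cache Service instance exposes a "default" cache.
        DataCache cache = factory.GetDefaultCache();

        // Put overwrites any existing value; Add would throw if the
        // key already existed.
        cache.Put("greeting", "Hello from the Cache Service");

        // Get returns object; cast back to the stored type.
        string value = (string)cache.Get("greeting");
        Console.WriteLine(value);
    }
}
```

Because this is the same API surface as In-Role Cache, the switch between the two is mostly a configuration change rather than a code change.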
- The new Cache Service comes with built-in provider support for ASP.NET Session State and ASP.NET Output Page Caching. This enables you to easily scale-out your ASP.NET applications across multiple web servers and still share session state and/or cached page output regardless of which customer hit which server.
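As a sketch of what wiring up the session state provider looks like, here’s a web.config fragment. The provider type and attribute names below follow the distributed cache session state provider that ships with the caching NuGet package; treat the exact names as assumptions to verify against your SDK version, and note that the cache endpoint and access key are assumed to live in a separate `<dataCacheClients>` section.

```xml
<!-- web.config: route ASP.NET session state to the Cache Service. -->
<system.web>
  <sessionState mode="Custom" customProvider="AFCacheSessionStateProvider">
    <providers>
      <add name="AFCacheSessionStateProvider"
           type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
           cacheName="default" />
    </providers>
  </sessionState>
</system.web>
```

With this in place, every web server in the farm reads and writes session state to the shared cache, so it no longer matters which server handles a given request.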
- The new Cache Service supports the ability to either use a separate Cache Service instance for each of your apps, or instead share a single Cache Service instance across multiple apps at once (which enables easy data sharing as well as app partitioning). This can be very useful for scenarios where you want to partition your app up across several deployment units.