Microsoft Azure offers many options for provisioning environments and resources. In my last post about troubleshooting Virtual Machine allocations, I enumerated a few of these options and briefly mentioned that I favored destroying environments over shutting them down. In this post, I will give more context around that preference.
Build it, Use it, Destroy it!
This concept goes against much of what we work so hard for. When we invest time, money and manpower in our projects, we want those investments to last, so we work hard to preserve and protect them. That instinct is natural, which makes it a real challenge to argue that we should destroy things once we're done with them.
The Journey
Adopting this new practice is certainly a journey. It doesn't apply to everything we do, but it should definitely be considered for temporary or recurrent workloads.
The thinking behind this approach is that we can use automation to build and configure environments in a repeatable way. Azure finds the best place to create the new resources and provisions them. Once they're ready, PowerShell DSC or SSH is used to move data (state) to the Virtual Machine and to compose the desired configuration by installing applications, containers, services and daemons. At this point, we can execute our compute workloads.
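As a minimal sketch of the build step, assuming the AzureRM PowerShell module and a Resource Manager template called azuredeploy.json (the resource group name and file paths below are placeholders, not from an actual project):

```powershell
# Build: create a resource group and deploy the environment from a template.
$resourceGroup = "demo-workload-rg"   # placeholder name
$location      = "East US"

New-AzureRmResourceGroup -Name $resourceGroup -Location $location

# The template and parameter files describe the environment as code.
New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroup `
                                   -TemplateFile ".\azuredeploy.json" `
                                   -TemplateParameterFile ".\azuredeploy.parameters.json"
```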
Before we destroy the environment, we use automation to persist artefacts, configuration and desired state to Azure Storage. This extra step opens up many opportunities. For example, we can use the artefacts that were produced by our workload as a starting point for other workloads. We can also create many identical environments to experiment, or to build and run hyperscale workloads.
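A sketch of that persist-and-destroy step, assuming the artefacts sit in a local output folder; the storage account, container and key are placeholders:

```powershell
# Persist artefacts to Azure Storage before tearing the environment down.
$storageKey     = "<storage account key>"   # placeholder
$storageContext = New-AzureStorageContext -StorageAccountName "mystorageaccount" `
                                           -StorageAccountKey $storageKey

Set-AzureStorageBlobContent -File ".\output\results.zip" `
                            -Container "artefacts" `
                            -Blob "results.zip" `
                            -Context $storageContext

# Destroy: removing the resource group deletes every resource it contains.
Remove-AzureRmResourceGroup -Name "demo-workload-rg" -Force
```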
Going down this path for the first time, we may run into questions like: why do we put so much effort into automation? Why does this investment make sense? These questions are important and help us find out whether we've chosen the right strategy to meet our goals.
These questions also arise because our goal is generally to go as fast as possible. The false impression that doing things manually is faster than using automation is often a result of this need for speed. What we fail to notice is that we sometimes waste many cycles debugging and trying to align the stars in precisely the right order for our workloads to behave.
This is where automation helps us go fast. Our environments are complex and require us to go through many error-prone steps before we get to try something new. Scripts and Configuration as Code accelerate the dev and test cycle and let us try more combinations. Best of all, they give us a clear and unambiguous record of everything that is required to provision our environments.
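To make Configuration as Code concrete, here is a minimal PowerShell DSC sketch; the configuration name and the IIS feature are illustrative choices, not taken from the post:

```powershell
# A minimal DSC configuration kept in source control alongside the scripts.
Configuration WebServerConfig
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost"
    {
        # Ensure the Web Server role is present on the node.
        WindowsFeature IIS
        {
            Name   = "Web-Server"
            Ensure = "Present"
        }
    }
}

# Compiling the configuration produces the MOF document that DSC enforces.
WebServerConfig -OutputPath ".\WebServerConfig"
```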
Storing Scripts and Configuration as Code in Source Code Control Systems creates a history that can be used to understand how we got where we are. This history is especially useful for workloads that are handed off to multiple individuals over their lifetime. Without it, workloads become fragile, because we no longer have a good comprehension of how they are put together, and that leads us to fear change. This is perfectly natural, especially when the consequences of a change aren't apparent or easy to understand.
Build it, use it, destroy it really makes sense in scenarios where we need to provision environments more than once, and this is true in more scenarios than we may think. For example, deploying an update across Dev, QA and Prod environments definitely counts as recurrent.
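In that kind of scenario, the same template can simply be replayed for each environment; the environment names, resource group naming convention and parameter files below are placeholders for illustration:

```powershell
# Re-run the same deployment for each environment; only the parameters change.
foreach ($environment in "Dev", "QA", "Prod")
{
    $resourceGroup = "myapp-$($environment.ToLower())-rg"

    New-AzureRmResourceGroup -Name $resourceGroup -Location "East US" -Force

    New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroup `
                                       -TemplateFile ".\azuredeploy.json" `
                                       -TemplateParameterFile ".\azuredeploy.$($environment).parameters.json"
}
```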
This brings us to the end of our journey. Today, I prefer building, using, and destroying environments over shutting down and de-allocating Virtual Machines on Azure. One of the benefits of this approach is that I don't have to worry as much about resource allocation challenges, which can arise when starting large Virtual Machines that have been de-allocated for a certain amount of time. Above all, it gives me a lot of agility and understanding.
Now things are just the way I like them to be: predictable. Use the comments below to share your thoughts and experiences with this approach.
Automation Options
- Automate with PowerShell to provision a new Windows Virtual Machine and use Remote PowerShell to bring it to the desired state (see the sketch after this list).
- Use Azure CLI to provision a new Linux Virtual Machine and use SSH to bring it to the desired state.
- Leverage Azure Resource Manager to provision a new Windows Virtual Machine and use Desired State Configuration (DSC) for the final steps.
- Use Azure Resource Manager to provision a new Linux Virtual Machine and use SSH to execute custom commands that provision the workload.
- Use Chef, Puppet or SaltStack to provision and configure new Virtual Machines.
- Reduce complexity related to configuration by using Docker or Azure Container Service (More about Containers).
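To illustrate the first option, here is a rough sketch of bringing a Windows VM to the desired state over Remote PowerShell, assuming the VM already exposes a WinRM HTTPS endpoint; the host name, credential and the feature being installed are placeholders:

```powershell
# Connect to the freshly provisioned VM over WinRM (HTTPS).
$credential = Get-Credential
$session    = New-PSSession -ComputerName "demo-vm.eastus.cloudapp.azure.com" `
                            -Credential $credential -UseSSL

# Bring the machine to the desired state by installing what the workload needs.
Invoke-Command -Session $session -ScriptBlock {
    Install-WindowsFeature -Name Web-Server -IncludeManagementTools
}

Remove-PSSession $session
```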