
Getting to Know Containers

Containers have sparked genuine interest over the last few years. As a developer, I’ve had my fair share of “It Works on My Machine” days, where I spent a good amount of my time trying to identify why my code wouldn’t run in a given environment. Did I make a mistake? Did someone else make a mistake? Uncertainty, risk and the Human Factor definitely make for adrenaline-packed all-nighters.

Over the years, we’ve tried to tackle this by using various approaches. We’ve gone from the traditional ZIP file and paper installation manuals to fully automated Continuous Integration pipelines. Ultimately, we strived to remove ourselves from the equation.

Moving code from one environment to another has always been a tedious task. Packaging apps usually doesn’t produce much excitement… To be honest, I’ve never gotten worked up about the process. On numerous occasions, I found myself saying that I’m definitely more excited about the candy bar than its wrapper. More often than not, the whole process was time consuming; it threw me out of my happy place (a.k.a. the zone). Then it was time to deploy. The process was error prone and likely to rob me of my weekends.

So, why Containers? Why now? And more specifically, why should I get excited?

Containers are special because they guarantee that our application will always run the same way, regardless of its environment. They do this by wrapping the application in a complete filesystem that contains everything needed for it to run. This includes our code, runtime, system tools, system libraries and anything else that can be installed on a server.

Ok… So Containers are packages?

Yes, and much more! Think back and remember the days where parts of your solutions conflicted with each other. Some of us had a special name for it; I called it DLL Hell. It required me to spread my solution across multiple Servers/Virtual Machines and ended up bloating our Cost of Operations. When Azure was first released, Cloud Services took a one-service-instance-to-one-Virtual-Machine-instance strategy. This definitely has its benefits: it is easy to grok, and it makes distributed computing accessible to most developers.

Then comes the reality check!

Compute isn’t cheap and there’s nothing worse than underutilized resources in a pay-per-use model. Think about it for a moment… It’s really hard to justify the costs associated with a set of Cloud Services whose CPU, RAM and Bandwidth are barely utilized during peak loads. How can we make use of what we pay for?

Balancing business constraints and resource utilization is key

The answer is Compute Consolidation. The concept is interesting on paper, but its implementation in Cloud Services requires us to be creatively meticulous, because we need to think about packaging multiple services together. This adds complexity that leaks straight through to operations. Deployments become difficult, because an update to a single service means that we need to package and redeploy everything.

Containers are here to make our lives easier in terms of DevOps. They allow us to run applications and services that would normally conflict with each other on the same machine (be it physical or virtual). This is accomplished by adding a virtualization layer above the Operating System, which makes it possible to share the Operating System’s kernel between multiple Containers.

Containers running on a single machine share the same Operating System kernel and they start instantly. Images are constructed from layered filesystems and share common files, making disk usage and image downloads much more efficient.
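To make the layering concrete, here is a hypothetical Dockerfile sketch; each instruction produces a layer, and unchanged layers are cached and shared between images (the base image and file names are assumptions for illustration, not from this walkthrough):

```dockerfile
# Base layer: shared on disk by every image built FROM it (hypothetical example)
FROM microsoft/dotnet:1.0-sdk
WORKDIR /app

# This layer is only rebuilt when the dependency manifest changes,
# so repeated builds reuse the cached restore layer
COPY project.json /app/
RUN dotnet restore

# Application code changes often; only this final layer is rebuilt
COPY . /app/
```

Two images built from the same base share its layers, so pulling the second image only downloads the layers that differ.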

Benefits (Why Containers?)

  • Fast, simple and consistent delivery of applications/services
  • Run everywhere: on-premises (Virtual Machines or bare metal), in the cloud or in datacenters
  • Share hardware between more workloads
  • Very low performance overhead
  • Better use of resources
  • Consistent environments for developers, eliminating configuration drift

Scenarios (When do They Make Sense?)

Containers take our complicated deployment documents, littered with steps, and boil them down to something manageable; for many, this is known as Configuration as Code and DevOps. By packaging an application with all of its dependencies into a standardized unit of software, Containers are useful in many scenarios.

  • Microservices – are software solutions composed of many small and independent services. Containers and orchestration tools allow us to support these complex deployments.
  • n-Tier applications/services – are solutions where logic is segmented into multiple tiers. Usually they consist of a presentation tier, a business logic tier and a data tier.
  • Continuous Integration and Continuous Deployment (DevOps) – consists of automating deployments and release management across many environments. The goal here is to optimize our build, test and deployment processes such that we can deploy multiple times a day.
  • Scale-Out friendly workloads – are mostly composed of stateless workloads that once duplicated, work together to accomplish a common goal.
  • Sandboxing applications/services – is used to isolate a process and its dependencies. This allows us to make changes to our execution environment, without impacting other applications or services that run on the same machine.
  • Local development – applications and services can depend heavily on the environments that they run in. Some software requires specific Environment Variables, Registry Keys and machine-wide configurations, which often results in configuration drift between environments. Moving these solutions between environments is challenging and makes us wish that we could ship the developer’s machine to production. This is a scenario where Containers make sense, because that’s exactly what they allow us to do.
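As a sketch of the Microservices and n-Tier scenarios above, a hypothetical docker-compose.yml might describe each tier as its own Container; the service names, images and ports here are assumptions for illustration only:

```yaml
version: '2'
services:
  web:                            # presentation tier, the only tier exposed publicly
    image: mycompany/web:1.0.0
    ports:
      - "80:80"
  api:                            # business logic tier, reachable as "api" on the compose network
    image: mycompany/api:1.0.0
  db:                             # data tier, reachable as "db"
    image: postgres:9.5
```

Each service can then be scaled, updated and redeployed independently, which is precisely what the Cloud Services packaging model made difficult.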

Leveraging Containers for .NET Core Development

Getting started with Containers on Windows 10 requires us to go through a few steps. Note (November 2016): use the Beta channel of Docker for Windows for Windows Container support.

Docker on Windows uses a very small Linux Virtual Machine that runs on Hyper-V to host Linux Containers.

We can use the Docker images found at hub.docker.com/r/microsoft/dotnet to set up our dev or testing environments. In the context of this blog post, I decided to leverage Containers for test and deployment, so I installed .NET Core on my developer machine and used VS Code as my editor. Please note that the Docker Tools for Visual Studio can also be used to build and package .NET Core applications.

Once we have our environment set up, we can use VS Code‘s integrated terminal and the .NET Core command-line interface (CLI) tools to scaffold an ASP.NET Core project. These tools can be used for building both .NET Core apps and libraries.

dotnet new -t web
dotnet restore

At this point, we have a fully functional ASP.NET Core MVC application and it’s time to test it locally.

dotnet run

This command launches Kestrel, a web server for ASP.NET Core based on libuv. The web app is accessible at http://localhost:5000 by default.
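If port 5000 is already taken, the listening URL can be overridden through an environment variable before starting the app. This is a small sketch; the port value is an arbitrary example:

```shell
# Kestrel reads its listening URL from ASPNETCORE_URLS when the variable is set
export ASPNETCORE_URLS="http://localhost:5001"
# dotnet run would now serve the app on port 5001 instead of 5000
```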

Getting this far this quickly is absolutely exciting, because we got here with very little effort. But there’s a problem: we can’t ship developer laptops to the cloud. So let’s package our app in a Docker Container.

Let’s start by publishing the app.

dotnet publish

Hit F1 and type Install Extensions. Search for and install the Docker Support extension. Hit F1 again and type Docker: Add docker files to workspace, then select .NET Core and provide a port for the application. The tool will create a few Docker-specific files. The first of these files is a dockerfile.

FROM microsoft/aspnetcore:1.0.1
ARG source=.
WORKDIR /app
EXPOSE 80
COPY $source . 
ENTRYPOINT ["dotnet", "CoreDemo.dll"]

The second file we get is a docker-compose.yml, which is used to define a multi-container Docker application.

version: '2'
services:
  coredemo:
    image: coredemo:latest
    build:
      context: .
      dockerfile: dockerfile
    ports:
      - 80:80

Applications are typically composed of many services. This results in an architecture that is commonly referred to as Microservices.

The third file that was generated is a docker-compose-debug.yml used for debugging.

version: '2'

services:
  coredemo:
    build:
      args:
        source: obj/Docker/empty/
    labels:
      - "com.microsoft.visualstudio.targetoperatingsystem=linux"
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - DOTNET_USE_POLLING_FILE_WATCHER=1
    volumes:
      - .:/app
      - ~/.nuget/packages:/root/.nuget/packages:ro
      - ~/clrdbg:/clrdbg:ro
    entrypoint: tail -f /dev/null

Be sure to update the dockerfile and point the source argument at the published, optimized application (.\bin\Debug\netcoreapp1.0\publish\) so that it is copied to the container's working directory.

FROM microsoft/aspnetcore:1.0.1
ARG source=bin/Debug/netcoreapp1.0/publish/
WORKDIR /app
EXPOSE 80
COPY $source . 
ENTRYPOINT ["dotnet", "CoreDemo.dll"]

Use the following command to build the Container image.

docker-compose -f docker-compose.yml build

Let's list the images that are local to our machine, to make sure that everything is in order.

docker images

REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
coredemo               latest              817a844abd8a        15 minutes ago      284.7 MB
microsoft/aspnetcore   1.0.1               f8cd5e1a2585        2 days ago          266.8 MB

The next step for us is to launch the Linux Container on our Windows 10 machine.

docker run -it -d -p 80:80 coredemo:latest

At this point we have a running Container, that is routing traffic from localhost port 80, to the Container's port 80 and finally to our ASP.NET Core application.

The following command lists our running Containers.

docker ps

CONTAINER ID        IMAGE               COMMAND                 CREATED             STATUS              PORTS              NAMES
c0f97bc3adb3        coredemo:latest     "dotnet CoreDemo.dll"   6 minutes ago       Up 6 minutes        0.0.0.0:80->80/tcp desperate_kalam

Finally, it's time to destroy the Container and release its resources.

docker kill c0f97bc3adb3

Container Patterns and Practices

  • Container Images are Immutable – to update software, tear down the existing Container and replace it with a Container based on an updated Image. 
  • Container Images are Pre-Optimized – it is normal to regularly start and kill Containers. When a Container starts, it has to be ready to go. This means that we need to optimize and pre-compile our applications in order to minimize the Container’s startup time. 
  • Keep ’em lean – Containers should contain only the bare minimum needed to run the application/service. Use layers and create your own base image if you need to. Use a .dockerignore file to keep source code, logs and other undesirables out of the image. 
  • Create images from a Dockerfile – images can also be created using the docker commit command. This may be tempting, but the resulting image is non-reproducible: you cannot destroy and recreate it at will, and reproducibility is a desirable characteristic when working in a DevOps environment. A dockerfile allows us to make changes to the configuration-as-code container definition, build a brand new image and replace the existing container with an updated version. 
  • Containers are not Virtual Machines – in the Docker world, don’t treat a Container like a Virtual Machine. Although we can open a terminal within a Container, it should only be done for debugging purposes. If we need Virtual Machine-like behavior, we should look at LXC/LXD or KVM containers. 
  • Containers are Ephemeral – data stored within a container is meant to be lost at any time. Design Containers so they can be destroyed and recreated at any time and without side effects. 
  • Containers should run at most one process – the philosophy is to run at most one process per container. If the application/service requires multiple processes then each process should be placed in its own container. Run the process in the foreground; if the process crashes, the Docker Container crashes, and it should be recreated by Docker, and not by a process manager inside the Container. A dockerfile has one CMD and ENTRYPOINT. Often, the CMD will use a script to perform configurations on the image. Temptation may push us to try to start multiple processes from within that script. Don’t! Doing so makes it much harder to manage and update individual processes. 
  • Use Data Volumes to persist state – data volumes are designed to persist data, independent of the container’s life cycle. Docker therefore never automatically deletes volumes when you remove a container, nor will it “garbage collect” volumes that are no longer referenced by a container. Furthermore, changes made to a data volume will not be included when you update an image. 
  • Leverage external Storage Services/Solutions – on Azure, there are many storage offerings. From the Azure Marketplace, we can leverage solutions like SoftNAS and Red Hat Gluster Storage. Native to Azure, we have Azure Files: fully managed file shares that use the standard SMB 3.0 protocol. 
  • Logs don’t belong in Containers – since Containers are ephemeral, logs should be stored or streamed to external storage services. On Azure, we can leverage Azure Storage, Azure Application Insights and Azure Log Analytics. 
  • Refer to Containers via hostnames or service links – Docker creates an overlay network and lets us use service and container names as hostnames for service discovery. Service links create environment variables which allow containers to communicate with each other within a stack, or with other services outside of a stack. These mechanisms are important, because Container environments are dynamic in nature and will reorganize themselves if managed by an orchestrator. 
  • Only use docker exec when attaching a shell – the docker exec -it {cid} bash command starts a new command in a running container. This is useful for attaching a shell for debugging purposes. 
  • Maintain Containers in a registry and use tags – if no tags are used, Docker uses latest as the default tag. This is problematic, because we lose track of versioning. If we take a dependency on microsoft/aspnetcore:latest instead of microsoft/aspnetcore:1.0.1, we lose control of our environment configuration. Every time we rebuild our image, it will pull the latest microsoft/aspnetcore, which may contain breaking changes. Consequently, our CI/CD pipeline will fail and it will be difficult to understand what changed. The same recommendation goes for our own Container images: we should be tagging them effectively. 
  • Use the same images from dev to production environments – using different images or even tags in dev, test, staging and production environments will cause an impedance mismatch. Dev, system integration and QA testing should be done on the image that will be pushed to production. 
  • Secrets don’t belong in the Dockerfile – use CMD or ENTRYPOINT to specify a script that pulls secrets like credentials from a third party and then configures your application. On Azure, secrets can be protected by Azure Key Vault. 
  • Run Containers with least privileges – don’t run containers with elevated privileges. This protects us from the possible side effects of having a compromised container on a host. Use the USER instruction to switch between users, and try not to switch back and forth too much, as this can contribute to an increased number of layers. 
  • Containers should not expect other Containers to exist – distributed applications/services often rely on Containers being started in a certain order. For example, a database container must be running before the middle tier and the web front end. Container environments are dynamic, and Containers are killed and started regularly. This is not a question of if, but of when, and our applications/services should be resilient to these events. Do not attempt to use wait-for scripts in the dockerfile to start Containers in a specific order; the resulting application/service will be fragile to its physical environment and hard to predict. 
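As a sketch of the “Keep ’em lean” practice above, a minimal .dockerignore might keep version-control metadata, sources and build intermediates out of the image’s build context; the entries below are assumptions for a typical .NET Core project, not from this walkthrough:

```
# Version control and editor metadata never belong in an image
.git/
.vscode/

# Only the publish output is copied in this walkthrough, so raw
# sources and intermediates can stay out of the build context
obj/
*.cs
*.log
```

Note that in this walkthrough the source argument copies from bin/Debug/netcoreapp1.0/publish/, so bin/ itself must not be excluded.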

Container Orchestration?

One process per Container rapidly leads to a considerable number of Containers per solution. This is where orchestration comes into play. Orchestrators aren’t necessary for simple, single Virtual Machine Container environments. They definitely become a necessity when we start thinking about Virtual Machines as a pool of resources. With this mindset, Virtual Machines become scale units that allow us to implement scale-out (read: elastic) cloud infrastructures that permit us to scale to demand.

Docker Swarm is a Container Orchestrator, and it’s not the only one. A few that stand out are Kubernetes, Mesosphere Marathon, Red Hat OpenShift, Cloud Foundry, CoreOS Fleet and Service Fabric. These Orchestrators all play nicely on-premises, but we’re often interested in Containers because we want to move away from owning and maintaining large physical infrastructures. This is where Azure Container Service (ACS) and Azure Service Fabric come into play.

Azure Container Service makes it simple for you to create, configure, and manage a cluster of virtual machines that are preconfigured to run containerized applications. Container Service uses an optimized configuration of popular open-source scheduling and orchestration tools (Kubernetes, Mesosphere Marathon and Docker Swarm). This lets you use your existing skills or draw upon a large and growing body of community expertise to deploy and manage container-based applications on Microsoft Azure.

Kubernetes, Mesosphere Marathon and Docker Swarm approach Container orchestration from different angles and are well worth exploring. Picking a flavor will usually boil down to the type of solutions that we wish to deploy. Some tools are more IT oriented, while others are more developer focused.

Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable Microservices. Service Fabric also addresses the significant challenges in developing and managing cloud applications. Developers and administrators can avoid complex infrastructure problems and focus on implementing mission-critical, demanding workloads that are scalable, reliable, and manageable.

Service Fabric takes a different approach, because it does not require us to set up a separate service (Zookeeper/Consul/Etcd) for managing cluster availability. Service Fabric itself is a set of Microservices. Being born in the cloud, High Availability, Resiliency, Durability and most of the other -ilities are built-in and not bolted-on.

This being said, choosing an Orchestrator is a serious decision and needs to be made for the right reasons. Take time to review the options and choose the one that best fits the needs of your solution, your organization and your team’s capabilities.

Still Trying to Decide if Containers are for you?

Use These Resources to Get Started Quickly

Share your thoughts in the comments below
