Disclaimer: This article is not a sales pitch for Amazon EC2 Container Service. Underground Elephant happens to run a lot of our tech on AWS. I’m sure other container schedulers work just fine.
I’m writing this article today to convince non-believers that container services are here to stay and that companies can realize immediate benefits by adding them to their stack.
ECS deploys all of those containers that have become increasingly popular over the last few years.
Clusters are groups of EC2 instances onto which containers, grouped into Services, can be deployed. A cluster must exist before any of these services can be created.
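Creating a cluster is a one-liner with the AWS CLI. A minimal sketch, assuming the CLI is configured with credentials; the cluster name below is just an example:

```shell
# Create an empty ECS cluster (the name "demo-cluster" is a placeholder).
aws ecs create-cluster --cluster-name demo-cluster

# Confirm the cluster exists.
aws ecs list-clusters
```

EC2 instances join a cluster by running the ECS agent, which is preinstalled on the ECS-optimized AMIs.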
Recently, Amazon released its own container registry service, EC2 Container Registry (ECR). You store your Docker container images here, tagged with the version number of whatever awesome containerized application you’ve built.
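A sketch of pushing an image to ECR with the current AWS CLI; the account ID, region, repository name, and tag below are all placeholders:

```shell
# Authenticate the local Docker client to ECR.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the full ECR repository URI, then push it.
docker tag my-app:1.0.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0
```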
Once you’ve stored your first image, it’s time to create a task definition. Task definitions specify virtual hardware requirements, including port mappings, for your Docker container images; the container image itself is also specified here. To update an application running on ECS, you create a new revision of the service’s task definition with the container image field pointing at the image that hosts the new version of the application.
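A minimal task definition might be registered like this; the family name, image URI, and resource values are illustrative, not prescriptive:

```shell
# Write a minimal task definition to a file, then register it with ECS.
# Registering the same family again creates a new revision.
cat > task-def.json <<'EOF'
{
  "family": "my-app",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0",
      "cpu": 128,
      "memory": 256,
      "essential": true,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0 }
      ]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://task-def.json
```

A host port of 0 asks ECS for dynamic port mapping, which lets multiple copies of the same container share one cluster instance.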
Now that the runtime requirements for a single image have been defined, we create a Service. Services specify the desired number of instances of the task definition that you want running at any given time, and optionally attach an Elastic Load Balancer to route traffic to those container instances. You can even configure the ELB to perform the health checks for the service.
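A sketch of creating the service from the CLI; the cluster, service, load balancer, and IAM role names are all hypothetical:

```shell
# Create a service that keeps two copies of the task definition running
# and registers them with an existing ELB.
aws ecs create-service \
  --cluster demo-cluster \
  --service-name my-app-service \
  --task-definition my-app \
  --desired-count 2 \
  --load-balancers loadBalancerName=my-app-elb,containerName=my-app,containerPort=8080 \
  --role ecsServiceRole
```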
If everything goes well, your container images should now be deployed in your cluster. Congratulations!
Debugging applications running on a container manager is a little different from debugging applications with their own dedicated EC2 instances.
First, SSH into a cluster instance hosting the container you want to debug. Run “docker ps” to view running containers and find your container ID. Use that ID to log into the container’s shell with “docker exec -it <container-id> /bin/sh” or something similar. The “docker logs” command might also be helpful here. Containers can also mount their log volumes onto the host cluster instance.
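The debugging steps above, roughly; the container ID shown is a placeholder:

```shell
# On the cluster instance, list running containers and note the ID.
docker ps

# Open a shell inside the container (replace abc123 with a real ID).
docker exec -it abc123 /bin/sh

# Or inspect the container's stdout/stderr without logging in.
docker logs --tail 100 abc123
```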
If logging into a container doesn’t suit your fancy, try using a log aggregator like CloudWatch Logs. This eliminates most of the need to log into production containers.
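For example, ECS task definitions can ship a container’s stdout and stderr to CloudWatch Logs via the awslogs log driver. A hypothetical logConfiguration fragment for a container definition; the log group and region are examples:

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/my-app",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "my-app"
    }
  }
}
```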
Time for the opinionated portion of this article.
In keeping with the spirit of continuous delivery, ECS makes incremental and fresh deployments easy and fast. When it’s time to deploy, containers start far faster than freshly launched EC2 instances. Fast deployments also make service recovery almost instantaneous: if a container instance becomes unhealthy, the cluster automatically schedules the creation of a replacement.
In a perfect world, managing dependencies would be trivial, but often this isn’t the case. Apps can depend on unsupported versions of other software. Maybe your app has a dozen outdated usages of libcurl that break after installing a newer version of curl. Operations people should not have to account for developer quirkiness. Their primary concern with application deployment should be keeping a business’s services healthy, available and secure.
Different applications run on different stacks with varying dependencies, but any containerized application can be deployed on a cluster. Maybe your CRUD application is built on PHP, your API integrations on Node, and your command-line runners in Java. If these applications had been created at the same time by the same team, perhaps they would all share the same operating-system-level dependencies and could run on the same server. That is not always the case. When you containerize individual applications, you control every operating-system-level dependency.
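As a sketch, a Dockerfile lets each application pin its own OS-level packages independently of everything else on the cluster; the base image and package below are illustrative:

```dockerfile
# Hypothetical Dockerfile for the PHP CRUD app: it pins its own libcurl,
# so upgrading curl for another app on the cluster can't break it.
FROM php:5.6-apache
RUN apt-get update \
 && apt-get install -y --no-install-recommends libcurl3 \
 && rm -rf /var/lib/apt/lists/*
COPY . /var/www/html/
```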
ECS and services like it make maintaining container-driven architecture practical and delightful. The container ecosystem is maturing and businesses running their applications in containers now have everything they need for deployment in AWS. I hope this document brings light to the wonders of ECS. If you have any recommendations on a better strategy to deploy containers, shoot me an email at email@example.com. We’re always looking for bigger and better ways to deliver our services.
About the Author – Nate Turner
Spreading the gospel of modern software, especially high-concurrency frameworks, test-driven design, DevOps, tooling and design patterns. At Underground Elephant he works extensively with Vert.x, Amazon Web Services, and Reactive Extensions, building performant applications for our clients and internal use.
When he’s not programming he’s traveling, fine dining or wandering around the sunny streets of Downtown San Diego.