Written by: Chris Tozzi
Starting with the advent of Docker in 2013, containers have become a fundamental building block of modern IT infrastructure. Whether you host applications in the cloud, on-premises, or both, containers are often an excellent choice for building scalable, agile application environments.
A container is an isolated environment in which an application or part of an application can run.
In this context, the term “isolated” is relative. The processes that run inside a container are isolated from processes running in other containers on the same server. However, containers share some resources with the host operating system, so they are not entirely independent environments.
You can think of containers, then, as a lightweight form of virtualization. They provide a level of abstraction and isolation that would not be available if you hosted applications directly on a server. But they are not as isolated as virtual machines, which host their own, fully independent operating system environments.
Although containers aren't always the best way to host applications, they offer some important advantages compared to running applications directly on a host server or using virtual machines.
Because containers abstract applications from the host operating system, it is easy to deploy containers without applying any special configurations to the host. No matter which operating system version you are using or how it is configured, you can typically deploy a container on it quickly and easily, provided the system supports the container runtime you are using.
Compared to virtual machines, containers start faster and consume fewer resources. It is also easier for applications hosted inside a container to access bare-metal hardware on the host system, a feature that can be useful when running applications that offload processing to GPUs, for instance.
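GPU access is a concrete example of this. As a sketch, assuming Docker 19.03 or later and the NVIDIA Container Toolkit are installed on the host (the CUDA image tag below is illustrative):

```shell
# Run a throwaway container with access to all host GPUs,
# then print the GPUs it can see.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Restrict the container to a single specific GPU instead.
docker run --rm --gpus '"device=0"' nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

Achieving the same pass-through from a virtual machine typically requires heavier configuration, such as dedicating a physical GPU to the VM.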
Technology that makes it possible to host applications or processes inside self-contained environments while still sharing some resources with the host operating system has been around since the late 1970s, when Unix gained the chroot feature. BSD jails, LXC, and Solaris Zones are also examples of container-like technologies that have existed for at least a decade.
However, it wasn’t until the introduction of Docker in 2013 that containers entered widespread use. Kubernetes, which became an open source project in 2014, spurred even more interest in containers by providing an efficient orchestration framework for managing them at scale.
Sometimes when people refer to containers, what they’re really talking about are container images.
A container image is the set of instructions and resources that are used to create a container. A container itself is what you get when you launch an application or microservice based on the instructions inside a container image.
Thus, if an admin says she is going to “download an NGINX container,” what she really means is that she’ll download a container image that can be used to launch instances of the NGINX web server inside containers.
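The distinction is easy to see at the command line. As a minimal sketch, assuming the Docker CLI is installed:

```shell
# Download the NGINX container image from a registry.
docker pull nginx:latest

# Launch a container -- a running instance -- from that image.
docker run -d --name web -p 8080:80 nginx:latest

# Images and containers are tracked separately:
docker images   # lists the downloaded nginx image
docker ps       # lists the running "web" container
```

Running `docker run` again with a different `--name` launches a second, independent container from the same image, which is exactly the image-versus-container distinction in action.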
To make it easy to share container images and manage different image versions of the same application, teams typically use container registries. A container registry is a repository where developers can upload images and users can download them.
Container registries can be public, meaning that the images they store may be downloaded by anyone on the Internet. Docker Hub is an example of a popular public registry. (It’s possible to restrict access to container images on Docker Hub if you wish, but the platform is most often used for sharing images with the public at large.)
Alternatively, an organization that uses containers in-house may opt for a private registry, which it can use to store and distribute container images for its line-of-business or other applications.
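As a hedged sketch of how a team might publish an image to such a private registry (the registry hostname, repository path, and image name below are hypothetical):

```shell
# Log in to the organization's private registry (hypothetical hostname).
docker login registry.example.com

# Tag a locally built image so its name points at the private registry.
docker tag myapp:1.4.2 registry.example.com/line-of-business/myapp:1.4.2

# Upload the image; teammates can now pull it by the same name.
docker push registry.example.com/line-of-business/myapp:1.4.2
```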
Containers can run on-premises or in the cloud, and they offer the same general scalability and agility benefits in both types of environments.
However, running containers in the cloud can be simpler because public cloud vendors offer managed container services (such as Amazon ECS on AWS and Azure Container Instances on Azure) that eliminate the need to set up and provision your own hardware for hosting containers. You can also take advantage of managed Kubernetes orchestration services, like Amazon EKS and Azure AKS.
While containers make it easy to deploy applications anywhere and start them quickly, they add another layer to your software environment that needs to be secured. It’s important to scan container images for malware, enforce strong access controls in container registries, and deploy container runtime security tools to mitigate vulnerabilities that may become active when a container is running.
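As one illustrative approach to image scanning, an open source scanner such as Trivy can be run against an image before it is deployed (a sketch, assuming Trivy is installed on the build machine; the image names are examples):

```shell
# Scan a container image for known vulnerabilities.
trivy image nginx:latest

# In CI, fail the pipeline when high or critical vulnerabilities are found.
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:1.4.2
```

Wiring a scan like this into the pipeline that pushes images to your registry helps catch vulnerable images before they ever reach a runtime environment.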
Learn more about securing containers with this eBook.