Containerization is the practice of deploying applications in containers, typically in a cloud environment. Containers are commonly explained by comparison with virtualization and Virtual Machines (VMs): they are a lightweight, software-defined alternative to VMs that sheds most of the drawbacks of virtualization and has ushered in a new era of application deployment.
The inception of containerization can be traced back to 2006 and cgroups, the Linux kernel feature that limits, accounts for, and isolates the resource usage of a collection of processes. But it wasn't until the 2013 advent of Docker Engine, an industry standard for containers with simple developer tools and a universal packaging approach, that adoption of the technology skyrocketed.
What are Virtual Machines?
A virtual machine is an emulation of a computer system, typically implemented in software, hardware, or a combination of both. It lets you run an operating system as an application on your desktop that behaves like a full, separate computer, drawing its resources from the underlying host machine and its operating system (OS).
VMs are powerful, but that power comes at a high operational cost. They are resource intensive and tightly coupled to the host OS on which they run, which makes them non-portable and harder to maintain.
With the emergence of cloud computing, these costs became a pressing problem, one that optimization techniques alone could not solve. It called for a change in architectural perspective, for something fundamentally different: enter containerization.
What is Containerization?
Containerization is the packaging of software code with just the operating system libraries and dependencies required to run the code to create a single lightweight executable—called a container—that runs consistently on any infrastructure.
Containers are isolated from the host's operating system and are lightweight, portable, and resource-efficient. This allows developers to create and deploy stable, reliable applications faster and more securely in any cloud environment or OS.
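In Docker terms, this packaging is usually described in a Dockerfile. The following is a minimal, hypothetical sketch for a small Python service; the base image, file names, port, and start command are illustrative assumptions, not details taken from the text:

```dockerfile
# Hypothetical Dockerfile: bundle application code with only the
# libraries and dependencies it needs into one lightweight image.
FROM python:3.12-slim                 # minimal base image with the runtime
WORKDIR /app
COPY requirements.txt .               # declare the app's dependencies...
RUN pip install --no-cache-dir -r requirements.txt   # ...and bake them in
COPY . .                              # add the application code itself
EXPOSE 8000                           # port the service listens on
CMD ["python", "app.py"]              # command run when the container starts
```

Because everything the code needs is inside the image, the resulting container runs the same way on a laptop, a server, or any cloud platform.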
Cloud-native applications and services, such as microservices, databases, and web servers, are commonly containerized, giving them the many benefits of containerization:
Portability
Containers are loosely coupled to the host environment and run consistently across any cloud platform or OS.
Speed
Containers are fast to start (and restart) because they share the host OS kernel and do not have to boot a full guest OS of their own.
Fault isolation
Containers run independently of one another. A fault in one container does not affect or propagate to any other container. This makes it easier to identify which container is at fault and to apply a targeted fix.
Resource efficient
Containers package just what they need to run successfully and are not weighed down by the extra overhead of a full operating system.
Easier to maintain
Because an application is split into isolated containers that run independently of one another, each piece is far easier to maintain and update without putting the rest of the system at risk.
Security
As with fault isolation, each container presents a smaller attack surface, making containers more secure and easier to defend against attack vectors, both for the containers themselves and for the underlying operating system.
Lightweight and highly scalable
Because containers are lightweight, they can be replicated, started, and restarted easily, making them highly scalable.
Less operational cost and overhead
Because containers are lightweight, portable, and resource efficient, they are far cheaper and easier to operate, with minimal overhead.
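These benefits compound when several containerized services run together. The following is a hypothetical Docker Compose sketch of the kind of stack described above, a web server, a microservice, and a database; the service names, images, and paths are illustrative assumptions:

```yaml
# Hypothetical docker-compose.yml: each service runs in its own container,
# so it can be scaled, restarted, or replaced independently of the others.
services:
  web:
    image: nginx:alpine            # web server container
    ports:
      - "80:80"
    depends_on:
      - api
  api:
    build: ./api                   # microservice built from a local Dockerfile
    depends_on:
      - db
  db:
    image: postgres:16             # database container with its own isolated state
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

If the `api` container crashes, the fault stays contained: `web` and `db` keep running, and the failed service can be restarted or replicated on its own.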
Virtualization vs Containerization
Containers and Virtual Machines both allow multiple kinds of software to be executed in a single environment. However, containerization delivers significant benefits over virtualization and is rapidly becoming the de facto standard for cloud-native applications.
In a recent IBM survey, for example, 61% of container adopters reported using containers in 50% or more of the new applications they built during the previous two years; 64% of adopters expected 50% or more of their existing applications to be put into containers during the next two years.
While containerization and virtualization both exist to solve similar problems with overlapping responsibilities and capabilities, their approach is fundamentally different.
Virtualization allows multiple operating systems and software applications to run concurrently and share the resources of a single physical machine. A VM encapsulates its executable files, libraries, packages, and dependencies, along with a full copy of a guest OS. This leads to non-portability, slow start-up times, resource inefficiency, and interoperability bugs when a VM is run in a different environment.
The overheads of virtualization multiply at scale, when, as is common, many VMs run simultaneously. Before the rise of containerization, virtualization was the standard way of isolating applications within infrastructure such as web servers or operating systems.
Containerization, in contrast to virtualization, is more resource efficient. Instead of bundling in a copy of an OS, as a VM does, a container is a single executable package of software that bundles application code together with all of the related configuration files, libraries, and dependencies required for it to run. This shift in architecture gives containerization the many advantages over virtualization noted above.