Containers and virtual machines both have a role in modern cloud architecture, and each has its own advantages and disadvantages. The graphic below summarises the key differences.
The main use case for cloud containerisation is application decomposition and microservices architectures, which need the scalability and fungibility offered by cloud platforms. Rewriting a monolith as hundreds of 'cloud-native' containerised applications is a typical example.
Containers are deployed in two ways: either by building an image to run in a container or by downloading a pre-built image, for example from Docker Hub. Although alternatives exist, Docker, originally built on LXC, has become by far the largest and most popular container platform and is now almost synonymous with containerisation.
Source: SearchCloudSecurity
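As a concrete illustration of the two deployment routes described above, the sketch below uses the Docker SDK for Python (the docker package) to pull a pre-built image from Docker Hub or, alternatively, to build one from a local Dockerfile. The ./myapp directory, the myapp:1.0 tag and a locally running Docker daemon are assumptions made for the example.

```python
import docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# Option 1: download a pre-built image from Docker Hub.
nginx_image = client.images.pull("nginx", tag="latest")
print("Pulled:", nginx_image.tags)

# Option 2: build an image from a local Dockerfile.
# Assumes a directory ./myapp containing a Dockerfile; both names are hypothetical.
my_image, build_logs = client.images.build(path="./myapp", tag="myapp:1.0")
print("Built:", my_image.tags)
```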
Mechanics of Cloud Containers
Container technology comes out of the partitioning and chroot process isolation developed in Linux. Modern containers centre on application containerisation, such as Docker, and on system containerisation, such as Linux Containers (LXC). Both enable an IT team to abstract application code from the underlying infrastructure, simplifying version management and enabling portability across various deployment environments.
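A rough sense of that chroot-style isolation can be seen in the minimal Python sketch below, which confines a process to a prepared directory tree. It must run as root on Linux, and the /srv/minirootfs path is a hypothetical root filesystem (for example, an extracted Alpine mini rootfs); modern container runtimes layer namespaces and cgroups on top of this basic idea.

```python
import os

# Hypothetical directory containing a minimal root filesystem
# (e.g. an extracted Alpine "minirootfs" tarball). Must be run as root.
NEW_ROOT = "/srv/minirootfs"

def run_in_chroot(command):
    """Fork a child, confine it to NEW_ROOT with chroot, then exec the command."""
    pid = os.fork()
    if pid == 0:                          # child process
        os.chroot(NEW_ROOT)               # the filesystem root becomes NEW_ROOT
        os.chdir("/")                     # step inside the new root
        os.execvp(command[0], command)    # child can no longer see the host filesystem
    else:                                 # parent process
        _, status = os.waitpid(pid, 0)
        return status

if __name__ == "__main__":
    # Lists the confined root directory, not the host's.
    run_in_chroot(["/bin/ls", "/"])
```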
Containers rely on virtual isolation to deploy and run applications that share an OS kernel without the need for VMs. Containers hold all the necessary components, such as files, libraries and environment variables, to run the desired software without worrying about platform compatibility. The host OS also constrains each container's access to physical resources, such as CPU and memory, so a single container cannot consume all of a host's resources.
The key aspect of cloud containers is that they are designed to virtualize a single application. For example, you have a MySQL container, and that’s all it does — it provides a virtual instance of that application. Containers create an isolation boundary at the application level rather than at the server level. This isolation means that, if anything goes wrong in that single container — for example, excessive consumption of resources by a process — it only affects that individual container and not the whole VM or whole server. It also eliminates compatibility problems between containerized applications that reside on the same OS.
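A sketch of such a single-purpose container, again using the Docker SDK for Python, is shown below. The memory and CPU caps illustrate the host OS constraining the container's share of physical resources; the container name, password and limits are placeholder values for the example.

```python
import docker

client = docker.from_env()

# Run a container that does one thing: provide a MySQL instance.
# mem_limit and nano_cpus cap the resources this one container may consume,
# so a runaway process inside it cannot starve the rest of the host.
mysql = client.containers.run(
    "mysql:8.0",
    name="orders-db",                                     # hypothetical name
    environment={"MYSQL_ROOT_PASSWORD": "example-only"},  # placeholder secret
    mem_limit="512m",                                     # hard memory cap
    nano_cpus=1_000_000_000,                              # roughly one CPU core
    detach=True,
)
print(mysql.name, mysql.status)
```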
Major cloud vendors offer containers-as-a-service products, including AWS ECS, AWS Fargate, Google Kubernetes Engine, Microsoft Azure Container Instances, Azure Kubernetes Service and IBM Cloud Kubernetes Service. Containers can also be deployed on public or private cloud infrastructure without the use of dedicated products from a cloud vendor.
Cloud containers vs. VMs
The key differentiator with containers is the minimalist nature of their deployment. Unlike VMs, they don't need a full OS to be installed within the container, and they don't need a virtual copy of the host server's hardware. Containers can operate with a minimal set of resources to perform the task they were designed for: just a few pieces of software, libraries and the basics of an OS. This means two to three times as many containers can be deployed on a server as VMs, and they can be spun up much faster than VMs.
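The claim about fast start-up is easy to check empirically. The sketch below times how long it takes to start, run and discard a container from a small image; the exact figure depends on the host and on whether the image is already cached locally.

```python
import time
import docker

client = docker.from_env()
client.images.pull("alpine", tag="3.19")   # pre-pull so the timing excludes the download

start = time.time()
# Start a container, run a trivial command, and remove the container afterwards.
client.containers.run("alpine:3.19", "true", remove=True)
elapsed = time.time() - start
print(f"Container started, ran and exited in {elapsed:.2f} s")
```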
Once a container has been created, it can easily be deployed to different servers running the same OS. From a software lifecycle perspective, this is great, as containers can quickly be copied to create environments for development, testing, integration and production. From a software and security testing perspective, this is advantageous because it ensures the underlying OS is not causing a difference in the test results.
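One way to exploit that consistency is to promote exactly the same image through each environment rather than rebuilding it per stage. The sketch below retags a locally built image for per-environment repositories and pushes it; the myapp:1.0 image and registry.example.com registry are hypothetical.

```python
import docker

client = docker.from_env()

# The image built once by the CI pipeline (hypothetical tag).
image = client.images.get("myapp:1.0")

# Promote the identical image to each environment by retagging and pushing it,
# so development, test, integration and production all run the same bits.
for stage in ("dev", "test", "prod"):
    repo = f"registry.example.com/myapp-{stage}"   # hypothetical registry
    image.tag(repo, tag="1.0")
    client.images.push(repo, tag="1.0")
```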
An important disadvantage of containers is complexity: virtualisation is split into lots of smaller pieces. When there are just a few containers involved, this is an advantage, because you know exactly what configuration you're deploying and where. However, if you fully invest in containers, it's quite possible to soon have so many that they become difficult to manage. Just imagine deploying patches to hundreds of different containers. If a specific library inside a container needs updating because of a security vulnerability, can your operations team handle this across all of them? Container management platforms such as Kubernetes or Docker Swarm help, but they add their own cost and complexity.
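Even a simple inventory script hints at the scale of the problem: before patching a library, the operations team first has to find every running container built from the affected image. A minimal sketch using the Docker SDK for Python is below.

```python
from collections import Counter
import docker

client = docker.from_env()

# Count running containers per image tag. In a large estate this list can run
# to hundreds of entries, each of which may need rebuilding and redeploying
# when a library inside the image has a security vulnerability.
usage = Counter()
for container in client.containers.list():
    tags = container.image.tags or ["<untagged>"]
    usage[tags[0]] += 1

for image, count in usage.most_common():
    print(f"{count:4d} running container(s) from {image}")
```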
Cloud container security
Isolation brings some security improvements, but it also introduced issues that have since been addressed with user namespaces. Docker containers originally had to run as a privileged user on the underlying OS, which meant that, if key parts of the container were compromised, root or administrator access could potentially be obtained on the underlying OS, or vice versa. Docker now supports user namespaces, which enable containers to be run as specific users.
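At a minimum, an individual container can be told to run its process as an unprivileged user, as in the sketch below (daemon-wide user-namespace remapping is configured separately in the Docker daemon settings). The UID/GID pair shown is arbitrary.

```python
import docker

client = docker.from_env()

# Run the container's process as an unprivileged user instead of root.
# The output of `id` shows uid=1000 rather than uid=0.
output = client.containers.run(
    "alpine:3.19",
    "id",
    user="1000:1000",   # arbitrary non-root UID:GID for the example
    remove=True,
)
print(output.decode().strip())
```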
A second option to minimize access issues is to deploy rootless containers. These containers add an additional security layer because they do not require root privileges. Therefore, if a rootless container is compromised, the attacker will not gain root access. Another benefit of rootless containers is that different users can run containers on the same endpoint. Docker currently supports rootless containers, but Kubernetes does not.
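When Docker itself runs rootless, clients simply point at the per-user daemon socket instead of the system-wide one. The sketch below assumes a rootless daemon has already been set up for UID 1000; the socket path follows the usual $XDG_RUNTIME_DIR/docker.sock convention but may differ on a given host.

```python
import docker

# Connect to a rootless Docker daemon via its per-user socket
# (typically $XDG_RUNTIME_DIR/docker.sock; UID 1000 assumed here).
client = docker.DockerClient(base_url="unix:///run/user/1000/docker.sock")

# Containers started through this daemon run without root privileges on the host.
print(client.version()["Version"])
```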
Another issue is the security of images downloaded from Docker Hub. When downloading a community-developed image, its security cannot be guaranteed. Docker addressed this starting in version 1.8 with a feature called Docker Content Trust, which verifies the publisher of the image. Images can also be scanned for vulnerabilities, although this is only a partial solution. Proper training for those creating images is critical.
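Docker Content Trust itself is enabled through the Docker CLI (via the DOCKER_CONTENT_TRUST environment variable) rather than through the Python SDK, but a related, lighter-weight safeguard is to record and pin the digest of an image once it has been vetted, as in the sketch below, so that later deployments pull exactly the same content.

```python
import docker

client = docker.from_env()

# Pull an image, then record its content-addressable digest.
image = client.images.pull("nginx", tag="1.25")
digests = image.attrs.get("RepoDigests", [])
print("Pin this digest in your deployment definitions:", digests)

# Deployments can then reference the image by digest (nginx@sha256:...) so that
# a re-tagged or tampered image with the same tag will not be pulled silently.
```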
Both are valid
Containers offer portability, both internally and in the cloud, coupled with low cost, which makes them a logical architectural choice for decomposed, microservice-based applications.
Obviously, VMs have their own advantages and use cases. In reality, organisations will deploy both, depending on the use case.