Know About Containers

Containers are quickly replacing virtual machines as the go-to choice for workload deployment, and Kubernetes is the world's best-known container orchestrator.

Kubernetes is an open-source container orchestration platform that makes it easier to deploy your apps. It automates the deployment, management, scaling, and networking of your containers.

Containers are the ideal unit of delivery because they encapsulate your application, its middleware, and its operating system (OS) packages into a single artifact. You can create container images in several ways; the most popular is with a Dockerfile.

The Dockerfile describes all of the relevant details necessary to package the image.
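As a taste of what that looks like, here is a minimal Dockerfile sketch for a hypothetical Node.js service; the base image, port, and file names are illustrative, not a prescription:

```dockerfile
# Start from a small official base image (illustrative choice)
FROM node:20-alpine

# Copy the dependency manifest first so this layer is cached between builds
WORKDIR /app
COPY package*.json ./
RUN npm install --production

# Copy the rest of the application source
COPY . .

# Document the port the app listens on and define the startup command
EXPOSE 3000
CMD ["node", "server.js"]
```

Running `docker build -t myapp .` in the directory containing this file produces an image that packages the app together with everything it needs to run.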

Compared to virtual machines, containers share the host's OS kernel instead of each carrying a full copy of it, the way multiple VMs on a single host do. Although it is possible to put your microservices in multiple VMs, containers are commonly used instead because they take up less space and boot faster.

According to Docker (the company behind the most popular container platform), a container is a lightweight, standalone package that has everything needed to execute a piece of software: your code, runtime, system tools, libraries, binaries, and settings. Containers are available for both Linux and Windows apps, and a container runs the same every time regardless of where you run it. It also adds a layer of isolation, helping reduce conflicts between teams running different software on the same infrastructure.

Containers are blowing up. Everyone is talking about Docker, Kubernetes, and containerization. In this blog post, I’m going to dive into what a container is, why containers have become so popular, and clear up some misconceptions.

Containers sit one level deeper in the virtualization stack, allowing lighter environments, more isolation, more security, more standardization, and many other blessings. Instead of virtualizing the whole operating system the way virtual machines (VMs) do, containers take advantage of sharing most of the host system's core and add only the binaries and libraries the host doesn't already provide; no more gigabytes of disk space lost to bloated operating systems full of repeated stuff.

This means a lot of things: your deployments ship in a much smaller image than a full operating system would require, each deployment boots up way faster, idle resource usage is lower, and there is less configuration and more standardization (remember “convention over configuration”). Fewer things to manage and more isolated apps mean fewer ways to screw something up, which shrinks the attack surface and, in turn, improves security. But keep in mind that not everything is perfect, and there are many factors you need to take into account before stepping into the containerization realm.

Let’s start by unpacking real-world shipping containers. To a shipper, a container is a box they can put their stuff in and send it around the world with little worry. A shipping container can hold a bunch of stuffed toys tossed in any which way or expensive engineering equipment held in place by specific contraptions. These containers can hold most things however an individual needs them to be held. This is by design.

Shipping containers are built to precise standards so they can be stacked, carried on trains and trucks, and lifted through the air. No matter the manufacturer, a container built to the specification will work with all of these systems.

There is no container object in Linux; it’s just not a thing. A container is a process that is contained using namespaces and cgroups. There is also the SELinux kernel module, which provides fine-grained access control for security purposes.

Namespaces are what give a container its contained aspect. That is to say, they are why a process operating inside a container appears to be root with full access to everything. Namespaces scope access to kernel-level resources such as

  • PIDs (process IDs)

  • UTS (hostname)

  • mounts

  • network interfaces

  • user IDs

  • interprocess communication
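You can see these building blocks directly on any Linux host: each process has a handle for every namespace it belongs to under /proc, and its cgroup membership is listed there too. A quick, unprivileged way to poke at them (assuming a Linux host with procfs mounted):

```shell
# Each entry under /proc/self/ns is one namespace this shell belongs to:
# pid, net, mnt (mounts), uts (hostname), user, ipc, and so on.
ls /proc/self/ns

# The cgroup hierarchy the current process is a member of
cat /proc/self/cgroup
```

A container runtime creates fresh copies of these namespaces for the contained process instead of letting it share the host's, which is exactly the "contained" effect described above.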

Since a container is just a handful of kernel structures combined to build a contained process, it stands to reason that the definition of a container varies from runtime to runtime. Runtimes will be explored in more detail in a future post.

Container Orchestrator

As containers became popular, people realized they needed an automated way to manage the hundreds or thousands of containers inside a cluster. At that scale, automated orchestration stops being optional.

You might have been able to manage a virtualized cluster of a few dozen servers manually. But there is no way you can handle a production-scale container environment without the assistance of automation.

So tools such as Docker Swarm, Marathon, and Kubernetes were developed. Their main job is to automate the provisioning of containerized infrastructure, to provide load balancing and service discovery for the workloads running inside the containers, and to enable auto-recovery and scaling of the containers.

At this stage the runaway leader in container orchestration, based on project contributions and ecosystem, is Kubernetes. It was originally conceived by Google, based on their internal orchestrator Borg, and Google has since contributed Kubernetes to the CNCF.
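To make that automation concrete, here is a sketch of a Kubernetes Deployment manifest; applying a file like this with `kubectl apply -f` asks the cluster to keep a set number of replicas of a container running and to replace any that fail (the names and image here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative container image
          ports:
            - containerPort: 80
```

Note that the manifest is declarative: you describe the desired state, and the orchestrator continuously reconciles the cluster toward it rather than running your containers imperatively.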

