Docker Explained: What is it? Why does it exist?
Hello Everybody! Tansanrao here. In my previous post about building a web hosting service with containers, we talked about using something called Docker to build and run our containers. This post is aimed at giving you a semi-detailed idea of what Docker is, what containers are, and how they work. Let’s Get Started!
What is Docker?
Docker is a tool designed to make it easier to create, deploy, and run applications in a reproducible way using containers.
Containers allow a developer to package an application together with all the libraries, dependencies, and everything else it needs to run as one package. With this, the developer can be assured that, irrespective of the configuration of the host machine on which the container is deployed, it will run properly and its behaviour will not differ from the machine used when developing and testing the code.
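As a sketch of what that packaging looks like in practice, here is a minimal, hypothetical Dockerfile for a small Python application (the file names `requirements.txt` and `app.py` are assumptions for illustration, not anything from a real project):

```dockerfile
# Start from a slim base image that already contains a Python runtime
FROM python:3.12-slim

# Work inside /app in the container's filesystem
WORKDIR /app

# Install the app's dependencies first, so this layer is cached
# as long as requirements.txt does not change
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# The command the container runs when started
CMD ["python", "app.py"]
```

Everything the app needs — interpreter, libraries, code — is baked into the resulting image, which is what makes the behaviour reproducible across machines.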
Why Docker over a Regular VM?
One can compare Docker containers to virtual machines, but instead of packaging an entire operating system, Docker lets you package only the application and its dependencies while sharing the Linux kernel of the host machine.
Because the host's kernel is shared, containers only need to ship the libraries and dependencies the application itself requires, rather than a full operating system. This significantly reduces the size of the packages and boosts performance and deployment times.
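You can see the kernel sharing for yourself with a couple of commands (this is a sketch that assumes Docker is installed and the daemon is running):

```shell
# Run a tiny Alpine Linux container and ask it which kernel it sees.
# It reports the *host's* kernel version, because no kernel ships in the image.
docker run --rm alpine uname -r

# Compare image sizes: the alpine image is only a few megabytes,
# far smaller than any full VM disk image.
docker images alpine
```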
What is a Container?
Containers are a way of packaging an application in a way which is platform independent and extremely portable across various distributions of Linux.
Containers require three categories of software:
- Builder: technology used to build a container
- Engine: technology used to run a container
- Orchestration: technology used to manage multiple containers
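To make the three categories concrete, here is one representative command for each (the image name `myapp` is a hypothetical placeholder, and the orchestration example assumes a Kubernetes cluster is available):

```shell
# Builder: build an image from the Dockerfile in the current directory
docker build -t myapp:latest .

# Engine: run a container from that image, removing it when it exits
docker run --rm myapp:latest

# Orchestration: scale an already-created deployment across 3 replicas
kubectl scale deployment myapp --replicas=3
```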
One of the appealing attributes of containers is their ability to gracefully die and respawn on demand, irrespective of whether the container died due to a crash or simply because it was no longer needed under low-traffic conditions.
This is possible because containers are very cheap to start and they are designed to appear, work and disappear seamlessly and on demand.
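The engine can even handle the respawning for you. As a sketch (again with a hypothetical `myapp` image), Docker's restart policies tell the daemon what to do when a container dies:

```shell
# Respawn the container automatically if it crashes, up to 3 attempts
docker run -d --restart=on-failure:3 --name web myapp:latest

# A short-lived container started with --rm simply runs, exits,
# and disappears without leaving anything behind
docker run --rm alpine echo "done"
```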
Containers are meant to be ephemeral (lasting for a short time), and thus, tasks related to monitoring and managing them are not handled by humans in realtime but are instead collected centrally and automated.
Linux containers have allowed a major shift in high-availability computing, and there are many tools to help develop and run services in containers. Docker is one of many that are compatible with the Open Container Initiative (OCI) specification. The OCI is an industry standards body tasked with encouraging innovation and development in the field of containers without the danger of being locked in to a single vendor.
Docker provides the functionality of both a builder and an engine. Docker Engine depends on containerd, which in turn uses the runc runtime. containerd is an abstraction layer that allows an engine to delegate the low-level work of actually creating and running containers on a host.
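You can ask a running Docker daemon which low-level OCI runtime it delegates to (assuming Docker is installed; on a stock installation this typically reports `runc`):

```shell
# Print the daemon's default low-level container runtime
docker info --format '{{.DefaultRuntime}}'
```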
Now you may have heard a lot of buzz about Kubernetes, and if you were wondering what it was, it is one of the tools that provides container orchestration.
What is container orchestration? Put simply, container orchestration automates the deployment, management, scaling, and networking of containers.
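As a small sketch of what that looks like day to day (assuming access to a Kubernetes cluster; the deployment name `web` is a placeholder), the orchestrator handles deployment, scaling, and networking from a few declarative commands:

```shell
# Deploy: create a deployment running the public nginx image
kubectl create deployment web --image=nginx

# Scale: go from 1 replica to 5 with a single command
kubectl scale deployment web --replicas=5

# Network: expose the deployment as a service on port 80
kubectl expose deployment web --port=80
```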
Who is it for?
Docker is a tool that is designed for both developers and system administrators, making it part of many DevOps Toolchains and Pipelines.
It allows developers to focus on writing code without worrying about the system it will ultimately run on. It also lets them get a head start in development by using one of the thousands of programs already packaged to run in a Docker container.
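For example, getting a working web server takes one command, using a prebuilt image from a public registry (this assumes Docker is installed and port 8080 is free on your machine):

```shell
# Pull and run the official nginx image, mapping host port 8080
# to the container's port 80
docker run -d -p 8080:80 --name demo nginx

# The server is now reachable locally
curl http://localhost:8080
```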
For systems-operations staff, Docker offers flexibility and reduces the number of systems needed, thanks to containers' smaller footprint and lower performance overhead when running.