Docker is a virtualization technology that packages an application together with all of its dependencies and libraries in a single container. The resulting software container can run on machines other than the one where it was created, with the sole requirement that they have the Docker container runtime installed.
Container technology is more efficient than virtualizing an entire operating system because it does not need to replicate the whole system, only what is necessary to run the container. Docker uses Linux cgroups to manage container resources and kernel namespaces to isolate containers from one another.
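As an illustration, the resource limits that cgroups enforce can be set directly from the Docker CLI; the image name and limit values below are placeholders, not anything specific to this article:

```shell
# Run an nginx container capped at half a CPU core and 256 MB of RAM.
# Docker translates these flags into cgroup limits for the container.
docker run -d --name capped-web --cpus="0.5" --memory="256m" nginx

# Inspect the limits Docker actually applied to the container.
docker inspect capped-web \
  --format 'CPUs (nano): {{.HostConfig.NanoCpus}} Memory: {{.HostConfig.Memory}}'
```

If the container tries to exceed its memory limit, the kernel's cgroup mechanism steps in, without affecting other containers on the same host.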
The main advantages that containerization offers us are the following:
It provides consistency, since the container holds the application code, the libraries it uses (both its own and third-party ones) and the complete execution environment. For example, for a Java application the container would include the developed code, all of its libraries and the complete application server, leaving far fewer points of failure when running the application.
We gain standardization and productivity: by using the same image for every environment (development, integration, quality assurance, pre-production, production or whatever they may be) it becomes easier to manage, tag, back up and roll back. It also helps with continuous integration and the application of the DevOps philosophy, because it facilitates running automated tests on the software and promoting it between the different environments.
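For instance, promoting the very same image between environments is typically just a matter of re-tagging it; the registry and image names here are hypothetical:

```shell
# The exact image that was tested in staging is promoted to production
# unchanged; only the tag (a label pointing at the same image) is new.
docker tag registry.example.com/myapp:1.4.2 registry.example.com/myapp:production
docker push registry.example.com/myapp:production
```

A rollback is equally simple: re-point the `production` tag at the previous version and push again.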
It improves security by keeping containers separate and isolated: if an intruder compromises the application running inside a container, the scope of the attack is limited to that container rather than the entire system.
It is multi-cloud, since every major public and private cloud offers environments that can run Docker containers.
On the other hand, as the number of containers grows, a system is needed to manage and automate operations on them, which is why platforms such as Kubernetes, Rancher and OpenShift appeared.
Combining containers with a platform to manage them gives us the greatest advantage of this technology stack: dynamic autoscaling based on load, which is very useful for large websites with many visits and an irregular load. To explain it with an example: if we have a company web application whose number of visits varies with the time of day, the day of the month or the month of the year (which is normal), container technology lets us configure the system so that when the load exceeds a certain threshold, new containers are started to absorb the increase in demand, and when the load drops again, those containers are removed. We thus use only the resources needed at each moment and reduce the cost of hardware provisioning.
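In Kubernetes, for example, this threshold-based scaling can be configured with a one-line autoscaler; the deployment name and the thresholds below are illustrative, not taken from the article:

```shell
# Scale a hypothetical "web" deployment between 2 and 10 replicas,
# adding pods when average CPU utilization exceeds 70%.
kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10

# Check the current state of the horizontal pod autoscaler.
kubectl get hpa web
```

When traffic subsides, the autoscaler scales the deployment back down toward the minimum, which is exactly the "pay only for what you use" behavior described above.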
So far this all sounds great, but... what do we find in most Bitcoin full nodes?
The typical hardware for running a full node, together with other applications that extend its functionality and ease its management, is usually a basic board with a modest processor, little CPU power and little RAM. In the case of BCubium this is even more pronounced, because our hardware is very limited (we want a small node) and we are already running bitcoin-core, LND, BTC-RPC-Explorer, RTL, the node administration website and a few other tools. We see no gain in running all of this with Docker, because:
The applications we run are developed by third parties; they are delivered fully functional and install without problems, with no need to containerize them.
When installed, they automatically pull in all the libraries they need from the Linux operating system through its package system, so they effectively install themselves.
We are not going to modify these applications; that is, they carry no development of our own, so we do not need to run any continuous integration system, test suites or promotion between environments.
Running them containerized means extra resource consumption, since each container image ships its own base system plus the tools the container management needs plus the libraries required for its execution. This leads to duplicated libraries and tools across the different containers and ultimately forces us onto more powerful hardware, when what we want is exactly the opposite: a minimal system.
For our part, at BGeometrics the development we carry out is the node administration website, which is a small website, so there is no great benefit in building it with containers. We also have a set of scripts that use many operating system applications and utilities; running them in an isolated container makes no sense, because it would mean installing all of those tools inside the containers, and some of them need access to the host operating system.
Running Docker complicates management: in addition to operating the installed software, we must now monitor the execution of each container and the communications between them, which among other things means adding another network interface to the system.
In our case, the system runs on a single motherboard and processor, and we have no interest in running it on a public cloud, so we lose the ease of cloud migration that containers bring.
Dynamic autoscaling, the great advantage of containerization, is pointless here: you are not going to install a system like Kubernetes for a handful of containers when the load is very low and constant, and applications like BTC-RPC-Explorer or RTL are accessed only occasionally and by a single person.
To summarize: Docker is a great technology that is here to stay, and if we add orchestrators like Kubernetes and, on top of them, Rancher or OpenShift, we have the complete kit for tailor-made scaling. But in a case like ours, where the goal is to run software on minimal hardware, we do not need those advantages, and by skipping them we avoid adding complexity to the system and consuming more resources.