Software as piece goods: How Docker is revolutionising the data centre.

Get into the container

Author: Thomas Ulken, Lead Software Architect, Fact Informationssysteme & Consulting GmbH

If you look behind the scenes, modern cloud solutions are, from a technical standpoint, somewhat reminiscent of the IT models of the 1960s and 1970s, when simple text-based terminals were connected to mainframes in data centres via remote data links. The difference is that today much faster network connections, more sophisticated user interfaces and a thousand times more computing power are available, even in the simplest servers.

Nevertheless, for the providers of cloud solutions, it is still a matter of serving as many users as possible in their data centres simultaneously and evenly. This is the only way to achieve cost and efficiency benefits from cloud computing.

Load peaks should be dynamically absorbed and balanced so that users do not have to wait seconds for applications to react to their inputs and queries, as in the past. Cloud services should be as responsive as if they were running on the user’s local device. This is a crucial factor for the success of cloud technology.

Virtualisation and the running of cloud applications in containers play a central role here.

It all began with virtualisation

In the early days of cloud computing, so-called virtual machines were used to serve multiple users on one server. Virtualisation products such as KVM, Xen or VMware make it appear to the operating system and the respective application as if several complete PCs exist on one server, each with its own CPU, memory, hard disk, BIOS, etc.

This ensures a high degree of compatibility and isolation of the various users and their applications. At the same time, it requires significant additional effort, which becomes increasingly noticeable with a growing number of virtual machines per server.

Ultimately, the same software stack has to be booted in each additional virtual machine at start-up, just like on a dedicated server: operating system, database, libraries and configuration files must be installed individually in each VM and updated as necessary. This leads to high resource consumption and slow start-up times, and performance decreases.

Software as a bulk commodity

The desire for more efficient solutions led to the development of container technology, with the open-source solution Docker as its most prominent representative. Here, complete virtual PCs are no longer simulated; instead, just the runtime environments of the various applications are cleverly isolated from each other on one machine. Each application is packaged into one or more software containers that run as independent processes on a common operating system instance. These containers can be started and stopped individually at any time.
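What this looks like in practice can be sketched with Docker's Python SDK (docker-py); the image and container names below are purely illustrative, and a running local Docker daemon is assumed.

    import docker

    client = docker.from_env()          # connect to the local Docker daemon

    # Start two isolated application containers on the same host OS instance.
    web = client.containers.run("nginx:alpine", detach=True, name="demo-web")
    cache = client.containers.run("redis:alpine", detach=True, name="demo-cache")

    # Both run as ordinary processes on a shared kernel ...
    for c in client.containers.list():
        print(c.name, c.status)

    # ... and each can be stopped and removed individually at any time.
    cache.stop()
    cache.remove()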

It is no coincidence that the term “software container” was borrowed from the real world, where standard sea containers have revolutionised the transport of goods since the 1960s. Today, containers transport all kinds of goods: bulk goods, pallets, packages, bags or vehicles. This previously required a lot of manual labour as well as a diverse loading and transport infrastructure. Although there are still specialised variants such as refrigerated containers, tank containers or accommodation containers, this makes little difference to logistics companies thanks to standardisation in terms of size, transport interface and documentation.

The same is now true in the IT world: Software companies deliver their products in the form of standardised software containers; cloud providers push them onto whatever servers in their data centres happen to be free and run them there.

More efficiency through containers

Containers significantly reduce an application’s resource requirements compared to complete virtual machines. Furthermore, the container images are much smaller because they no longer have to contain the operating system kernel, but rather just the application.

As a result, containers start much more quickly and require less memory to run. In addition, they do not need regular updates for the operating system, databases, libraries, etc. Software changes are also much easier with containers because they no longer have to be performed within the production environment.

Instead, changes are tested at the software supplier in a new image (built from a Dockerfile). The image in question is then simply swapped on the server and the update is complete.
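As a rough sketch, this image swap might look as follows with docker-py; the registry address, image tag and container name are hypothetical.

    import docker

    client = docker.from_env()

    # Fetch the new, already tested image from the supplier's registry.
    client.images.pull("registry.example.com/myapp", tag="2.0")

    # Replace the running container: stop and remove the old one ...
    old = client.containers.get("myapp")
    old.stop()
    old.remove()

    # ... and start a new one from the updated image.
    client.containers.run("registry.example.com/myapp:2.0", detach=True, name="myapp")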

Above all, containers can be moved back and forth between different servers during operation with no noticeable interruption or data loss. This ensures that each container receives the required amount of CPU power at all times.

Standard solution Docker

Docker has made these mechanisms broadly popular. But Docker is much more than just an application virtualiser. Docker has also established a format for packaging applications, which significantly simplifies their distribution. This is because the containers contain not only the applications themselves, but also the additional libraries and packages that each application individually needs to run.

Figure: Running applications in Docker containers
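A packaging sketch along these lines, again with docker-py: it assumes a project directory containing a Dockerfile like the one outlined in the comments; all names and paths are illustrative.

    # Assumed Dockerfile in the project directory (illustrative):
    #   FROM python:3.12-slim
    #   COPY app/ /app/
    #   RUN pip install --no-cache-dir -r /app/requirements.txt
    #   CMD ["python", "/app/main.py"]
    import docker

    client = docker.from_env()

    # Build an image that bundles the application together with its libraries.
    image, build_logs = client.images.build(path=".", tag="myapp:1.0")
    print(image.tags)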

Docker also created a standard for how containers can communicate with each other using established techniques such as TCP/IP – both within a server and across computer boundaries.

This enables the tasks of a software solution to be distributed across many independent containers. For example, the application logic is in one container, the required database server in another, an internet gateway in the next one, etc. Where the respective container is currently running is irrelevant.
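The sketch below, once more using docker-py, shows one possible split of this kind: a database server in one container and a small web front end in another, attached to a shared Docker network; the network name, images and port mapping are illustrative.

    import docker

    client = docker.from_env()

    # User-defined network over which the containers talk to each other via TCP/IP.
    net = client.networks.create("app-net", driver="bridge")

    # Database server in one container ...
    db = client.containers.run("postgres:16", detach=True, name="db",
                               network="app-net",
                               environment={"POSTGRES_PASSWORD": "example"})

    # ... application front end in another; inside "app-net" it reaches the
    # database over TCP simply via the host name "db" on port 5432.
    app = client.containers.run("adminer", detach=True, name="app",
                                network="app-net",
                                ports={"8080/tcp": 8080})   # exposed to the outside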

Generally speaking, Docker is geared towards application virtualisation on Linux. However, Docker can also be used under Windows with Hyper-V or VirtualBox, and under macOS with HyperKit or VirtualBox. Docker has therefore become an indispensable part of modern cloud computing.

Also read:

  • The financial industry is where it’s at – Why institutional investors are now betting on cloud computing
  • Interview - Fact Focus spoke with Heiner Brauers – "In the future, clients will focus on control tasks when it comes to disclosure management."
  • Kept under lock and key – Available everywhere, but not for everyone: Security aspects of cloud computing.
  • When apps learned to walk – A brilliant idea and its implementation: The building blocks of cloud computing.
  • The A to Z of cloud computing – A concept in 20 terms. For anyone wanting to have a say on the trendy topic.
  • Quick results with the Fin RP Best Practice Toolkit – Why start from scratch? Better: a quick implementation with the Best Practice Toolkit