Jon Brookes
2025-07-20
The way infrastructure has moved on since the days of the 90s is truly remarkable. Containers and container orchestration have become so prevalent that some folks are no longer aware that there was anything before containers. There was, of course: most computing relied on many individual physical servers, a plethora of network and storage infrastructure, and much, much more. It still does; it’s just that all of this is now hidden from view, abstracted away from us by layers of virtualization.
Sometimes we need to run our own cloud, on our own hardware. This can be literally self-hosted in a lab, in our own server farm or data center, or on managed hardware. Indeed, many are turning to this approach in order to take control of ever-increasing monthly cloud hosting bills.
There are now more choices in how we do this than ever before. Back in the day, we had a single option: build a server. The only real constraint was how deep our pockets were. The bigger the server, the higher the cost; the higher the cost, the higher the specification and the heavier the workloads it could handle. Knowing the ‘sweet spot’ of cost vs. efficiency was always going to be a hard call.
Fast forward to today and we have several options, but to keep things simple, let’s concentrate on a typical VM-based approach that can be applied to both self-hosted and cloud-provided environments.
- Linux Containers - a way to partition a Linux server into one or more instances, each sharing the same kernel. Each instance can be thought of as having its own ‘root’ filesystem, securely separated from the others. Because the host operating system kernel is used by all containers, this approach is limited to Linux (a minimal sketch follows this list).
- Container orchestration - Docker, Swarm and Kubernetes being common ones, but there are others such as Nomad or Apache Mesos. Container orchestrators are often run within virtual machines, but they could run in Linux containers, or even on bare metal.
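To make the first item concrete, here is a minimal sketch that drives LXD’s `lxc` client from Python. It assumes LXD is installed and initialised on the host; the container name and image alias are purely illustrative, and Incus or plain LXC tooling would work along similar lines.

```python
# A minimal sketch, assuming LXD's `lxc` client is installed and initialised.
# The container name "demo" and the ubuntu:22.04 image alias are illustrative.
import subprocess

def lxc(*args: str) -> str:
    """Run an lxc subcommand and return its stdout."""
    result = subprocess.run(["lxc", *args], capture_output=True, text=True, check=True)
    return result.stdout

# Create a system container: its own root filesystem, sharing the host kernel.
lxc("launch", "ubuntu:22.04", "demo")

# The container has a full distribution userland of its own...
print(lxc("exec", "demo", "--", "cat", "/etc/os-release"))

# ...and shows up alongside any other instances on the host.
print(lxc("list"))
```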
The difficulty now can be knowing which to choose, so let’s compare LXC, Docker, and Kubernetes in terms of kernel isolation, orchestration, primary use case, complexity, and portability (a short sketch after the table puts the shared-kernel point into practice):
Feature | LXC (Linux Containers) | Docker | Kubernetes |
---|---|---|---|
Kernel Isolation | Shares the host OS kernel. Isolation is achieved through Linux kernel features like cgroups and namespaces, providing process and resource separation. Less isolated than VMs, but more isolated than chroot. | Shares the host OS kernel. Utilizes cgroups and namespaces for process, network, and file system isolation. Similar isolation level to LXC, but with a focus on application portability and simplified packaging. | Shares the host OS kernel (for the underlying containers). Kubernetes itself is an orchestration layer, not a container runtime. It manages containers that run on Docker, containerd, CRI-O, etc., which provide the kernel isolation. |
Orchestration | Primarily a standalone container technology. Orchestration typically involves manual scripting or external tools (e.g., Ansible, Chef, Puppet) for managing multiple LXC containers across hosts. Limited built-in orchestration features. | Docker Swarm (built-in orchestration tool) provides basic clustering, service discovery, load balancing, and scaling. For more complex, large-scale deployments, it’s often used with external orchestrators like Kubernetes. | Full-fledged container orchestration platform. Provides advanced features like automated deployment, scaling, self-healing, load balancing, service discovery, rolling updates, secrets management, and declarative configuration for large-scale containerized applications. |
Primary Use Case | Lightweight virtualization, often for system-level containers, or to run multiple Linux distributions on a single host. | Packaging and running individual applications in isolated environments, facilitating CI/CD pipelines, microservices architecture. | Managing and scaling large, distributed containerized applications in production environments, often across multiple hosts and private/public cloud providers. |
Complexity | Relatively simple to set up and manage individual containers. More complex for large-scale deployments without external tools. | Relatively easy to get started with individual containers. Docker Swarm adds some complexity but is generally simpler than Kubernetes. | Significant learning curve and operational overhead due to its vast feature set and distributed nature. Requires expertise in networking, storage, and distributed systems. |
Portability | Less portable across different Linux distributions or non-Linux systems due to direct reliance on host kernel features. | Highly portable due to self-contained images. Can run consistently across various Linux distributions and even on macOS/Windows (via VMs). | Excellent portability across various cloud providers and on-premise infrastructure once applications are containerized. |
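The shared-kernel point in the table can be demonstrated directly. The sketch below, assuming the Docker SDK for Python (`pip install docker`) and a running Docker daemon, runs `uname -r` inside a throwaway Alpine container and compares it with the host’s kernel release; the two match because the container has no kernel of its own.

```python
# A minimal sketch, assuming the Docker SDK for Python and a local Docker daemon.
import platform
import docker

client = docker.from_env()

# Run `uname -r` in a throwaway Alpine container and capture its output.
container_kernel = client.containers.run(
    "alpine:3.20", ["uname", "-r"], remove=True
).decode().strip()

# The container reports the same kernel release as the host, because
# containers share the host kernel rather than booting their own.
print("host kernel:     ", platform.release())
print("container kernel:", container_kernel)
```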
Some take a so-called middle ground, using a physical server or an existing VM to create ‘Linux Containers’, where each ‘container’ runs atop the host. This gives a more ‘operating system’-like experience than application containers, but with a fraction of the overhead demanded by full virtualization. This is where LXC & LXD can come in. In recent years there was controversy over the way that LXD was managed by Canonical, which led to its source being forked to become Incus.
Another alternative to LXD and Incus for Linux Containers is Proxmox, which is popular amongst self-hosters and home labbers alike. As both Incus and Proxmox can create fully virtualized machines, this could also work on a VM, but it is better suited to dedicated hardware, as sketched below.
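As a rough illustration of the previous two paragraphs, this sketch drives the `incus` CLI from Python to create both a system container and a full virtual machine on the same host. It assumes Incus is installed and initialised on capable hardware, that a VM variant of the image alias is available on the image server, and the instance names are made up; Proxmox offers the equivalent through its web UI and its own tooling.

```python
# A minimal sketch, assuming the `incus` CLI is installed and initialised
# on suitable hardware. Instance names and the image alias are illustrative.
import subprocess

def incus(*args: str) -> str:
    """Run an incus subcommand and return its stdout."""
    result = subprocess.run(["incus", *args], capture_output=True, text=True, check=True)
    return result.stdout

# A system container, sharing the host kernel...
incus("launch", "images:debian/12", "ct01")

# ...and a fully virtualized machine from the same image, via the --vm flag.
incus("launch", "images:debian/12", "vm01", "--vm")

# List instances on this host: containers and VMs side by side.
print(incus("list"))
```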
Often, self-hosters who use this kind of virtualization in their own home labs quickly find themselves growing their infrastructure. Before they know it, they have mini racks of servers in an office, garage, cupboard or under the stairs. It is here that the full power of hypervisor technologies such as Proxmox and Incus comes to the fore.
This can indeed be a path, sometimes the only path, to understanding hardware, delivery and cloud infrastructure now that we no longer have access to the servers we buy and run on big tech’s cloud infrastructure. Even governments find themselves hosting services atop servers that they will never see or touch, hidden deep in a data center somewhere and run solely by their cloud providers.
So if you want to learn this stuff, this can be an opportunity to do so, albeit in a more limited way than going to work in a ‘real’ data center.
There are now many options for architecting infrastructure atop our own hardware, or other people’s hardware that we ‘rent’ from them.
Typically, Kubernetes gets deployed to virtual machines, either for us by cloud providers, using their own managed hypervisors in the background, or by ourselves in self-hosted environments using the likes of VMware, Proxmox or Microsoft Hyper-V.
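However the cluster is stood up on those VMs, it looks much the same from the client side. The sketch below, assuming the official Kubernetes Python client (`pip install kubernetes`) and a working kubeconfig, simply lists the cluster’s nodes, which in this kind of setup are the underlying virtual machines.

```python
# A minimal sketch, assuming the Kubernetes Python client and a kubeconfig
# that points at an existing cluster (managed or self-hosted).
from kubernetes import client, config

config.load_kube_config()   # reads ~/.kube/config by default
v1 = client.CoreV1Api()

# Each node here is typically one of the virtual machines described above.
for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```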