Minimal Viable Kubernetes

Jon Brookes

2025-07-22

Photo by Ann H: https://www.pexels.com/photo/wooden-blocks-spelling-countdown-to-launch-33090486/

Introducing Minimal Viable Kubernetes

Minimal Viable Kubernetes (MVK) is a streamlined, self-contained Kubernetes implementation designed for maximum portability and ease of management. It’s engineered to run efficiently on a single host, making it simple enough to maintain without the need for a dedicated platform team, while still mirroring the structural patterns of a production environment.

MVK isn’t just for small-scale use; it’s built to scale seamlessly to three or more hosts, transforming into a robust reference architecture for deploying business-critical workloads in real production settings.

A key focus of MVK is its suitability for self-hosted environments. It aims to fulfill dependencies typically provided by cloud services directly within MVK itself, significantly boosting its portability across various infrastructure landscapes. This hyper-converged approach means that compute, storage, and networking resources are integrated, simplifying deployment and management.


infctl: Your Gateway to Minimal Viable Kubernetes

infctl is the command-line utility at the heart of Minimal Viable Kubernetes (MVK), designed for rapid and reliable deployment of your MVK stack. We built infctl with a few core principles in mind:

  • Simplicity and Transparency: infctl is intentionally small and straightforward. Its source code is concise enough to be reviewed and understood in a single session. Crucially, it doesn’t hide the underlying Kubernetes manifests, scripts, or kubectl commands. This transparency allows curious minds to easily learn from and extend MVK, providing a clear window into its operations.
  • Empowering User Understanding: Ultimately, infctl aims to get out of your way. Once you’ve gained a deep understanding of MVK, you can easily replace infctl with your own preferred tools or methods. It’s a stepping stone, not a permanent dependency.
  • Flexible Workflow Orchestration: Beyond initial deployment, infctl is designed for extension. It can serve as a lightweight pipeline for orchestrating workflows in various environments—whether that’s your local development setup, a cloud-based CI/CD pipeline, or a remote host. Its flexibility and minimal footprint make it a valuable tool in any of these scenarios.

Your Path to MVK

The MVK documentation is designed to be your comprehensive guide to Minimal Viable Kubernetes. It’s an instruction-led resource that covers everything from introduction and installation to getting started and configuring infctl.

This brand-new documentation emphasizes microlearning, beginning with practical steps for local development environments. You’ll learn how to create a local k3s environment using k3d and execute initial infctl pipeline runs.
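The local-environment step above can be sketched with k3d, which runs k3s inside Docker. The cluster name mvk-dev and the port mapping below are illustrative choices for this sketch, not values mandated by MVK or infctl:

```shell
# Create a local k3s cluster in Docker: one server node and two agents.
# Port 8080 on the host is mapped to the cluster's built-in load balancer
# so ingress can be exercised locally.
k3d cluster create mvk-dev \
  --servers 1 \
  --agents 2 \
  --port "8080:80@loadbalancer"

# Point kubectl at the new cluster and confirm the nodes are ready.
kubectl config use-context k3d-mvk-dev
kubectl get nodes

# Tear the cluster down when finished.
k3d cluster delete mvk-dev
```

A cluster like this is disposable by design, which is what makes it a good target for early infctl pipeline runs.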

Our goal is to take you on a complete journey from zero to hero with MVK. The documentation will guide you through deploying real workloads using self-hosted ingress, storage, and load balancing. Importantly, we focus on solutions that don’t abstract these critical components to third-party cloud providers, allowing you to build a truly hyper-converged Kubernetes cluster that you fully control.

MVK Components

MVK has to take an opinionated approach in order to provide a clear path to achieving its goals of being minimal and viable.

K3s and K3d are chosen as the tools to deploy MVK because they are well established, widely known in the industry, and offer greater efficiency than a traditional kubeadm-based distribution of the K8s binaries.

The intention is that users can extend and enhance their own MVK to use any K8s distribution, following the same principles as MVK.

A future plan for MVK is to optionally deploy to Talos.

To understand our thinking, here is a comparison of the main players:

| Feature | K3s (Lightweight Kubernetes) | Talos (Immutable Kubernetes OS) | Docker Swarm (Docker-Native Orchestration) | Kubeadm (Kubernetes Bootstrap Tool) |
|---|---|---|---|---|
| Footprint | Extremely Small: Single binary (<100MB), low RAM/CPU. Ideal for edge, IoT, small servers. | Minimal OS: Purpose-built, immutable OS for K8s. Very small kernel and userland, reducing attack surface. | Lightweight: Built into Docker Engine. Relatively low overhead compared to full Kubernetes. | Base Kubernetes: Installs core Kubernetes components. Requires a full Linux OS and manages no underlying OS itself. |
| Orchestration Model | Kubernetes API: Full Kubernetes API compatibility with removed non-essential features. | Kubernetes API: Full Kubernetes API compatibility. OS is managed through the Kubernetes API itself. | Docker Swarm API: Simplified, Docker-native orchestration. Not Kubernetes API compatible. | Kubernetes API: Installs and configures a standard Kubernetes cluster. |
| Upgrade Behaviour | Simplified: Can be automated with tools like system-upgrade-controller or manual binary replacement. Designed for ease. | Atomic & Immutable: OS and Kubernetes upgrades are image-based and transactional, with A/B partitioning for easy rollbacks. API-driven. | Rolling Updates: Supports rolling updates for services and nodes. Less granular control over cluster state than K8s. | Manual & Component-Based: Requires manual steps for each component (kubeadm upgrade, then kubelet and CNI updates). More involved. |
| HA Support | Robust: Supports external SQL backends (PostgreSQL, MySQL, Etcd) for highly available control planes. | Built-in & Opinionated: Designed for HA from the ground up, leveraging Etcd and an immutable OS for reliable cluster state. | Built-in: Supports multiple manager nodes using Raft consensus for HA. Simpler to set up than K8s HA. | Manual Configuration: Requires careful setup of external load balancers, multiple control plane nodes, and Etcd cluster for true HA. |
| Underlying OS Management | Requires OS: Runs on standard Linux distributions (Ubuntu, CentOS, etc.). User is responsible for OS updates/maintenance. | Integrated OS: Talos is the operating system, purpose-built for Kubernetes, managing its own updates and configuration. | Requires OS: Runs on standard Linux distributions. User is responsible for OS updates/maintenance. | Requires OS: Runs on standard Linux distributions. User is responsible for OS updates/maintenance. |
| Ideal Use Cases | Edge computing, IoT, local development, CI/CD, resource-constrained environments, small-to-medium production. | Immutable infrastructure, secure edge deployments, bare-metal Kubernetes, opinionated production environments. | Simple container orchestration, rapid deployment for Docker users, local development, small-scale deployments. | Production-grade Kubernetes clusters, highly customized deployments, deep understanding of Kubernetes internals, large-scale infrastructure. |
| Learning Curve | Moderate (easier than full K8s, still K8s concepts) | Moderate to High (due to immutable nature and API-driven OS management) | Low (familiar for Docker users) | High (requires strong understanding of K8s components and manual configuration) |
| Ecosystem & Extensibility | Good (standard K8s APIs; some stripped-down features mean less immediate plug-and-play with advanced cloud-specific K8s features). | Good (standard K8s APIs, but its immutable nature might require different approaches for some extensions). | Limited (Docker-specific tools, less broad ecosystem than Kubernetes). | Excellent (full Kubernetes ecosystem, all standard tools and extensions are compatible). |

Picking through this decision matrix reflects our path of several years of comparison, experimentation and practice.

Our path to MVK

Using infrastructures that may be simpler to maintain but are based on a single-server strategy inevitably leads to:

  • a single point of failure: if that one server has any critical issue, all service is lost until the whole server is recovered
  • vertical scaling being the only immediate option as workloads increase or decrease
  • downtime whenever failures occur, or whenever the main server is taken down and restarted to change storage or compute

This is why many opt for cluster technologies that distribute workloads across multiple hosts in an ‘active-active’ mode, where every node bears some of the load. In the event of a failure of one or more nodes in the cluster, work is automatically migrated to the remaining healthy nodes.

The problem this poses, when trying to keep costs down and efficiency high, is that most modern cluster technologies require at least three ‘nodes’ (or servers) for this load sharing and reallocation to take place.

Over time the cost of running so-called ‘micro VMs’, and even physical hosts, has fallen. For some use cases this is now a more realistic proposition than before, so long as the right kind of clustering software is used. A degree of high availability and horizontal scaling is now within reach of small businesses and enterprises alike at relatively low cost, and most likely cheaper than the so-called managed Kubernetes offered by cloud providers.

If sufficient workloads are deployed, it can be cheaper for an organisation to run its software on self-hosted clusters than to run each application on the PaaS or SaaS services that many commonly flock to.

Kubernetes has become the dominant standard in this space, but when considering K8s, Docker Swarm should also be evaluated, if only to see whether it could be a suitable alternative. Some large infrastructures run in swarm mode. Still, some say that Docker Swarm mode is in decline in its growth, use and support.

The same could not be said of K8s; rather the opposite. Despite the complexity and learning curve associated with Kubernetes, some opt for it even for smaller deployments due to its wide usage, rapid ongoing development, and the amount of support, both free and paid, available for it.

Even though three nodes are required for high availability, another option with both K8s and Swarm is to run either on a single instance. This may seem counter-intuitive at first, but it is common practice and viable for development and some edge applications.

An example of this is kubesolo. Whilst lacking high availability, continuous zero-downtime deployment is still possible, offering smoother workflows at a reduced cost of ownership, albeit with caveats.

A truly lightweight Kubernetes is ultimately achieved, I believe, with Talos by Sidero.

This is not one I could wholeheartedly recommend right now for production scenarios where there are customers and service level agreements (SLAs) in place, because I would only recommend things I have fully deployed in production. However, I hope for that to change very soon.

k3s from Rancher is still a good option for a minimal viable Kubernetes, and one with which I have (much) more flying time. It can be run in single-host mode, as previously mentioned, but it also has a smaller binary footprint and lower hardware requirements than traditional Kubernetes. It can be deployed with etcd rather than its default single SQLite-based configuration, which is, again, a single point of failure.
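As a sketch of how that looks in practice, k3s can be started with embedded etcd instead of SQLite using the documented install script; the shared token and hostnames below are placeholders, not values prescribed by MVK:

```shell
# On the first server: initialise a new cluster with embedded etcd
# instead of the default SQLite datastore.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server --cluster-init

# On the second and third servers: join the existing cluster so etcd
# reaches a three-member quorum and the control plane becomes HA.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server \
  --server https://<first-server>:6443
```

Three server nodes are the minimum for etcd quorum, which is exactly the three-host shape MVK aims to scale into.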

That said, over time it would be logical to build towards using Talos or something similar, as this makes total sense: it is itself a Linux distribution dedicated to K8s. Talos reduces the binary footprint at the operating-system level significantly compared with Kubernetes, k0s, k3s and other K8s distributions that are inevitably installed on top of Linux. Every time we run Kubernetes on a general-purpose Linux, there is a less well understood impact on security: typically, distributions like Ubuntu ship thousands of binaries, whereas Talos has fewer than 100.
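For readers who want an early feel for Talos without committing hardware, Sidero's talosctl can stand up a throwaway cluster in Docker. This is a sketch of the documented quickstart, not an MVK deployment path:

```shell
# Spin up a local Talos-based Kubernetes cluster inside Docker.
talosctl cluster create

# Fetch a kubeconfig for the new cluster and list its nodes.
talosctl kubeconfig
kubectl get nodes

# Destroy the local cluster when finished.
talosctl cluster destroy
```

Because the OS itself is managed through the Talos API, there is no SSH and no package manager to maintain, which is the source of the reduced binary count described above.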

Join us on our path to MVK at Minimal Viable Kubernetes. We look forward to meeting you there.