Getting to Grips with Building Your Own Containers
Published on: 2026-02-06 Author: Jon Brookes

Building our own Containers, but WHY?
So, you've tried to get to grips with building your own containers, but you are stuck in tutorial hell and nothing makes sense.
Yet it is finally time to ship your app, and you need to understand and own the process. Building your own containers empowers you to ship to your own cloud, or to someone else's that needs your apps to be containerised.
Or perhaps you're one of the folks who say "I'm done with Docker" and rely heavily on other people's SaaS or BaaS to package up your apps for you, but the bills are coming in. And they are unexpected!
I hope to break down my approach to building containers and pass on my experiences so you don't have to learn the hard way.
Where I'm coming from
Back in the 2010s, containers went by the name 'Docker containers', which they are still known by: as with search, the company that was dominant in the field got adopted as the verb. I was fascinated and eager to adopt this then-new technology. Containers were not a new thing, with Linux Containers (LXC) and other approaches around well before this, but Docker containers promised reliability, repeatability and an end to 'it works on my laptop' arguments.
I learnt and used Unix from the 90s. Linux has since become almost a replacement for Unix in people's minds, but I think that having at least more than a passing interest in Unix, and in how to install and run it, must be the foundation of getting to grips with containers. If you don't have this interest, working with containers is likely not for you.
The good news is that you don't need to have mastered Linux From Scratch, Gentoo, Arch or NixOS to understand containers, though these things can help. Trying to learn everything all at once, however, can lead to overload and burnout, so I'd encourage taking it easy to get started. Just focus on core principles.
All of the code in this article can be seen, downloaded and run from a development branch on Codeberg, where I'm coding in public a lightweight, headless Content Management System (CMS).
If you're unable to find my latest Dockerfile, check for a development branch; it will likely be part of the current dev branch, if not main, as the project reaches a stable state.
The Container Entry Point
I think the entry point is key to understanding containers.
ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/docker-entrypoint.sh"]
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisord.conf"]
I've sneaked in CMD after ENTRYPOINT so as to introduce it too, and to explain what is happening.
The entry point is like a kind of hook. It is called when the container starts up, and it is often used, as it is here, to run the starting process for the container.
In Unix, there is a single process that is started first and from which all other processes run. You could think of it as 'the one process to rule them all'. Its responsibility is to be the first, marshalling process by which the entire operating system boots. In this example, we use tini, a lightweight init process typically used in containers to do this as efficiently as possible, reaping zombie processes and forwarding signals to its child.
Tini runs an entrypoint.sh script in which we can include procedures and commands that must take place on application start. In this use case, should this step be missed, there would need to be manual intervention to:
run migrations
clear cache
apply configurations
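To make this concrete, here is a minimal sketch of what such a docker-entrypoint.sh might look like for a Laravel app. The commands and paths are illustrative assumptions, not the exact script from my repo:

```shell
#!/bin/sh
set -e  # abort start-up if any step fails

# One-time tasks that must happen on every application start
php /var/www/artisan migrate --force    # run migrations
php /var/www/artisan cache:clear        # clear cache
php /var/www/artisan config:cache       # apply configurations

# Hand off to whatever CMD supplied (supervisord, in our case).
# exec replaces this shell, so signals go straight to the real process.
exec "$@"
```
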
CMD provides arguments for ENTRYPOINT to run.
If we did not specify an entry point, we would have to at least specify CMD. CMD would then become the command to run on container start-up, in effect the default entry point.
As you look around different containers, sometimes you will see just a CMD and nothing else, and that is fine; it simply carries less functionality. In other cases you will see both.
There is quite a lot going on, then, in this example of just two lines. The CMD line also calls supervisord with a configuration file. This is another layer of configuration and, it is fair to say, complexity.
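The glue that makes "CMD provides arguments for ENTRYPOINT" work is usually the shell idiom exec "$@": the entrypoint script finishes by replacing itself with whatever arguments it was given. A tiny runnable sketch of that hand-off (the echo command stands in for the CMD that Docker would pass):

```shell
#!/bin/sh
# Docker hands the CMD array to the ENTRYPOINT as its arguments ("$@").
# A real entrypoint does its start-up work, then execs them:
run_entrypoint() {
  exec "$@"  # replace the current shell with the given command
}

# The $( ) subshell keeps exec from replacing this outer shell
result=$(run_entrypoint echo "hello from CMD")
echo "$result"   # prints: hello from CMD
```
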
Supervisor is a tool that has been around for a while and is used to keep processes running in a sort of 'stack' or 'configuration'. The configuration file for this 'stack' looks something like this:
[supervisord]
nodaemon=true
loglevel=info
[program:php-fpm]
command=/usr/local/sbin/php-fpm -F
autostart=true
autorestart=true
startretries=3
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autostart=true
autorestart=true
startretries=3
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:queue-worker]
command=/usr/local/bin/php /var/www/artisan queue:work --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
startretries=3
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
The above configuration runs three programs for us as 'services'. You could think of supervisord as a kind of 'service manager': it starts each program automatically and will restart any of them should they fail.
Logs are echoed to standard out. This is a convention in container land and accepted practice, so that other systems can easily gather and collate logs from all of your containers in one place.
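The payoff is that the container runtime can collect everything through one channel. For example, with Docker's own CLI (the container name here is just an illustration):

```shell
# nginx, php-fpm and the queue worker all write to stdout/stderr,
# so one command follows every program's output at once:
docker logs -f my-app-container
```
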
The Elephant in the Room
Ah, but I hear some veterans of the dark arts of containers say: you have committed the mortal sin of not having one container per service. This should have been broken up into at least three containers!
And you would be right to say this, so long as you are Google or of similar size. But I am not. As a solo programmer and entrepreneur, I am trying to keep things stupid simple. If I can run one container for my simple app, which needs two things besides Laravel to survive, I can keep my operating costs down with a single, multi-function container. I can get more out of my infrastructure.
Within a single container, each process has access to the other running processes. The web server, for example, has full access to the PHP Laravel application, its file system, environment variables and even its process IDs.
If each program were separated into three containers, each would need extra cloud resources. Shared file systems, secrets and likely queue mechanisms, to mention a few, can easily become requirements too.
All these additional options make sense when it is time to scale, after you have your first 50 customers. All of these extra tools and patterns can easily be applied later. For now at least, I am happy with a 'fat container'.
When it is time to, I can split each program into separate containers, container sets, deployments and so on, as we scale to the size of Google or Netflix. Meantime, I can save some cash, keep my team down to one or two engineers to run a smaller stack, pass on lower operating costs to clients and save the planet.
The Base Image
Container base images come in many different shapes and sizes. It is possible to create containers from 'bare images' with little more than a file system. It is not uncommon, however, to use more mainstream images based on typical Linux distributions such as Debian or RedHat. The following FROM statement at the head of a Dockerfile uses an Alpine OS image:
FROM php:8.4-fpm-alpine3.23
For now, this is the third concept in my summary of core container concepts.
This PHP Alpine image has tooling for PHP applications. I used Alpine as it tends to use less storage space than other OSs such as Ubuntu, Debian, Rocky or Fedora. Alpine has a different package management system called apk. This differs from the more commonly known apt or dnf used by Debian or Fedora, but there is nothing to fear, as you can resolve the syntax differences in a few minutes with a competent AI.
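As a quick comparison, installing a package (curl here, purely as an example) looks like this on each family:

```shell
# Debian or Ubuntu based images:
apt-get update && apt-get install -y curl

# Fedora or Rocky based images:
dnf install -y curl

# Alpine images; --no-cache avoids storing the package index,
# which helps keep image layers small:
apk add --no-cache curl
```
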
Building applications with an image that is as close to your application requirements as possible means less work for you. I'm building a Laravel app, so with PHP-fpm I have all of the main tools I need.
Summing Up
At the time of writing, the Dockerfile used for my Laravel application is built to be 'fat': multiple programs run in the one container, managed by a supervisor.
This can, and will be changed as time and use changes but I hope you can see how a single container can do a lot and deploying a typical web application can be relatively simple and repeatable.
Since supervisord is a Python application, alternatives could be used to reduce image size and increase efficiency. A Golang port of supervisord could be used in its place. Another alternative is s6-overlay, a suite of C applications optimised for containers.
The image created by the Dockerfile in my share-lt application is quite big, currently over 900MB, and I have been able to reduce this by:
Optimising layers within the Dockerfile (by rationalising RUN statements)
Replacing Python applications like supervisord with compiled options
Using multi-stage builds in the Dockerfile, to reduce the amount of build tooling in the final image
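As an illustration of that last point, a multi-stage build uses one stage with all the build tooling and copies only the results into the slim final image. This is a hedged sketch with assumed stage names and paths, not my actual Dockerfile:

```dockerfile
# Stage 1: a 'build' image with Composer and build tooling
FROM composer:2 AS build
WORKDIR /app
COPY . .
RUN composer install --no-dev --optimize-autoloader

# Stage 2: the runtime image; only the built app is copied across,
# so Composer and the build tools never reach the final image
FROM php:8.4-fpm-alpine3.23
WORKDIR /var/www
COPY --from=build /app /var/www
```
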
And you will likely see these and other changes depending on when you read this article. Already the share-lt application is in a more advanced state than only a few weeks ago and I continue to add features and tune as I go along.
As with my previous article, this one too is written using the same application, now running in the container that we discussed above.
Let Me Know
So what about you? Have you started your path to container nirvana, or are you undecided? Are you already a master of all things container, with a few tips and tricks you can share?
Let me know: contact me on LinkedIn or my other socials. I look forward to hearing how you get on.
Photo by Jan van der Wolf: https://www.pexels.com/photo/rusted-ring-on-quay-wall-by-the-sea-30222021/