Hello, fellow Linux users!
My question is in the title: What is a good approach to deploy docker images on a Raspberry Pi and run them?
To give you more context: The Raspberry Pi already runs an Apache server for Let's Encrypt and as a reverse proxy, and my home-grown server should be deployed as a docker image.
To my understanding, one way to achieve this would be to push all sources over to the Raspberry Pi, build the docker image on the Raspberry Pi, give the image a ‘latest’ tag, and use systemd with Docker or Podman to run the image.
My questions:
- Has anyone here had a similar problem but used a different approach to achieve this?
- Has anyone here automated this whole pipeline, so that, in a perfect world, I just push updated sources to the Raspberry Pi, the new docker image gets built, and Docker/Podman automatically picks up the new image?
- I would also be happy to be pointed at any available resources (websites/books) which explain how to do this.
At the moment I am using Raspbian 12 on a Raspberry Pi Zero 2 W, and the whole setup works with home-grown servers which are simply deployed as binaries and executed via systemd. My Docker knowledge is mostly from a developer perspective, so I know nearly nothing about deploying Docker on a production machine. (Which means, if there is a super obvious way to do this, I might not even be aware it exists.)
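Roughly, the kind of systemd unit I have in mind for the Docker case would look like this (myserver is a placeholder name; port and paths are made up):

[Unit]
Description=Home-grown server in Docker
Requires=docker.service
After=docker.service

[Service]
# Remove any leftover container, then run the image in the
# foreground so systemd can track the process
ExecStartPre=-/usr/bin/docker rm -f myserver
ExecStart=/usr/bin/docker run --name myserver -p 8080:8080 myserver:latest
ExecStop=/usr/bin/docker stop myserver
Restart=on-failure

[Install]
WantedBy=multi-user.target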
I wouldn’t build anything significant on the RPi Zero and instead would try to build elsewhere, namely on a more powerful machine with the same architecture, or cross-build as others suggested.
That being said, what’s interesting IMHO about container image building is that you can rely on layers. So… my advice would be to find an existing image that supports your architecture and build your own layers on top of it. This way you only build on the RPi what is truly not available elsewhere.
Perhaps a compose file on the Raspberry Pi. You can have it build the container as part of the compose file, then you can just start your services up with
docker-compose up -d --build
. The only thing you would need to do is update the git repos and rerun the up command. You could also script it to pull the git repos before building.
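For example, a minimal compose file could look like this (service name, build path, and port are placeholders):

services:
  myserver:
    build: ./myserver        # directory containing the Dockerfile
    image: myserver:latest
    ports:
      - "8080:8080"
    restart: unless-stopped

With that in place, the up command above rebuilds the image from the local sources and restarts the service in one go.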
See if they have a git repository, clone it, and build the image yourself.
You can save images to tarballs and load them into your local docker daemon if you want.
Not sure how podman manages local images.
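For Docker the round trip is docker save/docker load; a rough sketch (image name and host are placeholders):

# On the build machine:
docker save myserver:latest | gzip > myserver.tar.gz
scp myserver.tar.gz pi@raspberrypi.local:

# On the Raspberry Pi:
gunzip -c myserver.tar.gz | docker load
# Podman ships matching save/load commands, for what it's worth:
# gunzip -c myserver.tar.gz | podman load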
Idea:
- Set up inotifywait on your sources directory and rsync to keep directories in sync, and on change, build and run the latest image (rough sketch after this list)
- Set up inotifywait on your image tarball directory/file and on change import and run the latest
- Mount your source directory to a docker server image that supports hot reloading.
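A rough sketch of the first idea, assuming inotify-tools is installed (paths and image name are placeholders):

#!/bin/sh
# Rebuild and restart whenever the synced source directory changes
while inotifywait -r -e modify,create,delete /srv/myserver/src; do
    docker build -t myserver:latest /srv/myserver/src
    docker rm -f myserver 2>/dev/null
    docker run -d --name myserver -p 8080:8080 myserver:latest
done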
Wow, thanks a lot! Your answer is exactly what I hoped for when posting on Lemmy: I didn’t even know the docker-tarball thingy is a thing, it fits my problem space very nicely in combination with cross-building, and it seems as easy as it can be. Excellent! :-)
In case needed, may want to also look into multi-arch images so it also supports the right ARM build for the Pi: https://www.docker.com/blog/multi-arch-images/
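Roughly, a multi-arch build with buildx looks like this (registry and image name are placeholders; --push implies a registry to push to):

# One-time setup: a builder that can target other platforms
docker buildx create --use

# Build for both architectures and push the combined manifest
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myserver:latest \
  --push .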
one way to achieve this would be to push all sources over to the Raspberry Pi, build the docker image on the Raspberry Pi, give the image a ‘latest’ tag, and use systemd with Docker or Podman to run the image.
I do it almost exactly like this, except instead of systemd I just start containers with
--restart unless-stopped
. I’m also looking at improving the current setup, but haven’t really made any progress. I’m thinking of setting up a GitOps-type workflow, e.g. as described here: https://medium.com/linux-shots/gitops-on-docker-using-portainer-8712ba7d38c9 (disclaimer: I haven’t tried these instructions myself)
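i.e. something like this (name, port, and image are placeholders):

docker run -d --name myserver --restart unless-stopped -p 8080:8080 myserver:latest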
Thanks for the link!
You just…build it.
If you’re looking for a way to run it like a system service, you could make a systemd unit I suppose, but it’s little effort to make a compose config, or a quadlet if you’re running podman.
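For reference, a quadlet is just a small unit-style file that Podman (4.4+) turns into a systemd service, roughly like this (image name and port are placeholders):

# ~/.config/containers/systemd/myserver.container
[Unit]
Description=Home-grown server

[Container]
Image=localhost/myserver:latest
PublishPort=8080:8080

[Service]
Restart=always

[Install]
WantedBy=default.target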
AFAIK Podman only supports quadlets from version 4.4 on, and I am on version 4.3… so technically you are right, and it would work for me at the end of the year or next year, when Raspbian gets an update to Trixie. I am mostly interested in how people have achieved and automated this, and whether there are different/better approaches than the ones I outlined above.
Compose is stateful, so if you start a container and daemonize it, it will start again on restart of the machine until you stop it (by default), so that’s one way. If you want to hook into systemd, you can go that route, but it’s not going to net you much just trusting one mechanism over another.
Why not run the image registry on the Raspberry Pi itself? Then you can do your builds on your regular machine and push them to your Raspberry Pi when done.
Would it be a problem that my development machine is AMD64 and the Pi is AArch64?
You can cross compile images with docker buildx.
Thanks, didn’t know about buildx, but it looks exactly like what I need to solve cross compilation.
So you’re building your own images? I’m sure there’s a way to build, transfer, and run an image. But you might just set up a local registry. Or just throw them up on a free registry like GitHub.
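The local registry is itself just a container; a rough sketch (hostnames and image names are placeholders, and a plain-HTTP registry must be marked as insecure on the client side, e.g. via insecure-registries in /etc/docker/daemon.json or the equivalent buildx builder config):

# On the Raspberry Pi:
docker run -d -p 5000:5000 --restart unless-stopped --name registry registry:2

# On the development machine, cross-build and push:
docker buildx build --platform linux/arm64 \
  -t raspberrypi.local:5000/myserver:latest --push .

# Back on the Pi:
docker pull localhost:5000/myserver:latest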
Afaik, systemd has nothing to do with docker, except to start the docker daemon.
I think I have done almost exactly what you want to do, I use gitlab CI to build and deploy my application:
https://github.com/cameroncros/discordshim_rs/blob/main/.gitlab-ci.yml
GitLab is relatively heavy, so I don’t know how it will go on a Raspberry Pi (I run it on an Intel NUC). You can run GitLab on a separate machine, and the CI runner on your Pi.
Thanks for the idea! I try to keep as few ‘moving’ parts as possible, so hosting GitLab is something I would want to avoid if possible. The Raspberry Pi is supposed to be the sole hardware for the whole deployment of the project.
It’s definitely not a lightweight solution. Is the Pi dedicated to the application? If so, is it even worth involving docker?
You are asking exactly the right questions!
I have an Ansible playbook to provision the Pi (or any other Debian/Ubuntu machine) with everything needed to run a web application, as long as the web application is a binary or uses one of the interpreters on the machine. (Well, I also have playbooks to compile Python/Ruby from source or get an Adoptium JDK repository etc.)
Right now I am flirting with the idea of using Elixir for my next web application, and it just seems unsustainable for me to now add Erlang/OTP and Elixir to my list of playbooks to compile from source.
The Debian repositories have quite old versions of Erlang/OTP/Elixir and I doubt there are enough users to keep security fixes/patches up to date.
Combined with the list of technologies I already use, it seems to reduce complexity if I use Docker containers as deployment units, and it should be future-proof for at least the next decade.
Writing about it, another solution might simply be to have something like Distrobox on the Pi and use something like the latest Alpine.
Up-to-date runtimes definitely make sense; that is where docker shines.
Gitlab is obviously a bit overkill, but maybe you could just create some systemd timers and some scripts to auto-pull, build and deploy?
The script would boil down to:
cd src
git pull
docker compose down
docker compose up --build
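and the timer side could be a pair of small units, assuming the script above is saved as /usr/local/bin/deploy.sh (all names are placeholders):

# /etc/systemd/system/deploy.service
[Unit]
Description=Pull sources and redeploy the container

[Service]
Type=oneshot
ExecStart=/usr/local/bin/deploy.sh

# /etc/systemd/system/deploy.timer
[Unit]
Description=Run the deploy script periodically

[Timer]
OnBootSec=5min
OnUnitActiveSec=15min

[Install]
WantedBy=timers.target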
You’re welcome to steal whatever you can from the repo I linked before.
Thanks a lot!
Yeah, if I go down that road, I’ll probably just add a git commit hook on the repo for the Raspberry Pi, so that I’ll have a ‘push to deploy’ workflow!
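Roughly, a server-side post-receive hook in a bare repository on the Pi could look like this (paths and branch are placeholders):

#!/bin/sh
# hooks/post-receive: check out the pushed sources and redeploy
GIT_WORK_TREE=/srv/myserver/src git checkout -f main
cd /srv/myserver/src || exit 1
docker compose up -d --build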
systemd has nothing to do with docker, except to start the docker daemon.
I think what OP was describing was writing systemd unit files that would start/stop docker containers.
Exactly, this is what I am doing right now (just for binaries, not execute Docker or Podman).
Yeah, probably, but that’s not very common, is it? Normally you’d just let the docker daemon handle the start/stop etc.?
I actually have no sense how common that is. My experience is with very small non-production docker environments, and with Kubernetes, but I have no idea what people typically do in between.
It’s common with rootless docker/podman. Something needs to start up the services, and you’re not using a root enabled docker/podman socket, so systemd it is.
Compose files would probably make more sense
Well, someone needs to run
docker compose up
, right? (or you set restart policy, but that’s not always possible)
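In the rootless case that ‘someone’ can be a small systemd user unit, roughly like this (paths are placeholders, and docker stands in for whatever engine you use); enable lingering with loginctl enable-linger so it starts at boot without a login:

# ~/.config/systemd/user/myserver.service
[Unit]
Description=Bring up the compose stack

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=%h/myserver
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=default.target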