Hello, fellow Linux users!

My question is in the title: What is a good approach to deploying Docker images on a Raspberry Pi and running them?

To give you more context: the Raspberry Pi already runs an Apache server for Let's Encrypt and as a reverse proxy, and my home-grown server should be deployed as a Docker image.

To my understanding, one way to achieve this would be to push all sources over to the Raspberry Pi, build the Docker image on the Raspberry Pi, give the image a ‘latest’ tag, and use systemd with Docker or Podman to execute the image.

My questions:

  • Has anyone here had a similar problem but used a different approach to achieve this?
  • Has anyone here automated this whole pipeline, so that in a perfect world I just push updated sources to the Raspberry Pi, the new Docker image gets built, and Docker/Podman automatically picks up the new image?
  • I would also be happy to be pointed at any available resources (websites/books) which explain how to do this.

At the moment I am using Raspbian 12 on a Raspberry Pi Zero 2 W, and the whole setup works with home-grown servers which are simply deployed as binaries and executed via systemd. My Docker knowledge is mostly from a developer perspective, so I know nearly nothing about deploying Docker on a production machine. (Which means, if there is a super obvious way to do this, I might not even be aware that it exists.)
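The "systemd runs the image" approach outlined above could be sketched as a unit file. This is a minimal, untested sketch; the service name, image tag `myserver:latest`, and port are placeholders, and it assumes Docker as the runtime:

```ini
# /etc/systemd/system/myserver.service -- hypothetical unit and image names
[Unit]
Description=Home-grown server container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container from a previous run, then run the
# locally built 'latest' image in the foreground so systemd tracks it.
ExecStartPre=-/usr/bin/docker rm -f myserver
ExecStart=/usr/bin/docker run --rm --name myserver -p 8080:8080 myserver:latest
ExecStop=/usr/bin/docker stop myserver
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After building and tagging the image on the Pi, `systemctl enable --now myserver` would start it and keep it across reboots.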

  • utopiah@lemmy.ml · 10 hours ago

    I wouldn’t build anything significant on the RPi Zero and instead would try to build elsewhere, namely on a more powerful machine with the same architecture, or cross-build as others suggested.

    That being said, what’s interesting IMHO with container image building is that you can rely on layers. So… my advice would be to find the existing image supported by your architecture then rely on it to layer on top of it. This way you only build on the RPi what is truly not available elsewhere.
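The layering idea could look roughly like this; a sketch of a Dockerfile where everything except the last few layers is pulled pre-built (the base image is real and multi-arch, `myserver` is a placeholder for whatever genuinely has to be built for the Pi):

```dockerfile
# Base layer: an existing multi-arch image (published for arm64 and armhf),
# so it is downloaded rather than built on the RPi Zero.
FROM debian:12-slim

# Only the truly custom part is added on top, keeping on-device
# build work minimal. 'myserver' is a hypothetical binary name.
COPY myserver /usr/local/bin/myserver
EXPOSE 8080
CMD ["/usr/local/bin/myserver"]
```

Because layers are cached, rebuilding after a source change only redoes the `COPY` layer onward.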

  • Voytrekk@sopuli.xyz · 19 hours ago

Perhaps a compose file on the Raspberry Pi. You can have it build the container as part of the compose file, then you can just start your services up with docker-compose up -d --build. The only thing you would need to do is update the git repos and rerun the up command. You could also script it to pull the git repos before building.
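The compose-with-build setup could look roughly like this (service name, build path, and port are placeholders):

```yaml
# docker-compose.yml -- minimal sketch
services:
  myserver:
    build: ./myserver        # directory containing the Dockerfile, e.g. the cloned repo
    restart: unless-stopped  # come back up automatically after a reboot
    ports:
      - "8080:8080"
```

Then the whole redeploy is `git pull && docker-compose up -d --build`.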

  • Xanza@lemm.ee · 20 hours ago

    What is a good approach to deploy docker images on a Raspberry Pi and run them?

    See if they have a git repository and just clone then build the image yourself.

  • assaultpotato@sh.itjust.works · edited · 23 hours ago

    You can export images to tarballs and import them to your local docker daemon if you want.

    Not sure how podman manages local images.


    Idea:

    1. Set up inotifywait on your sources directory plus rsync to keep directories in sync, and on change, build and run the latest image
    2. Set up inotifywait on your image tarball directory/file, and on change, import and run the latest image
    3. Mount your source directory into a Docker server image that supports hot reloading.
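For option 2, the export/import round trip might look like this; an untested sketch where the hostname `pi.local`, paths, and image name are placeholders, and it assumes a 64-bit OS on the Pi (hence `linux/arm64`):

```shell
# On the development machine: cross-build for the Pi and export a tarball
docker buildx build --platform linux/arm64 -t myserver:latest --load .
docker save myserver:latest -o myserver.tar

# Copy the tarball over ('pi.local' is a placeholder hostname)
scp myserver.tar pi.local:/home/pi/images/

# On the Pi: import and (re)start; podman can load the same tarball
docker load -i /home/pi/images/myserver.tar
docker rm -f myserver 2>/dev/null || true
docker run -d --name myserver -p 8080:8080 myserver:latest
```

An inotifywait loop watching `/home/pi/images/` could run the last three commands automatically whenever a new tarball arrives.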
    • wolf@lemmy.zip (OP) · 23 hours ago

      Wow, thanks a lot! Your answer is exactly what I hoped for when posting on Lemmy: I didn’t even know the docker-tarball thingy is a thing. It fits my problem space very nicely in combination with cross-building, and it seems as easy as it can be. Excellent! :-)

  • hades@lemm.ee · 1 day ago

    one way to achieve this would be to push all sources over to the Raspberry Pi, build the docker image on the Raspberry Pi, give the docker image a ‘latest’ tag and use Systemd with Docker or Podman to execute the image.

    I do it almost exactly like this, except instead of systemd I just start containers with --restart unless-stopped.

    I’m also looking at improving the current setup, but haven’t really made any progress. I’m thinking of setting up a GitOps type workflow, e.g. as described here: https://medium.com/linux-shots/gitops-on-docker-using-portainer-8712ba7d38c9 (disclaimer: I haven’t tried these instructions myself)

  • just_another_person@lemmy.world · 1 day ago

    You just…build it.

    If you’re looking for a way to run it like a system service, you could make a systemd unit I suppose, but it’s little effort to make a compose config, or a quadlet if you’re running podman.
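For the quadlet route (Podman ≥ 4.4), a minimal sketch of a `.container` file; the file name, image, and port are placeholders:

```ini
# ~/.config/containers/systemd/myserver.container -- hypothetical name
[Unit]
Description=Home-grown server via quadlet

[Container]
Image=localhost/myserver:latest
PublishPort=8080:8080

[Service]
Restart=always

[Install]
WantedBy=default.target
```

Podman's systemd generator turns this into a regular unit on `systemctl --user daemon-reload`, so it can be started and enabled like any other service.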

    • wolf@lemmy.zip (OP) · 23 hours ago

      AFAIK Podman only supports quadlets from version 4.4 onwards, and I am on version 4.3… So, technically you are right and it would work (for me at the end of the year or next year, when Raspbian gets an update to Trixie). I am mostly interested in how people have achieved and automated this, and whether there are different/better approaches than the ones I outlined above.

      • just_another_person@lemmy.world · 23 hours ago

        Compose is stateful, so if you start a container and daemonize it, it will start again on restart of the machine until you stop it (by default), so that’s one way. If you want to hook into systemd, you can go that route, but it’s not going to net you much just trusting one mechanism over another.

  • Domi@lemmy.secnd.me · 1 day ago

    Why not run the image registry on the Raspberry Pi itself? Then you can do your builds on your regular machine and push them to your Raspberry Pi when done.
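Running a registry on the Pi itself can be as simple as the official `registry:2` image. A sketch, assuming the Pi is reachable as `pi.local` (a placeholder) and plain HTTP on the LAN, which means the pushing Docker client must list the host under `insecure-registries` in its daemon config:

```shell
# On the Pi: start a local registry on port 5000
docker run -d --name registry -p 5000:5000 --restart unless-stopped registry:2

# On the development machine: tag a cross-built image for the Pi's registry and push
docker tag myserver:latest pi.local:5000/myserver:latest
docker push pi.local:5000/myserver:latest

# Back on the Pi: pull from the local registry
docker pull localhost:5000/myserver:latest
```

This keeps the Pi as the sole piece of hardware while the heavy build work happens elsewhere.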

    • wolf@lemmy.zip (OP) · 24 hours ago

      Would it be a problem that my development machine is AMD64 and the Pi is AArch64?

        • wolf@lemmy.zip (OP) · 23 hours ago

          Thanks, I didn’t know about buildx, but it looks like exactly what I need to solve cross-compilation.
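A buildx cross-build can be a single invocation; a sketch with placeholder names, assuming a 64-bit Raspbian (a 32-bit install would want `linux/arm/v7` instead of `linux/arm64`):

```shell
# One-time: create a builder that can target foreign architectures via QEMU
docker buildx create --use --name crossbuilder

# Build for the Pi's architecture and write the result straight to a tarball,
# ready for docker/podman load on the Pi
docker buildx build --platform linux/arm64 -t myserver:latest \
  --output type=docker,dest=myserver.tar .
```

Alternatively, `--push` instead of `--output` would send the image directly to a registry.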

  • catloaf@lemm.ee · 1 day ago

    So you’re building your own images? I’m sure there’s a way to build, transfer, and run an image. But you might just set up a local registry. Or just throw them up on a free registry like GitHub.

    • wolf@lemmy.zip (OP) · 23 hours ago

      Thanks for the idea! I try to keep as few ‘moving’ parts as possible, so hosting GitLab is something I would want to avoid if possible. The Raspberry Pi is supposed to be the sole hardware for the whole deployment of the project.

      • CameronDev@programming.dev · 22 hours ago

        It’s definitely not a lightweight solution. Is the Pi dedicated to the application? If so, is it even worth involving Docker?

        • wolf@lemmy.zip (OP) · 19 hours ago

          You are asking exactly the right questions!

          I have an Ansible playbook to provision the Pi (or any other Debian/Ubuntu machine) with everything needed to run a web application, as long as the web application is a binary or uses one of the interpreters on the machine. (Well, I also have playbooks to compile Python/Ruby from source, add an Adoptium JDK repository, etc.)

          Right now I am flirting with the idea of using Elixir for my next web application, and it just seems unsustainable for me to now add Erlang/OTP and Elixir to my list of playbooks to compile from source.

          The Debian repositories have quite old versions of Erlang/OTP/Elixir and I doubt there are enough users to keep security fixes/patches up to date.

          Combined with the list of technologies I already use, it seems to reduce complexity if I use Docker containers as deployment units and should be future proof for at least the next decade.

          Writing about it, another solution might simply be to have something like Distrobox on the Pi and use something like the latest Alpine.

          • CameronDev@programming.dev · 13 hours ago

            Up-to-date runtimes definitely make sense; that is where Docker shines.

            GitLab is obviously a bit overkill, but maybe you could just create some systemd timers and some scripts to auto-pull, build, and deploy?

            The script would boil down to:

            cd src
            git pull
            docker compose down
            docker compose up -d --build
            

            You’re welcome to steal whatever you can from the repo I linked before.
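The timer half of that idea could be two small units; a hedged sketch with placeholder names and paths:

```ini
# /etc/systemd/system/deploy-myserver.service -- hypothetical names/paths
[Unit]
Description=Pull sources and rebuild the container

[Service]
Type=oneshot
WorkingDirectory=/home/pi/src
ExecStart=/usr/bin/git pull
ExecStart=/usr/bin/docker compose up -d --build

# /etc/systemd/system/deploy-myserver.timer
[Unit]
Description=Periodic redeploy

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

`systemctl enable --now deploy-myserver.timer` would then redeploy on the chosen schedule; compose only rebuilds layers whose inputs changed, so an hourly run is cheap when nothing was pulled.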

            • wolf@lemmy.zip (OP) · 4 hours ago

              Thanks a lot!

              Yeah, if I go down that road, I’ll probably just add a git commit hook to the repo on the Raspberry Pi, so that I’ll have a ‘push to deploy’ workflow!

    • hades@lemm.ee · 1 day ago

      systemd has nothing to do with docker, except to start the docker daemon.

      I think what OP was describing was writing systemd unit files that would start/stop docker containers.

      • wolf@lemmy.zip (OP) · 24 hours ago

        Exactly, this is what I am doing right now (just for binaries, not to execute Docker or Podman).

      • CameronDev@programming.dev · 24 hours ago

        Yeah, probably, but that’s not very common, is it? Normally you’d just let the Docker daemon handle the start/stop, etc.?

        • hades@lemm.ee · 23 hours ago

          I actually have no sense how common that is. My experience is with very small non-production docker environments, and with Kubernetes, but I have no idea what people typically do in between.

          • med@sh.itjust.works · edited · 10 hours ago

            It’s common with rootless docker/podman. Something needs to start up the services, and you’re not using a root enabled docker/podman socket, so systemd it is.
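With rootless Podman, the unit files don't even have to be written by hand: `podman generate systemd` can emit them (on newer Podman versions, quadlets supersede this command). A sketch with a placeholder container name:

```shell
# Run the container once rootless, then generate a user unit from it
podman run -d --name myserver -p 8080:8080 localhost/myserver:latest
podman generate systemd --new --name myserver \
  > ~/.config/systemd/user/myserver.service

# Enable it for the unprivileged user; linger keeps user services
# running after logout / across reboots
systemctl --user enable --now myserver.service
loginctl enable-linger "$USER"
```

The `--new` flag makes the unit create a fresh container on every start, so replacing the underlying image is enough to deploy a new version.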

        • hades@lemm.ee · 23 hours ago

          Well, someone needs to run docker compose up, right? (or you set restart policy, but that’s not always possible)