What do people use and recommend for this? I’ve read a bit about Portainer, but I’m still learning - and don’t know what the best solutions are.
Today I have a handful of selfhosted services running on my home machine - mostly installed directly, but a couple running as docker containers. As the scale of my selfhosting has grown, I’ve realized that things would be a lot easier to manage if each service was run as its own container, so that installed services are isolated.
The solution I’m looking for would make it easy (possibly a web UI) for me to monitor, modify, update, and remove containerized services, including networking and storage.
Edit: Also I would only want a FOSS solution.
> I’ve read a bit about Portainer, but I’m still learning
I started with Portainer, and I still use it. It checks all the boxes for me. I would be remiss if I didn’t mention there are other such platforms for managing Docker containers, such as Podman, Dockge, etc. Like I said, I started with Portainer, and I know how to drive that bus, so I stuck with it.
In your shoes (and, in fact, in mine) I’d try to move away from interactive tools and into file-driven ones.
Personally I use nixos, run WUD (what’s up docker) to be notified of available updates, and manually test/update the containers once in a while (every couple weeks or so?)
There are a bazillion other solutions (from stuff like ansible/chef/puppet, to docker-compose, to kubernetes, to… a hand-written bash script) - the idea is to set up stuff via files that you can version, reference and write comments in, rather than using some GUI for interactive steps that you’ll forget to document in some wiki.
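For example, a minimal sketch of the compose flavour of this - one versionable file per stack (the stack name and image here are just placeholders):

```sh
# One directory per stack, tracked in git; the compose file *is* the config.
mkdir -p ~/stacks/uptime-kuma && cd ~/stacks/uptime-kuma
cat > compose.yaml <<'EOF'
services:
  uptime-kuma:
    image: docker.io/louislam/uptime-kuma:1
    ports:
      - "3001:3001"
    volumes:
      - kuma-data:/app/data
    restart: unless-stopped
volumes:
  kuma-data:
EOF
docker compose up -d
```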
Monitoring is a whole different beast than configuring: you’ll probably be better off using something that does just that, instead of some all-in-one solution. Try looking into something like beszel before going for the full prometheus/grafana stack.
try NixOS
all your containers and other services will be managed through one re-usable file
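A minimal sketch of what that can look like (this assumes the `virtualisation.oci-containers` NixOS module; the image is a placeholder):

```sh
# Declared in /etc/nixos/configuration.nix (Nix shown as a comment here):
#
#   virtualisation.oci-containers.containers.uptime-kuma = {
#     image = "docker.io/louislam/uptime-kuma:1";
#     ports = [ "3001:3001" ];
#   };
#
# One command rebuilds the system to match the file:
sudo nixos-rebuild switch
```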
If your server has >= 8GB of RAM, then Proxmox gives a nice built-in interface. I use it to make NixOS LXC containers in which I run my containers, which does actually make sense.
> you will have to spend a lot of time learning the Nix language
I’d say you shouldn’t use any system (be it nixos, ansible or even bash scripts) if you are not willing to learn it.
That said, I too find pre-made modules less useful than I initially thought when I got into nixos: unless you want to do very basic stuff, a lot of times it’s easier to just generate whatever scripts/configuration files you need directly (using one of the trivial builders in `lib` or writing a custom derivation) rather than learning how the corresponding nixos module works.
One could say nixos modules make easy things slightly easier, and hard things much harder (this is adapted - possibly imprecisely - from a quote on ORMs, I think by Joel Spolsky).
I personally like Dockge: it’s simple and lightweight, and I like the fact that the web UI has a good phone interface.
Dockge is awesome. You can edit the docker-compose files from its interface and it makes managing containers very easy.
I’d absolutely recommend Kubernetes (k3s/rke2) or podman quadlets. Quadlets are a lot easier to get started with, but are still very flexible.
I’d recommend against using Portainer. I tried it quite recently and I did not like it at all. A lot of features are paywalled, and it was overall just a frustrating experience. I’ve heard it was a lot better some years ago.
It might take a little bit of effort to do a conversion if you’re locked into exactly how Docker interacts with OCI containers, but over in the Podman camp you have two options:
- Cockpit with the Podman containers interface: a graphical, web-based solution for managing Podman containers and the rest of the system.
- Podman Quadlets: a config-file-based way to manage Podman containers, volumes, pods, and networks with custom systemd units. Great if you want to version-control your deployments (see the sketch below).
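A minimal sketch of a rootless Quadlet (the unit name, image, and port are placeholders):

```sh
# A .container unit in the rootless Quadlet search path:
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/uptime-kuma.container <<'EOF'
[Container]
Image=docker.io/louislam/uptime-kuma:1
PublishPort=3001:3001
Volume=uptime-kuma:/app/data

[Install]
WantedBy=default.target
EOF
# Quadlet generates uptime-kuma.service at daemon-reload time:
systemctl --user daemon-reload
systemctl --user start uptime-kuma.service
```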
Other than that, the most usable graphical Docker container management interfaces I’ve tried would be the ones in Unraid and Proxmox, though those may not be suitable depending on your use case, and have their own caveats to be aware of.
I’m not locked into docker, but it’s what I have experience with so far, and a lot of services seem to have docker installation as a default option.
Do you think those things make it difficult to switch to podman? What are the differences?
Starting with confirmation of what others have said: yes, you can use compose tools with Podman, and Podman can hook directly into Docker Compose (the tool), but it really isn’t recommended. Compatibility with compose is better now than it used to be, but there are still edge cases. For the many projects that just pre-write a compose file they expect to cover the general use case of their container, you’re best off taking the compose file and rewriting it as Quadlet unit(s).
Other differences not mentioned can include:
- Podman, alongside plain containers, has optional pods, which let you wrap multiple containers together so they share the same IP internally. Useful for having a service and its sidecar containers (e.g. Valkey, Postgres, Meilisearch, etc.) bundled under one IP address, simply referencing each other as `localhost`, `127.0.0.1`, or `::1`. If you use pods for certain split-container applications, you may need to remap some service ports, as they can overlap and cause binding failures.
- Podman has multiple networking modes. If you use Podman at the system level (rootful) like Docker expects you to, you’re not really going to encounter any quirks with the default networking setup. Per-user Podman (rootless) defaults to the Pasta backend for networking, which is still highly performant but a bit clunky to configure (if that’s ever actually necessary), and inter-pod communication can be difficult to get right. Alternatively, registering rootless pods with a bridge network makes inter-pod communication easy, but can cause problems if accurate source IPs are needed (e.g. upstream reverse proxies, accurate client IP logging, etc.).
- Because Podman is daemonless, there is also no persistent API socket loaded by default (an intentional security choice). For both rootful and rootless containers, you can enable this manually and mount it into containers that need it (see the sketch after this list). For containers that explicitly expect `docker.sock` for API manipulation, your mount will need to map `podman.socket`’s path to what the container expects.
- Podman won’t shorthand container pulls from docker.io by default: a sin I see constantly committed in so many compose files. When pulling a container from Docker Hub, you need to include the `docker.io/` prefix, just as you would put the appropriate prefix for Quay, GitHub, GitLab, or any other distributor.
- Podman can optionally auto-update containers based on the release tag specified for the container.
- Because of Podman’s integration with systemd, a lot of oddball integrations (external cron jobs, one-shot services, etc.) can be pulled together with extra systemd units (services, timers, etc.).
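As a sketch of the socket point above (rootless here; watchtower just stands in for any container that hard-codes `docker.sock`):

```sh
# Enable the (otherwise absent) Podman API socket for this user:
systemctl --user enable --now podman.socket
# Mount it at the path a Docker-expecting container looks for:
podman run -d --name watchtower \
  -v "$XDG_RUNTIME_DIR/podman/podman.sock:/var/run/docker.sock" \
  docker.io/containrrr/watchtower
```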
Docker’s main advantage is just being more well known and hence more supported as a default option.
Even then, I feel that this availability of Docker Compose files is an illusion, due to their verbosity and limitations inherent to Docker: less granular control of permissions, clunkiness in updating images, and multi-container stacks feeling like an afterthought.
In pretty much all other ways Podman feels superior. Cockpit provides a basic web GUI, but Quadlets are the main draw: way easier to configure, explicitly designed for multi-container setups, and updating all images is a single command.
Roughly, the different ecosystems from least to most complex are:
Docker/Portainer -> Podman/Cockpit/Quadlets -> Kubernetes
You can use the same containers with Podman, but docker-compose is not recommended with Podman; you’d rather use Quadlets, which integrate nicely with systemd.
Docker Compose and CLI.
Indeed, I’d agree, it is the way I do it
As a tinkerer, I have tried Portainer a couple of times, and another similar thing, but I end up never looking at them, and revert to just jumping into the command line. A bonus of this approach is keeping a copy of all my compose files in a repo.
If OP is being drawn to this because they want to know everything’s running, what they’re really looking for is monitoring - probably Uptime Kuma.
This is the route I figured I’d go down at first, but I’m also curious if there’s a popular solution I could manage remotely in a browser, without having to SSH in, for example.
Dockge - https://github.com/louislam/dockge
Docker Compose with a web UI and an upgrade button.
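For reference, roughly the install from Dockge’s README (the stacks directory and port are its suggested defaults; adjust to taste):

```sh
mkdir -p /opt/stacks /opt/dockge && cd /opt/dockge
cat > compose.yaml <<'EOF'
services:
  dockge:
    image: louislam/dockge:1
    restart: unless-stopped
    ports:
      - 5001:5001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/app/data
      - /opt/stacks:/opt/stacks
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks
EOF
docker compose up -d
```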
I’ll second Dockge. It works alongside Docker containers and doesn’t try to shove configs into nonstandard locations and whatnot. Plus if you have multiple Docker instances, you can install Dockge on each of them and link them all together. Very handy.
Thanks, I’ll look into this
Kubernetes. For a homelab, the stripped-down k3s is fantastic and surprisingly easy to get going.
Once you’ve got Kubernetes set up, you can lean on all the many tools already out there for things like deploying complex projects (Helm) and monitoring (Prometheus/Grafana). OpenLens is a nice piece of software you can use to monitor and control your cluster too, as is k9s.
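Getting a single-node cluster going really is just a couple of commands (this uses k3s’s documented install script; the Helm line is a placeholder example):

```sh
# k3s's one-line installer, then a sanity check that the node is Ready:
curl -sfL https://get.k3s.io | sh -
sudo k3s kubectl get nodes
# From here, charts install as usual, e.g.:
# helm install my-release <some-chart> --namespace my-ns --create-namespace
```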
This is how I went and what I’d recommend. That said, it’s a bit of a steep learning curve, as not everything in the selfhosted/homelab community comes with Helm charts.
What do you use for repeatable recovery and deployment of systems?
I’ve looked at ArgoCD and FluxCD. ArgoCD was too flaky. When I made changes to Helm files it would often fail to deploy them, and the UI often wouldn’t really show the detailed errors from things like Helm syntax errors, so it was a pain to troubleshoot.
FluxCD was just really a pain to configure in the first place, and I didn’t want to learn kustomize when I already have Helm charts.
And neither really supported staged deployments or dealt with dependent services well. So I couldn’t get them to deploy infrastructure-level Helm charts like PostgreSQL before deploying the services that depend on them. Technically, with Kubernetes the order of deployment shouldn’t matter, but in reality, when ArgoCD would deploy the other stuff first and wait for it to come up - and it never came up because the dependencies weren’t there - it caused it to choke a lot.
Just an example of the issues I’ve had. But I really want an easy way to make lots of small changes to charts and deploy them quickly as well as being able to quickly recover the cluster from backups if something catastrophic happens like a fire without having to manually deploy each chart. Just curious how others handle it or if it’s always manual deployment of charts via CLI only.
I’ve used FluxCD in the past and have looked into ArgoCD, but honestly, I’ve not seen any big benefit from either. I use k8s both at home and at work, and in both cases we do “imperative” deploys: you run `helm install ...` either directly or via the CI, and stuff is deployed.

So for example at my last job, our GitLab CI just had a section triggered exclusively for merges into `master` that ran `helm install ...` for all three environments. We had three `values.yaml` files, one for each environment, and when we wanted to deploy a new version, the process was:

- Create a tag for our release version (ie. `1.2.3`) and push it to the repo. This would trigger a build and push the resulting image into the container registry.
- Push an update to the repo with the new tag set in the appropriate Helm values file. If we wanted to deploy `1.2.3` to `development` but not yet to `staging` or `production`, then the `tag:` value in each of the environment files would look like this:
  - `k8s/chart/environments/development.yaml`: `tag: 1.2.3`
  - `k8s/chart/environments/staging.yaml`: `tag: 1.2.2`
  - `k8s/chart/environments/production.yaml`: `tag: 1.2.2`
- Once that change is pushed, the CI will automatically apply it with `helm install ...` and make sure that all three environments are what they’re supposed to be.

As for dependent services, those should all be in your Helm chart so they’re stood up and torn down together. The specific case you mention, where “Service A” depends on “Service B” but is stood up before “Service B” is ready, is a classic problem, but easily solved: the dependent service (“A” in this case) should have an entrypoint that checks for everything else before starting. Here’s what I’m using right now in a project:

```sh
#!/bin/sh
# Block until PostgreSQL is accepting connections, then hand off to the
# container's real command.
while ! nc -z postgres 5432; do
  echo "Waiting for postgres..."
  sleep 0.1
done
echo "PostgreSQL started"
touch /tmp/ready
exec "$@"
```

I’ve even got some code that checks that all the Django migrations have run first, for the same situation. The Kubernetes philosophy is that any container should be able to die at any time and eventually be brought back up, and that every container needs to be prepared for this. Typically this means your containers should operate on the basis of “if I can’t work, die, and hope the problem is solved by the time Kubernetes redeploys me”.
If you want robust (and a ton to learn) go with k3s for a lightweight Kubernetes deployment and FluxCD.
If you want simpler go with docker-compose and doco-cd.
With a GitOps workflow you define it all in files in a git repo, then the server automatically deploys and updates. IMHO it’s much easier to maintain long term than click-ops.
I’ve heard Portainer is pretty nice for this.
That’s what came up in my search at first. Seems legit.
Proxmox
Thanks, I’ve heard of this too. It’s hard to tell what the differences in use case between all of these are. I’ll have to do more research into how they work.
There are nice, friendly frontends for this - Yunohost or CasaOS spring to mind - but they might be too simplistic if you’re already familiar with Docker.
Dockhand is great. I haven’t touched Portainer ever since. https://github.com/Finsys/dockhand

Thanks, what have you liked about switching to this from Portainer?
I concur with the other user: the logs are much easier to access and better organized. The compact feel is much more suited to my preference.
I just recently switched from portainer to dockhand and I really like it. The UI is great and the setup and config wasn’t too complicated. I like that I can put both of my servers into one instance and can update all of my containers from dockhand vs manually. The other thing I like is being able to view the logs for my containers. Idk if it’s a me thing, but whenever I would try to view logs in portainer I would never be able to scroll up as it would update and send me back to the bottom. Again, I could’ve just been doing something wrong, but it always bothered me and I don’t have that issue with dockhand.
That looks pretty good. Looks like Portainer is getting replaced this weekend.
Looks nice but what kind of license is that?
I’ll second podman quadlets. Good security, full integration with systemd, pods allow applications to easily share a namespace, and you can manage graphically through Cockpit if you really want to.
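For instance, a minimal sketch of two Quadlet containers sharing one pod (names and images are placeholders, and `.pod` units assume a reasonably recent Podman, 5.0+):

```sh
mkdir -p ~/.config/containers/systemd
# The pod owns the published port; members talk to each other over localhost.
cat > ~/.config/containers/systemd/myapp.pod <<'EOF'
[Pod]
PublishPort=8080:8080

[Install]
WantedBy=default.target
EOF
cat > ~/.config/containers/systemd/myapp-db.container <<'EOF'
[Container]
Image=docker.io/library/postgres:16
Pod=myapp.pod
Environment=POSTGRES_PASSWORD=changeme
EOF
cat > ~/.config/containers/systemd/myapp-web.container <<'EOF'
[Container]
Image=docker.io/library/nginx:latest
Pod=myapp.pod
EOF
systemctl --user daemon-reload
systemctl --user start myapp-pod.service  # containers joined via Pod= start with it
```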
Do you have an example of quadlets you defined that share a namespace?
systemd integration would be nice















