Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?
Your phrasing of the question implies a poor understanding. There’s nothing preventing you from running containers on bare metal.
My colo setup is a mix of classical and podman systemd units running on bare metal, combined with a little nginx for the domain and TLS termination.
I think you're actually asking why folks would use bare metal instead of cloud, and here's the truth: you're paying for that resiliency even if you don't need it, which means renting the cloud stuff is incredibly expensive. Most people can probably get away with a $10 VPS, but the AWS meme of needing 5 app servers, an RDS instance, and a load balancer to run WordPress has rotted people. My server that I paid a few grand for on eBay would cost me about as much monthly to rent from AWS. I've stuffed it full of flash with enough redundancy to lose half of it before going into the colo for replacement. I paid a bit upfront, but I am set on capacity for another half decade plus; my costs are otherwise fixed.
"Your phrasing of the question implies poor understanding."
Your phrasing of the answer implies poor understanding. The question was why bare metal vs containers/VMs.
The phrasing by the person you are responding to is perfectly fine and shows ample understanding. Maybe you do not understand what they were positing.
Pure bare metal is crazy to me. I run Proxmox and mount my storage there, and from there it is shared to the machines that need it. It would be convenient to do a passthrough to TrueNAS for some of the functions it provides, but I don't trust my skills enough for that. I'd have kept TrueNAS on bare metal, but I need so little horsepower for my services that it would be a waste. I don't think the trade-offs of having TrueNAS run my virtualisation environment were really worth it.
My router is bare metal. It's much simpler to handle the networking with a single physical device like that. Again, it would be convenient to set up OPNsense in a VM for failover, but it introduces a bunch of complexity I don't want or really need. The router typically goes down only for maintenance, not because it crashed or something. I don't have redundant power or ISPs either.
To me, docker is an abstraction layer I don’t need. VMs are good enough, and proxmox does a good job with LXCs so far.
Why would I spin up a VM, and a virtual network within that VM, and then a container, when I can just spin up a VM?
I've not spent time learning Docker or k8s; it seems very much a tool designed for a scale that most companies don't operate at, let alone my home lab.
For me it's lack of understanding, usually. I haven't sat down and really learned what Docker is/does. And when I tried to use it once, I ended up with errors (thankfully they all seemed contained by the Docker container), but I just haven't gotten around to looking into it more than seeing suggestions to install, say, Pihole in it. Pretty sure I installed Pihole outside of one. Jellyfin outside, copyparty outside, and something else I'm forgetting at the moment.
I was thinking of installing a chat app in one, but I put off that project because I got busy at work and it’s not something I normally use.
I guess I just haven’t been forced to see the upsides yet. But am always wanting to learn
Containerisation is to applications as virtual machines are to hardware.
VMs share the same CPU, memory, and storage on the same host.
Containers share the same binaries in an OS.

When you say binaries, do you mean locally stored directories, kind of like what Lutris or Steam would do for a Windows game? (Create a fake C:\)
Not so much a fake one, but an overlay of the actual directory with the specific files needed for that container.
Take the Linux lib directory. It exists on the host and has Python 3.12 installed. Your Docker container may need Python 3.14, so an overlay directory is created that redirects calls to /lib/python to /lib/python3.14 instead of the regular symlinked /lib/python3.12.
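The lookup order is the key idea: the container's layer is checked before the host's. Here's a toy sketch of that resolution with plain directories (not a real overlayfs mount; directory names are made up):

```shell
# Toy sketch of overlay lookup order -- plain directories, not a real overlayfs mount.
# The container's (upper) layer is checked first, then the host's (lower) layer.
lower=$(mktemp -d)   # stands in for the host's /lib
upper=$(mktemp -d)   # stands in for the container's overlay layer
mkdir "$lower/python3.12" "$upper/python3.14"

lookup() {
  for layer in "$upper" "$lower"; do
    if [ -e "$layer/$1" ]; then
      echo "$layer/$1"
      return 0
    fi
  done
  return 1
}

lookup python3.14   # resolved from the container (upper) layer
lookup python3.12   # not in the upper layer, falls through to the host (lower) layer
```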
So let's say I theoretically wanted to move a Docker container to another device, or maybe I were reinstalling an OS or moving to another distro. Could I, in theory, drag my local Docker container to an external drive, throw my device in a lake, and pull that container onto the new device? If so, what then? Do I link the startups, or is there a "docker config" where they can all be linked and I can tell it which ones to launch on OS launch, user launch, with a delay, or whatnot?
For ease of moving containers between hosts, I would use a docker-compose.yaml to set how you want storage shared, what ports to present to the host, and what environment variables your application wants. Using WordPress as an example, this would be your starting point:
https://github.com/docker/awesome-compose/blob/master/wordpress-mysql/compose.yaml

All the settings for the database are listed under the db heading. You would have your actual database files stored in /home/user/Wordpress/db_data, and you would link /home/user/Wordpress/db_data to /var/lib/mysql inside the container with the line
volumes:
  - db_data:/var/lib/mysql

As the compose file will also be in /home/user/Wordpress/, you can drop the common path.
That way, if you wanted to change hosts, just copy the /home/user/Wordpress folder to the new server and run docker compose up -d and boom, your server is up. No need to faff about.
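To make that concrete, here's a minimal sketch of such a folder's compose file (a hedged example, not the linked file verbatim: image tags, passwords, and ports are placeholders), using a relative bind mount so the data travels with the directory:

```yaml
# Hedged sketch; lives at /home/user/Wordpress/compose.yaml, data in ./db_data next to it.
services:
  db:
    image: mariadb:10.11
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: wordpress
    volumes:
      - ./db_data:/var/lib/mysql   # relative bind mount: data travels with the folder
  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: changeme
      WORDPRESS_DB_NAME: wordpress
```

Copy the folder to the new machine, run docker compose up -d in it, and the same stack comes up there.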
Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about.
"Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about."
So that's really why they should be good for Jellyfin/file servers, as the data doesn't need to be stored in the container, just the run files. I suppose the config files as well.
When I reverse proxy to my network using WireGuard (set up on the Jellyfin server; I also think I have a RustDesk server on there), on the other hand, is it worth using a container, or is that just the same either way?
I have shoved way too many things onto an old laptop, but I never really have to touch it, and the latest update Mint put out actually cured any issues I had. I used to have to reboot once a week or so to get everything back online when it came to my Pihole and shit. Since the latest update I ran on September 4th, I haven't touched it for anything. The screen just stays closed in a corner of my desk with other shit stacked on top.
I run my NAS and Home Assistant on bare metal.
- NAS: OMV on a Mac mini with a separate drive case
- Home Assistant: HAOS on a Lenovo M710q, since 1) it has a USB zigbee adapter and 2) HAOS on bare metal is more flexible
Both of those are much easier to manage on bare metal. Everything else runs virtualized on my Proxmox cluster, whether it’s Docker stacks on a dedicated VM, an application that I want to run separately in an LXC, or something heavier in its own VM.
I’m curious why you feel these are easier to run on bare metal? I only ask as I’ve just built my first proxmox PC with the intent to run TrueNAS and Home Assistant OS as VMs, with 8x SAS enterprise drives on an HBA passed through to the TrueNAS VM.
Is it mostly about separation of concerns, or is there some other dragon awaiting me (aside from the power bills after I switch over)?
Anything I run on Proxmox, per my own requirements, needs to be hardware-agnostic. I have a 3-node cluster set up to be a “playground” of sorts, and I like being able to migrate VMs/LXCs between different nodes as I see fit (maintenance reasons or whatever).
Some services I want to run on their own hardware, like Home Assistant, because it offers more granular control. The Lenovo M710q Tiny that my HA system runs on, even with its i7-7700T, pulls a whopping 10W on average. I’ll probably change it to the Pentium G4560T that’s currently sitting on my desk, and repurpose the i7-7700T for another machine that could use the horsepower.
My NAS is where I'm more concerned about separation of duties. I want my NAS to only be a NAS. OMV is pretty simple to manage, has a great dashboard, spits out SMART data, and also runs a weekly rsync backup command on my RAID to a separate 8TB backup drive. I'm currently in the process of building a "new" NAS inside a gutted HP server case from 2003 to replace the Mac mini/USB 4-bay drive enclosure. The new NAS will have a proper HBA to handle the drives.

"or is there some other dragon awaiting me (aside from the power bills after I switch over)?"
My entire homelab runs about 90-130W. It’s pulled a total of ~482kWh since February (when I started monitoring it). That’s 3x tiny/mini/micro PCs (HP 800 G3 i7, HP 800 G4 i7, Lenovo M710q i7), an SFF (Optiplex 7050 i7), 2014 Mac mini (i5)/loaded 4-bay HDD enclosure/8TB USB HDD, Raspberry Pi 0W, and an 8-port switch.
Wow, thanks so much for the detailed rundown of your setup, I really appreciate it! That’s given me a lot to think about.
One area that took me by surprise a little bit with the HBA/SAS drive approach I've taken (and it sounds like you're considering) is the power draw. I just built my new server PC (i5-8500T, 64GB RAM, Adaptec HBA + 8x 6TB 12Gb SAS drives) and initial tests show it idles at ~150W on its own.
I’m fairly sure most of that is the HBA and drives, though I need to do a little more testing. That’s higher than I was expecting, especially since my entire previous setup (Synology 4-bay NAS + 4x SATA drives, external 8TB drive, Raspberry Pi, switch, Mikrotik router, UPS) idles at around 80W!
I’m wondering if it may have been overkill going for the SAS drives, and a proxmox cluster of lower spec machines might have been more efficient.
Food for thought anyway… I can tell this will be a setup I’m constantly tinkering with.
There's one thing I'm hosting on bare metal: a WebDAV server. I'm running it on the host because it uses PAM for authentication, and that doesn't work in a container.
What are you doing running your VMs on bare metal? Time is a flat circle.
For work I have a cloud dev VM, in which I run WSL2. So there are at least two levels of VMs happening, maybe three, honestly.
@kiol I mean, I use both. If something has a Debian package and is well-maintained, I'll happily use that. For example, prosody is packaged nicely, there's no need for a container there. I also don't want to upgrade to the latest version all the time. Or Dovecot, which just had a nasty cache bug in the latest version that allows people to view other people's mailboxes. Since I'm still on Debian 12 on my mail server, I remain unaffected and I can let the bugs be shaken out before I upgrade.
@kiol On the other hand, for doing builds (debian packages and random other stuff), I’ll use podman containers. I’ve got a self-built build environment that I trust (debootstrap’d), and it’s pretty simple to create a new build env container for some package, and wipe it when it gets too messy over time and create a new one. And for building larger packages I’ve got ccache, which doesn’t get wiped by each different build; I’ve got multiple chromium build containers w/ ccache, llvm build env, etc
@kiol And then there’s the stuff that’s not packaged in Debian, like navidrome. I use a container for that for simplicity, and because if it breaks it’s not a big deal - temporary downtime of email is bad, temporary downtime of my streaming flac server means I just re-listen to the stuff that my subsonic clients have cached locally.
@kiol Syncthing? Restic? All packaged nicely in Debian, no need for containers. I do use Ansible (rather than backups) for ensuring if a drive dies, I can reproduce the configuration. That’s still very much a work-in-progress though, as there’s stuff I set up before I started using Ansible…
I use k3s and enjoy benefits like the following over bare metal:
- Configuration as code where my whole setup is version controlled in git
- Containers and avoiding dependency hell
- Built-in reverse proxy with the Traefik ingress controller. Combined with DNS in my OpenWRT router, all of my self hosted apps can be accessed via appname.lan (e.g., jellyfin.lan, forgejo.lan)
- Declarative network policies with Calico, mainly to make sure nothing phones home
- Managing secrets securely in git with Bitnami Sealed Secrets
- Liveness probes that automatically “turn it off and on again” when something goes wrong
These are just some of the benefits, and that's with a single server. Add more and the benefits increase.
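As one concrete example of the network-policy point, a default-deny egress policy is only a few lines of standard Kubernetes YAML that Calico enforces (a hedged sketch; the namespace name is hypothetical):

```yaml
# Hedged sketch: deny all outbound traffic for every pod in a namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: apps
spec:
  podSelector: {}     # empty selector = every pod in the namespace
  policyTypes:
    - Egress
  egress: []          # no egress rules at all, so nothing may phone home
```

You'd then add explicit allow rules per service (in-cluster DNS being the usual first one).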
Edit:
Sorry, I realize this post is asking why go bare metal, not why k3s and containers are great. 😬
pff, you call using an operating system bare metal? I run my apps as unikernels on a grid of Elbrus chips I bought off a dockworker in Kamchatka.
and even that’s overkill. I prefer synthesizing my web apps into VHDL and running them directly on FPGAs.
until my ASIC shuttle arrives from Taipei, naturally, then I bond them directly onto Ethernet sockets.
/uj not really but that’d be sick as hell.
I just imagine what the output of any program would be. Follow me, set yourself free!
“What is stopping you from” <- this is a loaded question.
We’ve been hosting stuff long before docker existed. Docker isn’t necessary. It is helpful sometimes, and even useful in some cases, but it is not a requirement.
I had no problems with dependencies, config, etc because I am familiar with just running stuff on servers across multiple OSs. I am used to the workflow. I am also used to docker and k8s, mind you - I’ve even worked at a company that made k8s controllers + operators, etc. I believe in the right tool for the right job, where “right” varies on a case-by-case basis.
tl;dr Docker is not an absolute necessity, and your phrasing makes it seem like it's the only way of self-hosting you are comfy with. People are and have been comfy with a ton of other things for a long time.
Question is totally on purpose, so that you’ll fill in what it means to you. The intention is to get responses from people who are not using containers, that is all. Thank you for responding!
Honest response - respect.
What is stopping you from running HP-UX for all your workloads? The question is totally on purpose, so that you'll fill in what it means to you.
In my case it’s performance and sheer RAM need.
GLM 4.5 needs like 112GB RAM and absolutely every megabyte of VRAM from the GPU, at least without the quantization getting too compressed to use. I’m already swapping a tiny bit and simply cannot afford the overhead.
I think containers may slow down CPU<->GPU transfers slightly, but don’t quote me on that.
Can anyone confirm whether containers would actually impact CPU-to-GPU transfers?
To be clear, VMs absolutely have overhead but Docker/Podman is the question. It might be negligible.
And this is a particularly weird scenario (since prompt processing literally has to shuffle ~112GB over the PCIe bus for each batch). Most GPGPU apps aren’t so sensitive to transfer speed/latency.
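Back-of-envelope on why that hurts: assuming roughly 32 GB/s of usable one-way PCIe 4.0 x16 bandwidth (an assumed figure, not a measurement), streaming ~112 GB of weights takes seconds per pass:

```shell
# Rough arithmetic, not a benchmark: assumes ~32 GB/s usable PCIe 4.0 x16 bandwidth.
model_bytes=$((112 * 1024 * 1024 * 1024))       # ~112 GiB of weights
bus_bytes_per_sec=$((32 * 1000 * 1000 * 1000))  # assumed practical throughput
secs=$((model_bytes / bus_bytes_per_sec))
echo "roughly ${secs}-$((secs + 1)) seconds just to stream the weights once"
```

So per-batch latency is dominated by the bus long before any container overhead could matter much either way.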
I generally abstract to docker anything I don’t want to bother with and just have it work.
If I’m working on something that requires lots of back and forth syncing between host and container, I’ll run that on bare metal and have it talk to things in docker.
E.g., working on an app or a website or something in the language of choice on the framework of choice, but Postgres and Redis are living in Docker. Just the app I'm messing with and its direct dependencies run outside.
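That split might look like this in practice, with only the backing services containerized (a hedged compose sketch; tags and passwords are placeholders) and the app on the host reaching them via the published ports:

```yaml
# Hedged sketch: only the dev dependencies run in containers.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devonly
    ports:
      - "5432:5432"   # published so the app on the host can hit localhost:5432
  redis:
    image: redis:7
    ports:
      - "6379:6379"   # likewise, localhost:6379 from the host
```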
Every time I have tried, it just introduces a layer of complexity I can't tolerate. I have struggled to learn everything required to run a simple Debian server. I don't care what anyone says, Docker is not simpler or easier. Maybe it is when everything runs perfectly, but they never do, so you have to consider the eventual difficulty of troubleshooting. And that would be made all the more cumbersome if I do not yet understand the fundamentals of a Linux system.
However I do keep a list of packages I want to use that are docker-only. So if one day I feel up to it I’ll be ready to go.
Did you try compose scripts as opposed to docker run?

I don't know. Both? Probably? I tried a couple of things here and there. It was plain that bringing in Docker would add a layer of obfuscation to my system that I am not equipped to deal with. So I rinsed it from my mind.
If you think it’s likely that I followed some “how to get started with docker” tutorial that had completely wrong information in it, that just demonstrates the point I am making.
I’ve been self-hosting since the '90s. I used to have an NT 3.51 server in my house. I had a dial in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the slackware kernel from source to get peripherals to work.
But in this last year I took the time to seriously learn docker/podman, and now I’m never going back to running stuff directly on the host OS.
I love it because I can deploy instantly, oftentimes with a single command line. Docker Compose allows for quickly nuking and rebuilding, often saving your entire config to one or two files.
And if you need to slap in a traefik, or a postgres, or some other service into your group of containers, now it can be done in seconds completely abstracted from any kind of local dependencies. Even more useful, if you need to move them from one VPS to another, or upgrade/downgrade core hardware, it’s now a process that takes minutes. Absolutely beautiful.
Hey, you made my post for me, though I've been using Docker for a few years now. Never looking back.
KISS
The more complicated the machine, the more chances for failure.

Remote management plus bare metal just works; it's very simple, and you get the maximum out of the hardware.

Depending on your use case, that could be very important.