

Quadlet




I use https://tuns.sh/ which doesn't require any local installs since it's just ssh. It's not as fast as a VPN, but it's easy to use.
I don't really understand the question. All you have to do is run archinstall and then add a desktop environment like KDE, and that's like 80% of what other distros do.
I think Arch used to be hard to get started with, but not anymore. That's reserved for Gentoo now.
Sorry but this is a ridiculous argument. What entity has dropped nukes on an entire population? Who is the current president of the US? Insane take.
There's also archinstall, which ships with the latest OS image and, just like any other installer, holds your hand through the process.
It’s really very simple to get arch installed


My salt is just a memorized password I add on top of the one stored in pass


This is what I do. If someone can get through pass with my password-protected GPG key, plus my partial passwords (I salt them), plus OTP, then they can have my access
I have a simple bash script that manages folders and files with a way to route them to whatever location. Then I run the script and it does all the symlinking for me. This is what I do for systemd unit files and my own dotfiles
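The script itself doesn't need to be fancy. Here's a minimal sketch of the idea, assuming a repo layout where each folder maps to a destination (all folder names and paths below are illustrative, not any particular setup):

```shell
#!/bin/sh
# Minimal dotfile symlinker sketch. Assumes a hypothetical layout like:
#   ~/dotfiles/home/.bashrc           -> ~/.bashrc
#   ~/dotfiles/systemd/backup.service -> ~/.config/systemd/user/backup.service

DOTFILES="${DOTFILES:-$HOME/dotfiles}"

link_dir() {
  src_dir="$1"   # folder inside the dotfiles repo
  dst_dir="$2"   # where its files should be symlinked
  [ -d "$src_dir" ] || return 0
  mkdir -p "$dst_dir"
  # Cover both regular and hidden files in the source folder.
  for f in "$src_dir"/* "$src_dir"/.[!.]*; do
    [ -e "$f" ] || continue
    ln -sfn "$f" "$dst_dir/$(basename "$f")"
  done
}

link_dir "$DOTFILES/home" "$HOME"
link_dir "$DOTFILES/systemd" "$HOME/.config/systemd/user"
```

Re-running it is idempotent since `ln -sfn` replaces existing links, which is the main reason this approach stays low maintenance.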
For anyone looking for a simple rss-to-email digest I recommend this service: https://pico.sh/feeds
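From memory, the setup is just a plain-text file listing your feeds plus a few options; double-check the exact format against their docs since I'm writing this from recollection:

```
=: email me@example.com
=: digest_interval 1day
=> https://hnrss.org/newest?points=100
=> https://example.com/blog/rss.xml
```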
Stand up a local lfs server or figure out a different way to store large files. I generally avoid lfs
Why not just run bare repos on your n100? That’s what I do. I have no need for a code forge with code collab when it’s just me pushing
https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server
If you want a web viewer use a static site git viewer like https://pgit.pico.sh/
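Setting up a bare repo over plain ssh is only a couple of commands (paths and hostnames below are examples, not prescriptive):

```shell
# On the server: create a bare repo (no working tree) to push to.
mkdir -p "$HOME/git"
git init --bare "$HOME/git/myproject.git"

# On your machine, you'd then add it as a remote over ssh, e.g.:
#   git remote add origin n100:git/myproject.git
#   git push -u origin main
```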
While not the same I use an rss-to-email service that hits the minimal sweet spot for me
It seems like there might be exceptions to the "no partial upgrades" rule that haven't been discussed: you can pin your kernel version, primarily to give packages like zfs time to catch up to the latest kernel
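The usual way to pin the kernel on Arch is `IgnorePkg` in pacman's config; something like this (the exact package names depend on which kernel and modules you run):

```
# /etc/pacman.conf
[options]
IgnorePkg = linux linux-headers
```

pacman will then skip those packages during `pacman -Syu` (with a warning) until you remove the pin.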


I’ve never used bcachefs and only recently read about some of the drama. I wish the project the best but at this point it is hard to beat zfs
Here’s my journey from arch to proxmox back to arch: https://bower.sh/homelab
I was in your shoes and decided to simplify my system. It’s really hard to beat arch and I missed having full control over the system. Proxmox is awesome but it felt overkill for my use cases. If I want to experiment with new distros I would probably just run distrobox or qemu directly. Proxmox does a lot but it ended up just being a gui on top of qemu with some built in backup systems. But if you end up using zfs anyway … what’s the benefit?
If you want low effort and high value, get a Synology 2-bay. If you want full control over the host OS, run Debian/Arch with zfs


I didn't use any of the terms you used in your post. I'm not using those products, partly for the reasons I discussed, but also because I don't see it as particularly useful beyond the cult of personality building it.


I went down a similar path as you. The entire Proxmox community argues for treating it as an appliance with nothing extra installed on the host. But the second you need to share data (like a NAS) the tooling is a huge pain. I couldn't reliably find a solution that felt right.
So my solution was to make my NAS a zfs pool on the host. Bind mounting works for CTs but not VMs, which is an annoying feature asymmetry. So I decided to also install an NFS server to expose the NAS.
I know that’s not what you want but just wanted to share what I did.
The feature asymmetry between CTs and VMs basically made CTs not part of my orchestration.
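Concretely, the NFS side is just an export of the pool path on the host; something like this (the dataset path and subnet are examples):

```
# /etc/exports on the Proxmox host
/tank/nas  192.168.1.0/24(rw,sync,no_subtree_check)
```

After `exportfs -ra`, VMs can mount it with `mount -t nfs <host>:/tank/nas /mnt/nas`, while CTs can keep using bind mounts.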
Librefox has been awesome. Once you get the hang of enabling cookies for specific sites it mostly just works. Although Fastmail keeps logging me out for some reason
As someone who implemented WebAuthn for $work, it was a terrible DX to set up. WebAuthn requires an https origin, so that alone is going to be a barrier for many self-hosted services. Getting the configuration right will also be prohibitive.