I recently replaced an ancient laptop with a slightly less ancient one, and I'd like to press the old one into service as a small home server. Requirements:
- host for backups for three other machines
- serve files I don’t necessarily need on the new machine
- relatively lightweight - “server” is ~15 years old
- relatively simple - I’d rather not manage a dozen docker containers.
- internal-facing
- does NOT need to handle Android and friends. I can use Syncthing for that if I need to.
Left to my own devices I’d probably rsync for 90% of that, but I’d like to try something a little more pointy-clicky or at least transparent in my dotage.
Edit: Not SAMBA (I freaking hate trying to make that work)
Edit2: for the young’uns: NFS (the Linux/Unix Network File System)
Edit 3: LAN only. I may set up a VPN connection one day but it’s not currently a priority. (edited post to reflect questions)
Last Edit: thanks, friends, for this discussion! I think based on this I’ll at least start with NFS + my existing backup system (Mint’s thing, which I think is just a GUI in front of rsync). May play w/ modern Samba if I have extra time.
I’ll continue to read the replies though - some interesting ideas.
I use a samba mount behind a VPN.
You should take a look at WebDAV.
NFS is really good inside a LAN, just use 4.x (preferably 4.2) which is quite a bit better than 2.x/3.x. It makes file sharing super easy, does good caching and efficient sync. I use it for almost all of my Docker and Kubernetes clusters to allow files to be hosted on a NAS and sync the files among the cluster. NFS is great at keeping servers on a LAN or tight WAN in sync in near real time.
What it isn’t is a backup system or a periodic sync application and it’s often when people try to use it that way that they get frustrated. It isn’t going to be as efficient in the cloud if the servers are widely spaced across the internet. Sync things to a central location like a NAS with NFS and then backups or syncs across wider WANs and the internet should be done with other tech that is better with periodic, larger, slower transactions for applications that can tolerate being out of sync for short periods.
The only real problem I often see in the real world is Windows and Samba/SMB (sometimes referred to as CIFS) shares trying to sync the same files as NFS shares, because Windows doesn’t support NFS out of the box and so file locking doesn’t work properly. Samba/CIFS has some advantages, like user authentication tied to Active Directory out of the box, as well as working out of the box on Windows (although older Windows doesn’t support the versions of SMB that are secure), so if I need to give a user access to log into a share from within a LAN (or over VPN) from any device to manually pull files, I use that instead. But for my own machines I just set up NFS clients to sync.
One caveat is if you’re using this for workstations or other devices that frequently reboot and/or need to be used offline from the LAN. Either don’t mount the shares on boot, or take the time to set it up properly. By default I see a lot of people get frustrated that it takes a long time to boot because the mount is set as a prerequisite for completing the boot with the way some guides tell you to set it up. It’s not an NFS issue; it’s more of a grub and systemd (or most equivalents) being a pain to configure properly and boot systems making the default assumption that a mount that’s configured on boot is necessary for the boot to complete.
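As a concrete illustration, a client-side fstab entry that avoids that boot hang might look like this. The server address and paths are hypothetical; the x-systemd.* options are the standard systemd mount knobs for deferring and time-limiting network mounts:

```
# client /etc/fstab — hypothetical server IP and paths
# nofail: don't hold up or fail the boot if the server is unreachable
# _netdev: treat this as a network mount that waits for the network
# x-systemd.automount: mount on first access instead of at boot
192.168.1.10:/srv/share  /mnt/share  nfs4  nofail,_netdev,x-systemd.automount,x-systemd.mount-timeout=10s  0  0
```

With the automount in place, a laptop that boots away from the LAN just sees an empty mountpoint until the share is actually reachable.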
Thanks for that caveat. I could definitely see myself falling into that trap.
Yeah, it’s easy enough to configure it properly, I have it set up on all of my servers and my laptop to treat it as a network mount, not a local one, and to try to connect on boot, but not require it. But it took me a while to understand what it was doing to even look for a solution. So, hopefully that saves you time. 🙂
I still use sshfs. I can’t be bothered to set up anything else I just want something that works out of the box.
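For anyone curious, that works-out-of-the-box setup is roughly the following (host and paths are made up; the reconnect/keepalive options are the usual mitigation for sshfs hangs):

```
# mount a remote directory over plain SSH — hypothetical host and paths
sshfs user@server:/srv/files ~/server-files \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3

# unmount when done
fusermount -u ~/server-files
```

It reuses whatever SSH keys and config you already have, which is the whole appeal.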
Isn’t that super clunky? I keep getting all kinds of sluggishness, hangs, and the occasional error every time I use it. It ends up working, but wow, does it suck.
I mostly use Samba/CIFS clients, and it’s fast and reliable with properly set up DNS, addressing shares only by DNS name or IP address. NetBIOS and Active Directory are overkill.
I like the sound of that!
However, it looks like it has a lot of potential for an ‘xz’-style exploit injection, so I’ll probably skip it.
From the project’s README.md: “The current maintainer continues to apply pull requests and makes regular releases, but unfortunately has no capacity to do any development beyond addressing high-impact issues. When reporting bugs, please understand that unless you are including a pull request or are reporting a critical issue, you will probably not get a response.”
I am 100% open to exploring other equally zero-effort alternatives, if only I had the time. CURSE being an adult (ノಠ益ಠ)ノ. Is there anything better I should use, ideally something that works with my existing SSH keys?
For smaller folders I like using Syncthing; that way it’s like having multiple updated backups.
Syncthing is neat, but you shouldn’t consider it to be a backup solution. If you accidentally delete or modify a file on one machine, it’ll happily propagate that change to all other machines.
You can turn off “delete”, but modification is a danger, it’s true.
Turning off delete makes it excellent for eg. backing up photographs on your phone. I’ve got it doing this from my Android to my raspberry pi, which puts them on my NAS for me. Saves losing all my pictures if I lose my phone.
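For reference, “turning off delete” is the advanced per-folder option ignoreDelete. In Syncthing’s config.xml it looks roughly like this sketch (the folder id and path are made up, and the real folder element carries more attributes than shown):

```
<!-- fragment of Syncthing's config.xml on the receiving device -->
<folder id="phone-photos" label="Phone photos" path="/mnt/nas/photos" type="receiveonly">
    <!-- deletions made on the phone are ignored here instead of propagated -->
    <ignoreDelete>true</ignoreDelete>
</folder>
```

The same option is reachable in the web GUI under the folder’s Advanced settings.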
I like this solution because I can have the need filled without a central server. I use old-fashioned offline backups for my low-churn bulk data, and Syncthing for everything else to be eventually consistent everywhere.
If my data was big enough so as to require dedicated storage though, I’d probably go with TrueNAS.
I use NFS for linking VMs and Docker containers to my file server. Haven’t tried it for desktop usage, but I imagine it would work similarly.
I use sshfs.
For a Linux-only, LAN-only shared drive, NFS is probably the easiest you’ll get; it’s made for that use case.
If you want more of a Dropbox/OneDrive/Google Drive experience, Syncthing is really cool too, but that’s a whole other architecture where you have an actual copy on all machines.
If you already know NFS and it works for you, why change it? As long as you’re keeping it between Linux machines on the LAN, I see nothing wrong with NFS.
Isn’t NFS pretty much completely insecure unless you turn on NFSv4 with Kerberos? The fact that that is such a pain in the ass is what keeps me from it. It is fine for read-only, though.
It is, but NFSv3 is extremely easy to configure. You need to edit 1 line in 1 file and it’s ready to go.
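The one file in question is /etc/exports on the server. A minimal sketch, assuming a 192.168.1.0/24 LAN and a /srv/share directory (both hypothetical):

```
# /etc/exports — one line per exported directory
/srv/share  192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing, run exportfs -ra to reload the export table without restarting the NFS server.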
Would be fine for designated storage networks that use IP whitelists.
Other than that, you kind of need user-specific encryption/segregation (which I believe Kerberos does?).
If you’ve got Tailscale, it’ll build WireGuard tunnels directly over the LAN: I actually do this with Samba for Time Machine backups on macOS.
Obviously the big bonus is being able to do the same over the internet without the gaping security holes.
(I used to use split DNS so that my LAN’s router’s DNS server returned the LAN IP, and Tailscale’s DNS server returned the Tailscale IP. But because I’m a privacy geek I decided to make it Tailscale-only.)
TrueNAS is pretty top-notch and offers a variety of storage and protocol options. If you’re at all familiar with a Linux-style OS, it should be pretty easy to work with. Setting up storage comes with a bit of a learning curve, but it’s not too bad. This SAN/NAS OS is polished, performant, and extensible. If you’re not planning on using SMB/Samba, you can most certainly use NFS, or iSCSI if that’s your thing.
For all its flaws and mess, NFS is still pretty good and used in production.
I still use NFS to file share to my VMs because it still significantly outperforms virtiofs, and obviously network is a local bridge so latency is non-existent.
The thing with rsync is that it’s designed to quickly compute the least amount of data transfer to sync over a remote (possibly high latency) link. So when it comes to backups, it’s literally designed to do that easily.
The only cool new alternative I can think of is: use btrfs or ZFS and pipe btrfs/zfs send over SSH into btrfs/zfs receive on the backup host. That’s the most efficient and reliable way to back up, because the filesystem is aware of exactly what changed and can send exactly that set of changes. And obviously all special attributes are carried over: hardlinks, ACLs, SELinux contexts, etc.
The problem with backups over any kind of network share is that if you’re gonna use rsync anyway, the latency will be horrible and it will take forever.
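The ZFS flavor of that pipeline can be sketched as follows, with made-up pool/dataset names and a backup host called “backup” (run as root on both ends; btrfs has an equivalent in btrfs send -p parent | btrfs receive):

```
# initial full replication of a snapshot
zfs snapshot tank/home@monday
zfs send tank/home@monday | ssh backup zfs recv backuppool/home

# later: send only the delta between two snapshots
zfs snapshot tank/home@tuesday
zfs send -i tank/home@monday tank/home@tuesday | ssh backup zfs recv backuppool/home
```

The -i flag is what makes the incremental case cheap: only blocks changed between the two snapshots cross the wire.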
Of course you can also mix multiple things: rsync laptop to server periodically, then mount the server’s backup directory locally so you can easily browse and access older stuff.
Check out Syncthing, which can sync a folder of your choice across all 3 devices.
[edit] oops, just saw you don’t plan on using it
In that case, if you use KDE, you can use Dolphin to set up network drives to your local network machines through SSH
NFS is still useful. We use it in production systems now. If it ain’t broke, don’t fix it.
And if you have a dedicated system for this, I’d look into TrueNAS Scale.
TrueNAS Scale works well as long as you don’t want any Docker containers on it. Once you want to run Docker images, it is easier to install a VM on TrueNAS and run Docker from there than it is to try to set up custom “Apps”.
Wut? I’ve got a bunch of dockerhub images running on a scale box
It is doable, but it is a pain if the container requires any special config like persistent storage. Getting nginx up and running for mTLS was especially annoying.
NFS is still the standard. We’re slowly seeing better adoption of VFS for things like hypervisors.
Otherwise something like SFTPgo or Copyparty if you want a solution that supports pretty much every protocol.
I would say SMB is more the standard. It is natively supported in Linux and works a bit better for file shares.
NFS is better for server style workloads
If it’s for backup, zfs and btrfs can send incremental diffs quite efficiently (but of course you’ll have to use those on both ends).
Otherwise, both NFS and SMB are certainly viable.
I tried both, but TBH I ended up just using SSHFS because I don’t care about becoming an NFS/SMB admin.
NFS and SMB are easy enough to setup, but then when you try to do user-level authentication… they aren’t as easy anymore.
Since I’m already managing SSH keys all over my machines, I feel like SSHFS makes much more sense for me.
I think ZFS send/receive requires root, which can be an issue for security.
Stick with NFS, and use e.g. rsync for backup. Or subversion, if you want to be super-safe.