With the recent discussions around replacing Spotify with selfhosted services and the possibilities of obtaining the music itself, I’ve finally been setting up Navidrome. I had to do quite a bit of reorganization of my existing collection (beets helped a ton), but now it’s in a neatly organized structure and I’m enjoying it everywhere. I get most of my stuff from Bandcamp, but I have a big catalog from when I still had a large physical collection.

I’m also still working on my Docker quasi-GitOps stack. I’ve cleaned up my compose files and put the secrets in env files where I hadn’t already, checked them into my new Forgejo instance, and (mostly) configured Renovate. Komodo is about to get productive, but I haven’t found the time yet. I also need to figure out how to check in secrets in a secure way. I know some approaches, but I haven’t tried them with Komodo yet. This close to my fully automated update-on-merge compose stacks!
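
For reference, the env-file pattern looks roughly like this (service and variable names are placeholder assumptions), so the compose file itself stays safe to commit:

```yaml
# compose.yaml -- safe to check into git
services:
  app:
    image: example/app:1.0   # placeholder image
    restart: unless-stopped
    env_file:
      - app.env              # git-ignored; holds the actual secrets

# app.env (never committed) would contain e.g.:
# DB_PASSWORD=changeme
```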

I’ve also been doing these for quite a while and decided to sometimes post them in !selfhosting@slrpnk.net, to possibly help move things a bit away from the biggest Lemmy instance, even though this community seems perfectly fine as it is.

What’s going on on your servers? Anything you are trying to pursue at the moment?

  • csm10495@sh.itjust.works · 6 months ago

    I have a couple of Pis that run Docker containers, including Pi-hole. The containers have their storage on a centralized share drive.

    I had a power outage and realized they can’t start if they happen to come up before the share drive PC is back up.

    How do people normally do their Docker binds? Optimally, I guess they would be local but sync/back up to the share drive regularly.

    Sort of related question: in Docker Compose I have restart: always, and yet if a container exits successfully or seemingly early in its process (like Pi-hole), it doesn’t restart. Is there an easy way to still have them restart?

    • MangoPenguin@lemmy.blahaj.zone · 6 months ago

      You should be able to modify the docker service to wait until a mount is ready before starting. That would be the standard way to deal with that kind of thing.
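
      On a systemd host, a drop-in like this is one way to do it (the mount path is an assumption):

      ```ini
      # /etc/systemd/system/docker.service.d/wait-for-share.conf
      [Unit]
      # Don't start Docker until the share is mounted, and stop it if the mount goes away
      RequiresMountsFor=/mnt/share
      ```

      Followed by a systemctl daemon-reload.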

      • csm10495@sh.itjust.works · 6 months ago

        What if it’s a network mount inside the container? Doesn’t the mount not happen till the container starts?

        • MangoPenguin@lemmy.blahaj.zone · 6 months ago

          Correct, yeah. You’d still need a way on the host to check that the mount is ready before starting the service, though. Or you could just use a fixed delay.
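
          Something like this could work as the host-side check (path and timing are assumptions):

          ```sh
          #!/bin/sh
          # Block until the share is actually mounted before starting containers
          until mountpoint -q /mnt/share; do
              sleep 5
          done
          ```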

  • machiavellian@lemmy.ml · 6 months ago

    I am at the very beginning of my journey, taking those first baby steps. As I don’t yet understand all the sysadmin stuff, I’m treading rather carefully to avoid mistakes I can’t un-fuck.

    I recently switched to Void on my daily driver, so it has been a bit of a trial getting used to a new OS and configuring it correctly. Nevertheless, it’s been a great learning experience.

    Alongside that I’ve installed OpenWrt on my router and begun to configure it as well (still need to deal with the WireGuard and Unbound config).

    For the actual server I managed to secure an old Dell OptiPlex. In the near future, I plan to flash it with Libreboot and then install Debian or FreeBSD (apparently great ZFS support) on it. Though I still have no idea whether I should use Proxmox, or how I should format my drives (one 500GB SSD and one 4TB HDD) for maximum efficiency and for the possibility of easily upgrading my storage capacity later.

    Once I’m finally past these steps, I plan to self-host music services, as well as a few other basic services. My goal at the moment is to replace Spotify for my whole family. But there’s still a long way to go.

  • Frezik@lemmy.blahaj.zone · 6 months ago

    I’m in planning for upgrading my NAS. It has a 10Gbps fiber connection, and my main workstation does, as well. My goal is to be able to saturate that with both read and write speed. Timeline is 6 to 8 months out.

    Budget is in the range of $2000-3000. Currently doing RAID1 on a pair of 18TB disks. I usually want to double that with each upgrade, but there’s some leeway there.

    I think my best option is 6 NVMe sticks in RAID6. 8TB sticks would give 32TB of usable space (six drives minus two for parity). Not quite double, but close enough.

    I would like easy hot-swap capability. Unfortunately, it looks like the only option for that would be IcyDock, and those are expensive. The other way is to go down to SATA drives, where relatively cheap 2.5" hot-swap bays exist, but a SATA setup that can saturate 10Gbps writes with reasonable redundancy would be even more expensive.

    Need a motherboard that has a pair of x16 slots; one needs to hold a GPU for Jellyfin transcoding. Also need an x4 slot for a 10Gbps SFP+ NIC. With two NVMe slots on the mobo, this should be workable without going to Threadripper or Epyc chips and such; idle power consumption sucks on those. Totally giving up on hot swap here, though.

    There are 8TB NVMe sticks priced close to this budget range. I had found one Samsung stick that, according to Amazon price trackers, was around $300 in the recent past (can’t seem to find it now). A lot will depend on tariffs, of course.

    One surprise is that a Kioxia CD6-R U.3 drive at 15.36TB goes for $1,150. Four drives in RAID10 would be a workable space upgrade (about 30.7TB usable). That setup would be over budget, but not by as much as I would have expected. Refurb deals or future price movement might put it in range.

  • confusedpuppy@lemmy.dbzer0.com · 6 months ago

    I feel like my little Pi server is set up nicely now. At least I’m at the point where I’m not concerned about technically maintaining it. It’s as secure as I want it to be and I’ve tweaked my maintenance scripts slightly to avoid any unexpected issues.

    I tried installing Snikket, but I couldn’t figure out how to get it to work with my Caddyfile using my current wildcard-domain cert configuration. I’ll try again another time when I’m motivated again. It’s a low priority for me.

    The last changes I made were adding logs and making them accessible to myself. So far they are all boring and predictable, which is good news. It’s also nice to see that I’m the only person accessing the server; the bots haven’t found my little corner of the internet yet.

    Right now I’m taking a break from self-hosted stuff to work on my gardens and two artsy projects: a wooden carving for a friend’s birthday and an overly complicated shell script that has no real purpose. I’ve learned lots from it already, though, so it’s not a complete waste of time.

      • confusedpuppy@lemmy.dbzer0.com · 6 months ago

        Since my logs barely move, I just made aliases to where the logs are, so it’s quick to display and scan them in the terminal. I’m basically just viewing the system logs, the fail2ban log, and Caddy’s log, so it’s fairly quick and simple for me.
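
        Something like this, for example (log paths are assumptions):

        ```sh
        # `less +G` jumps straight to the end of the file
        alias syslog='less +G /var/log/syslog'
        alias f2b='less +G /var/log/fail2ban.log'
        alias caddylog='less +G /var/log/caddy/access.log'
        ```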

        The only change I’d like to make is to Caddy’s log output, so each entry isn’t one long single line. I’ll have to do a bit more reading on that so I know what information I want to keep and how I want to visually organize it. At least for the moment, I am familiarising myself with what I am looking at and slowly figuring out what information is relevant to me.
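
        Since Caddy’s access log is JSON by default, one low-effort stopgap is pretty-printing it on the way in (assuming jq is installed and the default log location):

        ```sh
        jq . /var/log/caddy/access.log | less
        ```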

        I like to keep my systems as simple and lean as possible, which seems to strongly reflect my general approach to life. I find that kind of interesting.

        • tofu@lemmy.nocturnal.garden (OP) · 6 months ago

          If you like, check out GoAccess for the Caddy files. You can watch them through that instead of less/cat/whatever and get a nice dashboard. It helps with getting a better overview, IMHO.
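
          Something along these lines (assuming the default JSON access log and a GoAccess build recent enough to know the CADDY log format):

          ```sh
          goaccess /var/log/caddy/access.log --log-format=CADDY -o report.html
          ```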

          • confusedpuppy@lemmy.dbzer0.com · 6 months ago

            It looks interesting and seems like it would be easy to set up. I’ll play with it and see how I like it. Thanks for the suggestion.

        • Jason2357@lemmy.ca · 6 months ago

          I understand that CoW filesystems can do snapshots at “instantaneous” points in time, and KVM snapshots RAM state as well, but I still worry that a database could be backed up at just the wrong time and be in an inconsistent state on restore. I’d rather do less frequent backups of a stopped VM and be more confident it will restore and boot correctly. Maybe I’m a curmudgeon?
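
          The cautious version is simple enough that I don’t mind it being infrequent; a sketch, assuming libvirt and a ZFS-backed VM (all names are placeholders):

          ```sh
          virsh shutdown myvm                              # ask the guest to power off cleanly
          while virsh domstate myvm | grep -q running; do sleep 2; done
          zfs snapshot tank/vms/myvm@backup-$(date +%F)    # snapshot the quiesced disk
          virsh start myvm
          ```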

  • Kaldo@fedia.io · 6 months ago

    Just got a domain and started exposing my local Jellyfin through Cloudflare, mostly wanting to listen to my music on my phone when I’m outside too.

    I followed some guides that should make it fine with Cloudflare’s policy. Video didn’t work when I tried it, but otherwise it’s been fun, despite me feeling like I’m walking on eggshells all the time. I guess time will tell if it holds up.

    • Batman@lemmy.world · 6 months ago

      Some things that have caused issues for me:

      File permissions

      Video/audio format (H.264/AAC stereo is best for compatibility)

      • Kaldo@fedia.io · 6 months ago

        Oh, file permissions are a nightmare for me. I thought I had managed to get it sorted, but after I installed Lidarr, it alone suddenly can’t move files out of the download location anymore. I even tried to chmod 777 the data folders, and nothing. I don’t think I quite have a grasp on how those work with Docker on Linux yet; it seems like those *arr services also have some internal users, and I don’t get why they would.

        What do you mean by the formats, is this referring to transcoding? I kept those on defaults AFAIK.

        • raldone01@lemmy.world · 6 months ago

          In Linux, user and group names don’t matter; only the GID and UID matter. Think of user and group names as human-friendly labels, like domain names are for IPs.

          In Docker, when you use mounts, all the containers that want to share data must agree on the GIDs and UIDs.

          In rootless Docker and Podman, subuids and subgids make things a little more complicated, since IDs get mapped between host and container, but it’s still the IDs that matter.
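
          A quick way to see what’s actually going on (container name and path are assumptions):

          ```sh
          docker exec lidarr id    # numeric uid/gid the container runs as
          ls -n /data/downloads    # numeric uid/gid owning the shared files
          ```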

          • Kaldo@fedia.io · 6 months ago

            I have one .env file with UID/GID 1000 set for all Docker services in the docker-compose, so in theory that should be enough, but it seems it rarely is…

        • h0rnman@lemmy.dbzer0.com · 6 months ago

          Could be that Lidarr is setting its own permissions for downloaded stuff (look for something like dmask or fmask in the Docker config). You might also need to chmod -R so it hits all subfolders. If you have a file or directory mask option, remember that they’re inverse, so instead of 777, you’d do 000 for rwxrwxrwx.

          • Kaldo@fedia.io · 6 months ago

            You might be onto something. Lidarr does have a UMASK=002 setting in the .env file. I think the issue is that SABnzbd puts the files there and then Lidarr can’t read them, so what exactly is the expected permission setting in this case? If I set it to 000 for Lidarr, won’t other services then be unable to add files there?

            I always feel so dumb when it comes to these things, since in my head it’s something that should be pretty straightforward and simple. Why can’t they all just use the same user and share the same permissions within this folder hierarchy…

            • h0rnman@lemmy.dbzer0.com · 6 months ago

              Sab might have its own mask settings; it would be worth looking at. The same thing applies here: subtract the mask from 7 to get the real permissions. In this case, mask 002 translates into 775. This gives the UID and GID that the container is running under (probably defined in a variable somewhere) Read/Write/Execute, but anyone else only Read/Execute. The “anyone else” would be any account on the system (regardless of access method) that didn’t match the actual UID or GID value.
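
              You can sanity-check the arithmetic in a shell (666 is the default request for plain files, 777 for directories):

              ```sh
              umask 002
              mkdir demo && touch demo/file
              ls -ld demo demo/file   # demo -> rwxrwxr-x (775), demo/file -> rw-rw-r-- (664)
              ```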

  • sem@lemmy.blahaj.zone · 6 months ago

    I’m setting up a YunoHost machine for my brother as a birthday present. I got him a domain good for 10 years, and installed Nextcloud and Jellyfin with some home videos digitized from our parents’ VHS tapes.

    • tofu@lemmy.nocturnal.garden (OP) · 6 months ago

      It’s a tool that checks and corrects the metadata of your music collection. You can also use it to import music into your collection (it will put everything in the right folders, etc.).

      It does require some manual intervention now and then, though (“Do you really want to apply this despite some discrepancies?” “Choose which of these albums it really is.” Etc.).
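
      Day-to-day it’s basically just (paths are assumptions):

      ```sh
      beet import ~/incoming/music   # interactive matching, tagging, and filing into the library
      beet ls artist:nirvana         # query the organized library afterwards
      ```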

  • WbrJr@lemmy.ml · 6 months ago

    So I am in a vicious cycle. I start doing something, notice there is a better way, change my setup, and restart. From just Ubuntu Server, I moved to Proxmox. From documenting everything manually in Joplin, I am now using Ansible. I started with WireGuard, then Tailscale with self-hosted Headscale. I try to get my setup right on the first try, which I notice is stupid even as I write this; it just hinders my progress. I think I should rather get things up and running as fast as possible (and securely, of course) to make progress and maybe fail fast. And I like all the changes I made, I think they were the right choices, but it’s a bit tiring. And I like Ansible, I just have the urge to automate absolutely everything so I can redeploy everything right after installing Proxmox, which is not necessary at all at this stage, idk :D Maybe someone has some tips on how to overcome perfectionism?

    • tofu@lemmy.nocturnal.garden (OP) · 6 months ago

      For me, tinkering is part of the process and I’m enjoying it. Deciding to do something differently and changing a lot of stuff every now and then is fine. What’s annoying is being in the middle of such a process and then running out of (free) time. The next time I look at it, I’ve forgotten half of it if it’s not finished and documented.

  • oddlyqueer@lemmy.ml · 6 months ago

    I finally set up Jellyfin and Sonarr! I’ve been using Plex and manually managing torrents for a while now; I recently found the *arr services and they are very impressive. Got the Jackett - Sonarr - Jellyfin - Nginx stack set up, and am now working on getting SSL + DynDNS so I can make it available remotely. I also accidentally blasted my ratio downloading a bunch of TV shows all at once, so I’ve gotta seed for a bit before I fill it out more. But so far the setup has been pleasantly breezy for how complex it is ❤️

      • oddlyqueer@lemmy.ml · 6 months ago

        Good question; I did not know what Emby was until just now. I will explore it some more. I’m having issues getting the Jellyfin iOS/Android clients to connect consistently to my server, so I might ultimately switch to it instead / run it in parallel, but I’m leery of freemium solutions.

        • AtariDump@lemmy.world · 6 months ago

          I get that; I’ve tried Jellyfin and it’s just not (IMHO) mature enough as a Plex replacement. Emby comes pretty close.

          Note: I’m a Plex lifetime subscriber, Emby free user, and Jellyfin user.

          • oddlyqueer@lemmy.ml · 6 months ago

            Nice, I appreciate the analysis. I’m still early enough on with Jellyfin that I’m willing to ascribe every issue to user error, but I think I see what you mean. I keep telling myself that I will contribute to a large multi-dev OSS project at some point and still never have; contributing code in public is still kinda nerve-wracking. Maybe if I have a selfish enough reason to fix something, I’ll finally push through that 😆

  • async_amuro@lemmy.zip · 6 months ago

    Just ordered a used HP EliteDesk 800 G3 SFF (3.6GHz Intel Core i7-7700, 8GB DDR4 RAM, 256GB SSD) off eBay to replace my Apple Mac mini “Core i7” 2.3 (Late 2012/Server). Hoping to put 32GB of RAM in it, a 1TB NVMe boot drive, and maybe a 3.5” HDD for media instead of using an external drive. Might move to NixOS (I’d like to learn how to administer Nix, even though it’s very complicated sometimes) and Podman, instead of using Proxmox with Docker Debian VMs and LXC containers.

    Any advice and guidance appreciated!

    • interdimensionalmeme@lemmy.ml · 6 months ago

      I got two of those for $100 USD for the purpose of hosting OpenWrt in Proxmox LXC containers. One thing I noticed is they have no cooling. I put a 10GbE Mellanox card in one plus a very low-end Radeon GPU, and it gets quite hot in there. My recommendation: instead of trying to embiggen it as much as possible by putting in two more sticks of RAM and the biggest CPU, just buy another one. The performance boost per dollar isn’t as good as the capacity of a second, third, or fourth machine.

      • async_amuro@lemmy.zip · 6 months ago

        Thanks for the recommendation. I got this one for just over $100 after tax. Space is an issue for me, so more machines isn’t the best option, and I can always keep the Mac mini chugging if needed. I’ll probably only do the HDD/SSD and RAM upgrades, but it’s definitely worth keeping in mind if I throw a new NIC or GPU in it. I am thinking of putting a Noctua fan on the CPU cooler to keep it quieter and cooler!

  • fruitycoder@sh.itjust.works · 6 months ago

    H A R V E S T E R

    Lol

    But honestly, I got all of my nodes (some new hardware, some mini PCs, some old laptops, some e-waste servers, some Raspberry Pis, a VM off my MacBook) into my Harvester cluster. I got Rancher running as a vcluster as well, so I messed around some with Rancher-provisioned RKE2 clusters too.

    Played some with Nutanix as a VM in that cluster (what a f’ing nightmare, and not on virtual hardware, just Nutanix…). Playing with ESXi now (it’s not happy about my AMD chips so far…). And also my virtual Harvester cluster. Easy so far, but I want to get more ambitious in creating a mock deployment, network and all, so I can test crazier configs without losing a day to rebuilding a cluster via thumb drive again…

    Also managed some risk and got my ISP to let me run dual modems on the same bus, and configured OpenWrt to load-balance between them and, via USB, my WiFi hotspot. Still working with them to try and get more IPs so I can use the 4 total ports on my modem stack to attach to both of my routers.

    I like tinkering with junk, so the other half of my hobby is just risk mitigation (which i also enjoy).

  • greybeard@feddit.online · 6 months ago

    I spent some time last week learning both Ansible and Podman Quadlets. They are a powerful duo, especially for self hosting.

    Ansible is a desired-state system for Linux, letting you define a list of servers and what their configuration should be, like “have podman installed” and “have this file at this location with this content”.

    Podman Quadlets are a system for defining Podman containers as services. You define the container, volumes, and networks in what are essentially systemd unit files.
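
    A minimal quadlet sketch, just to show the shape (the image and port are placeholder assumptions); after a systemctl daemon-reload it shows up as whoami.service:

    ```ini
    # /etc/containers/systemd/whoami.container
    [Container]
    Image=docker.io/traefik/whoami:latest
    PublishPort=8080:80

    [Install]
    WantedBy=multi-user.target
    ```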

    Mixing the two together, I can have my entire podman setup in a format that can be pushed to any server in seconds.
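
    The push itself stays tiny; a sketch of the kind of task I mean (file names are placeholder assumptions):

    ```yaml
    - name: Deploy the whoami quadlet
      ansible.builtin.copy:
        src: whoami.container
        dest: /etc/containers/systemd/whoami.container
        mode: "0644"
    ```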

    And of course everything is text files that git well.

    • theorangeninja@sopuli.xyz · 6 months ago

      I was thinking about this for some time now. Can you link me to some good tutorials about quadlets in particular? Ansible will have to wait for now.

    • shadowtofu@discuss.tchncs.de · 6 months ago

      I did the same last week (and am still in the process of setting up more services for my new server). I have a few VMs (running Fedora CoreOS, with Podman preinstalled), and I use Ansible to push my quadlets, Podman secrets, and static configuration files. Persistent data volumes get mounted using virtiofs from the host system, and the VMs are not supposed to contain any state themselves. The VMs are also provisioned using Ansible.

      Do you use Ansible to automatically restart changed containers after pushing your changes? So far, I just trigger a systemctl daemon-reload but trigger restarts manually (which I guess is fine for development).

      • greybeard@feddit.online · 6 months ago

        I haven’t gotten too far, but right now I’ve got persistent volumes being pushed by NFS from my NAS. I’m using Rocky Linux VMs as my target, but for this use case, Fedora CoreOS should be the same.

        I haven’t yet tried using Ansible to create the VMs, but that would be cool. I know Terraform is designed for that sort of thing, but if Ansible can do it, all the better. I’d love to get to a point where my entire stack is Ansible.

        I don’t yet have Ansible restarting the service, but that should be as simple as adding a few new tasks after the daemon-reload task. What I’d like is for it to only restart when one of the uploaded config files actually changed, to minimize service restarts; Ansible’s notify/handlers mechanism looks like it’s built for exactly that, though I haven’t tried it yet.
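
        A sketch of what I mean (service and file names are assumptions); a handler only fires when the task that notifies it reports “changed”:

        ```yaml
        tasks:
          - name: Push quadlet definition
            ansible.builtin.copy:
              src: myapp.container
              dest: /etc/containers/systemd/myapp.container
            notify: Restart myapp   # queued only if the copy changed the file

        handlers:
          - name: Restart myapp
            ansible.builtin.systemd:
              name: myapp.service
              daemon_reload: true
              state: restarted
        ```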

    • powerofm@lemmy.ca · 6 months ago

      Oh, that’s smart! I just got started with Podman and quadlets. Loving how simple it is to set up a systemd service and even organize multi-pod apps.

  • Jason2357@lemmy.ca · 6 months ago

    For privacy reasons, I have finally fully disabled dynamic DNS updates and closed the last holes in the home firewall, moving to 100% proxying via a VPS for publicly available stuff and a tailnet (Headscale) for everything private. The only real crossover is Nextcloud: mountains of private data, but I want it publicly available for file shares. Fortunately, Nextcloud has a setting to whitelist the IP addresses that are allowed to log in, so I can restrict that to just the non-VPS tailnet addresses. From the public internet, only public shares are accessible.

    I set up an L4 proxy so that the encryption for Nextcloud happens at home and the VPS just passes encrypted packets along. Then it occurred to me that a compromised VPS could easily grab a TLS cert for my Nextcloud subdomain via a regular old HTTP challenge and MITM access to all my files, defeating the point.
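
    That kind of L4 passthrough can be as small as an nginx stream block, just as an illustration (the tailnet IP is a placeholder):

    ```nginx
    stream {
        server {
            listen 443;
            proxy_pass 100.64.0.2:443;   # home server's tailnet address; TLS terminates at home
        }
    }
    ```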

    Then I found a neat hack that effectively disables HTTP-challenge certs for subdomains by requiring a wildcard certificate, which can only be created with a DNS challenge. I was also able to disable all other certificate authorities. Obviously, I have /some/ trust in the VPS I administer (it’s on my tailnet network), but I no longer have the concern that it could easily MITM Nextcloud. https://www.naut.ca/blog/2019/10/19/mitigating-http-mitm-possibilities-with-lets-encrypt/
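
    The trick in the linked post boils down to a CAA record that pins issuance to the DNS-01 challenge, something like (zone-file syntax; the domain is a placeholder):

    ```
    example.com.  IN  CAA  0 issue "letsencrypt.org; validationmethods=dns-01"
    ```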