

Beyond not having to trust the privacy of stock firmware, OpenWRT provides a lot of useful features for self-hosting, like local DNS for your services and a feature-rich firewall to, for example, block devices you don’t trust from phoning home.


I just spun up Forgejo and pushed up all my repos by hand because I’m lazy.


Yeah, either a rigged primary or no primary. I assume they’re planning to pick Patrick Bateman (a.k.a. Newsom) for us next time one way or another while preaching at us that “democracy is on the ballot”.
Just finished watching The Dinosaurs series, narrated by Morgan Freeman. I enjoyed the series overall, though I do find it difficult to suspend my disbelief and stop wondering what shit they completely made up, what has a firm scientific basis, and the extent to which the current understanding will be laughable in 20 years.


Postman never appealed to me for these exact reasons, and I usually just use curl, but this looks like a great option.


I have a TP-Link router with OpenWRT and use it to make local DNS entries for my services, like jellyfin.lan and forgejo.lan. I’m also running k3s, which comes with Traefik as a built-in reverse proxy.
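For anyone curious, entries like those can be set up through dnsmasq on OpenWRT. A minimal sketch (the hostnames match my setup above, but the IP is a placeholder for your server):

```shell
# Point local hostnames at the server running the services.
# dnsmasq's "address" option answers these names for every LAN client.
uci add_list dhcp.@dnsmasq[0].address='/jellyfin.lan/192.168.1.10'
uci add_list dhcp.@dnsmasq[0].address='/forgejo.lan/192.168.1.10'
uci commit dhcp
/etc/init.d/dnsmasq restart
```

The same thing can be done in the LuCI web UI under DHCP and DNS, but the uci commands are easier to share.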


I run my VPN via OpenWRT, with per-device rules that route traffic through either the WAN or the VPN interface. If the VPN is not working, there’s simply no outbound traffic. It’s more reliable than a kill switch.
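In practice this is usually done with the pbr package, but the underlying idea can be sketched with plain iproute2 (interface name, table number, and device IP below are placeholders):

```shell
# Give VPN-only devices their own routing table whose only default
# route is the VPN interface (here a hypothetical WireGuard 'wg0').
ip route add default dev wg0 table 100
ip rule add from 192.168.1.50 lookup 100
# Table 100 has no WAN route at all, so if the VPN is down, packets
# from this device go nowhere - no separate kill switch needed.
```

The reliability comes from the routing table itself: there is no fallback path to fail open.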


I have a router with OpenWRT, which has great firewall capabilities that I use to block specific devices from having internet access while allowing them to connect to my local network. It’s a useful solution for any device you want to connect to local services but don’t trust not to phone home.
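A rough sketch of what such a rule looks like in OpenWRT’s uci firewall syntax (rule name and IP are placeholders for the device you want to isolate):

```shell
# Reject traffic from one LAN device toward the WAN zone, while
# leaving its access to the local network untouched.
uci add firewall rule
uci set firewall.@rule[-1].name='Block-IoT-WAN'
uci set firewall.@rule[-1].src='lan'
uci set firewall.@rule[-1].src_ip='192.168.1.60'
uci set firewall.@rule[-1].dest='wan'
uci set firewall.@rule[-1].proto='all'
uci set firewall.@rule[-1].target='REJECT'
uci commit firewall
/etc/init.d/firewall restart
```

Using REJECT instead of DROP makes the device fail fast instead of timing out when it tries to phone home.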


deleted by creator
I asked a friend who owns a small plane whether it saves money vs. flying commercial, and the answer was no way, it’s for the freedom, convenience, and love of aviation. Realistically, self-hosting is the same, albeit a lot cheaper.
That being said, I wouldn’t mind having a runway in my back yard with a steel building to store a small plane, although the stakes are lower if you forget to maintain your server.
I self-host Forgejo and use its issues for this purpose, though it’s probably too simplistic based on your description.
The main thing that has stopped me from running models like this so far is VRAM. My server has an RTX 4060 with 8GB, and I’m not sure that can reasonably run a model like this.
Edit:
This calculator seems pretty useful: https://apxml.com/tools/vram-calculator
According to this, I can run Qwen3 14B with a 4-bit quant and 15-20% CPU/NVMe offloading and get 41 tokens/s. It seems 4-bit quantization reduces accuracy by 5-15%.
The calculator even says I can run the flagship model with 100% NVMe offloading and get 4 tokens/s.
I didn’t realize NVMe offloading was even a thing, and I’m not sure whether it’s actually supported or works well in practice. If so, it’s a game changer.
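A quick back-of-envelope check (my own arithmetic, not the calculator’s) shows why offloading is needed at all: at 4-bit, the weights of a 14B model alone are about 7 GB, leaving almost nothing of the 8GB card for KV cache and activations.

```shell
# Weights-only memory for a 14B model at 4-bit quantization.
# Real usage is higher: KV cache, activations, and runtime overhead add more.
awk 'BEGIN {
  params = 14e9          # parameter count
  bytes_per_param = 0.5  # 4-bit quant = half a byte per weight
  printf "%.1f GB\n", params * bytes_per_param / 1e9
}'
# prints: 7.0 GB
```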
Edit:
The llama.cpp docs do mention that models are memory-mapped by default and loaded into memory as needed. I’m not sure if that means a MoE model like Qwen3 235B can run with 8GB of VRAM and 16GB of RAM, albeit at a speed an order of magnitude slower, like the calculator suggests is possible.
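For context, this is the kind of invocation I mean (the model filename and layer count are placeholders, not a tested configuration):

```shell
# Keep some layers on the 8GB GPU via -ngl; the rest of the
# memory-mapped GGUF stays in system RAM / page cache on NVMe.
./llama-cli -m qwen3-14b-q4_k_m.gguf -ngl 20 -p "Hello"
```

Tuning `-ngl` up or down trades VRAM usage against speed, which is essentially the offloading split the calculator is modeling.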


If that happened, I’m sure there would be more motivation to clean up a lot of the rough edges, which would benefit the community as a whole.


OpenImageDenoise is included in Blender for denoising Cycles renders, and it works quite well for that, IMO.
I had like 10 repos and nothing of much value in the DB, so it was quick to create the repos and push them up.