I just pushed v22 of my project: a local AI companion for Radarr that goes beyond generic genre or TMDb lists.

This isn’t “yet another recommender”. It’s a personal taste explorer that actually gets the vibe you describe in natural language and builds recommendations starting from your existing library.

Key highlights from a real recent run:

  • Command: --mood "dystopian films like Idiocracy, Gattaca or In Time"
  • Output: Metropolis (1927), V for Vendetta, Children of Men, Brazil (1985), Minority Report, Dark City, Equilibrium, Upgrade, The Road… → oppressive/surveillance/inequality/societal critique atmosphere, not just “dark sci-fi”.

How it works:

  • Starts by sampling random movies from your Radarr collection (or uses your mood/like/saga input).
  • Asks a local Ollama LLM (e.g. mistral-small:22b) for 25 thematic suggestions based on atmosphere/vibe.
  • Validates each via OMDb (IMDb rating, genres, plot, director, cast…).
  • Scores intelligently: IMDb rating + genre match + director/actor bonus + plot embedding similarity (cosine on Ollama embeddings).
  • Adds the top ones directly to Radarr (with confirmation: all / one-by-one / no).
  • Persistent blacklist to avoid repeats.
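The scoring step can be sketched roughly like this; the weights, field names, and profile structure below are illustrative guesses, not the script's actual values:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors (e.g. from Ollama).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def score(candidate, profile):
    # Blend IMDb rating, genre overlap, a director bonus, and plot-embedding
    # similarity. Weights here are hypothetical, for illustration only.
    s = candidate["imdb_rating"] / 10.0                       # normalize to 0..1
    overlap = set(candidate["genres"]) & set(profile["genres"])
    s += 0.5 * len(overlap) / max(len(profile["genres"]), 1)  # genre match
    if candidate["director"] in profile["directors"]:
        s += 0.3                                              # director bonus
    s += cosine(candidate["plot_emb"], profile["mood_emb"])   # vibe similarity
    return s
```

Candidates below a minimum score, or already on the persistent blacklist, would then be filtered out before anything is sent to Radarr.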

Different modes:

  • --mood "dark psychological thrillers with unreliable narrators" (any vibe you describe)
  • --like "Parasite" --mood "mind-bending class warfare" (or just --like "Whiplash")
  • --saga (auto-detects incomplete sagas in your library and suggests missing entries) or --saga "Star Wars"
  • --director "Kubrick" / --actor "De Niro" / --cast "Pacino De Niro" (movies where they co-star)
  • --analyze → full library audit + gaps (e.g. “You’re missing Kurosawa classics and French New Wave”)
  • --watchlist → import from Letterboxd/IMDb
  • --auto → perfect for daily cron / Task Scheduler (wake up to 10 fresh additions)
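For the --auto mode, a daily crontab entry could look like this (the install path and script name are assumptions; adjust to wherever you cloned the repo):

```
# m h dom mon dow  command
0 6 * * * /usr/bin/python3 /opt/radarr-movie-recommender/recommender.py --auto >> ~/recommender.log 2>&1
```

On Windows, the equivalent is a daily Task Scheduler task running the same command.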

Standout features:

  • 100% local + privacy-first (Ollama + free OMDb API only)
  • No cloud AI, no tracking
  • Colored console output, logs, stats, HTML/CSV exports
  • Synopsis preview before adding
  • Configurable quality profile, min IMDb, availability filters
  • Works on Windows, Linux, Mac

GitHub (clean single-file Python script + detailed README):
https://github.com/nikodindon/radarr-movie-recommender

If you’re tired of generic Discover lists, Netflix-style randomness, or manual hunting, give it a spin. The vibe/mood mode and auto saga completion really change how you expand your collection.

Let me know what you think, any weird mood examples you’d like to test, or features you’d want added!

  • irmadlad@lemmy.world · 19 days ago

    FWIW, I’m not against using AI as an assistant for coding (I do it too, using Claude and Vercel as assistants) just as long as the code is reviewed and understood in full* by the dev before publishing. *my emphasis

    A very sane take. I do wish devs would fully disclose this on their GitHub or elsewhere. That way, if the project is seasoned, well starred, and so on, and the dev used AI as an assistant, then the user gets to decide. Given all the criteria are met, I would deploy it.

    I will say that I have observed what seems like a pretty decent uptick in selfhosted apps, and I would be willing to bet a goodly amount of them have, at the very least, used AI in some capacity, if not for most or all of the code. I don’t have any solid evidence to back that up; it just seems that way to me.

    • prettygorgeous@aussie.zone · 18 days ago

      I think the problem is a cyclical one. Some devs are afraid to admit that they used AI to help them code because there’s so much hatred towards using AI to code. But the hatred only grows because some devs are not disclosing that they’ve had help from AI, and it seems like they’re hiding something, which then builds distrust. And of course, that’s not helped by the influx of slop, where an AI has been used and the code has not been reviewed and understood before it’s released.

      I don’t mind more FOSS projects, even if they’re vibe coded, but please PLEASE understand your code IN FULL before releasing it, if only so you can help troubleshoot the bugs people experience when they happen!

      • irmadlad@lemmy.world · 18 days ago

        Some devs are afraid to admit that they used AI to help them code because there’s so much hatred towards using AI to code.

        I would say there is a lot of truth to that statement. The backlash is immediate and punishing. I’ve said before, I think there are a lot of young devs who would like to contribute to the opensource/selfhosting community, but lack the experience.

      • november@piefed.blahaj.zone · 18 days ago

        Some devs are afraid to admit that they used AI to help them code because there’s so much hatred towards using AI to code.

        Cowards. “Some devs” would not survive five minutes in the real world as a queer person.

    • Tim@lemmy.snowgoons.ro · 18 days ago

      Honestly, any developer that isn’t using an LLM as an assistant these days is an idiot and should be fired/shunned as such; it’s got all the rational sense of “I refuse to use compilers and I hand-write my assembly code in vi.”

      (And I speak as someone who has a .emacs file that’s older than most programmers alive today and finally admitted I should stop using csh as my default shell this year.)

      Here’s the disclosure you need: all projects you see have involved AI somewhere, whether the developers like to admit it or not. End of. The genie is out of the bottle, and it’s not going back in. Railing against it really isn’t going to change anything.

      • irmadlad@lemmy.world · 18 days ago

        Here’s the disclosure you need: all projects you see have involved AI somewhere, whether the developers like to admit it or not. End of. The genie is out of the bottle, and it’s not going back in. Railing against it really isn’t going to change anything.

        I’ve said it before: AI is here to stay. It’s not a fad, kind of like when the internet first started to become publicly available and lots of people deemed it a fad. It’s now a global phenomenon and the basis on which we do business daily, minute by minute, globally. I do think that AI needs some heavy governmental regulation. It would be great if we could all play nicely together without involving the government(s). Alas, we don’t seem to be able to do that, and so governments have to step in, unfortunately. The problem with that, imho, is that surveillance capitalism has worked so well that governments also want to take a peek at that data. I have nothing to back up that conspiracy theory; it’s just a feeling I get.

    • hendrik@palaver.p3x.de · 18 days ago (edited)

      Yeah. Maybe it’s time to adopt a new rule in the selfhosted community mandating disclosure, because we’ve gotten several AI-coded projects in the last few days or weeks.

      I just want some say in what I install on my computer. And not be fooled by someone into using their software.

      I mean I know why people deliberately hide it, and say “I built …” when they didn’t. Because otherwise there’s an immediate shitstorm coming in. But deceiving people about the nature of the projects isn’t a proper solution either. And it doesn’t align well with the traditional core values of Free Software. I think a lot of value is lost if honesty (and transparency) isn’t held up anymore within our community.

      • Tim@lemmy.snowgoons.ro · 18 days ago

        Warning, anecdote:

        I was unexpectedly stuck in Asia for the last month (because of the impact of the war), turning an in-person dev conference I was organising into an “in-person except for me” one at a few days’ notice.

        I needed a simple countdown timer/agenda display I could mix into the video with OBS; a simple requirement, so I tried a few from the standard package repos (apt, the Snap Store, that kind of thing).

        None of them worked the way I wanted, or at all. One of them, written in Python, installed about 100 goddamned dependencies (because, Python) and then crashed (because, well, Python).

        So I gave up and asked my local hosted LLM model to write it for me in Rust. In less than 10 minutes I had exactly what I wanted, in a few hundred lines of Rust. And yeah, I did tidy it up and publish it to the snap store as well, because it’s neat and it might help someone else.

        Which is more secure? The couple of hundred lines of Rust written by my LLM, or the Python or Node.js app that the developer pinky-promises was written entirely by human hand, and which downloads half the Internet as dependencies that I absolutely am not going to spend time auditing just to display a goddamned countdown clock in a terminal window?

        The solution to managing untrusted code isn’t asking developers for self-declared purity test results. It’s sandboxing, containers, static analysis: all the stuff you’re already doing with all the code/apps you download, if you’re actually concerned. You are doing those things, right?
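        For instance, a minimal container sandbox for an untrusted CLI tool might look like this (image name and mount path are placeholders):

        ```
        docker run --rm --network=none --read-only --cap-drop=ALL \
          -v "$PWD/config:/config:ro" some/untrusted-app:latest
        ```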