A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

    • Honytawk@discuss.tchncs.de · 8 days ago

      It is only hidden because too many non-programmers become hysterical whenever AI is mentioned.

      In a rational world, it wouldn’t need to be hidden.

      • ClamDrinker@lemmy.world · 7 days ago

        This, ten times over. It’s why the online discourse around AI is often so one sided. Anyone walking into a room where people are all nodding along to the same shallow, unnuanced statements, and throwing stones at anyone who points that out or shares an experience that contradicts the group’s, even in complete good faith, isn’t going to engage for long. So the discussion never turns nuanced, because everyone interested in nuance has been ousted.

        And it sucks, because there are real harms in AI that we need to guard against, and that requires widespread support. But the hostility and closed-minded discussions just cause people to tune out, and even to become more open minded towards AI as a reaction to the closed-mindedness.

    • super_user_do@feddit.it · 8 days ago

      It is unreliable if unsupervised, of course. Microsoft and all those big corpos are vibecoding the whole thing, which is why AI has gotten a bad reputation in the community despite being objectively useful. Using AI to code ≠ vibecoding.