garbage account

  • Asafum@feddit.nl · 4 months ago

    It’s a masterclass in externalities: local communities face the consequences of the resource consumption by the data centers.

    Job loss: we’re all told the “knowledge market” is where you deserve a good salary. AI threatens a lot of that work. Blue collar factory work will go as soon as AI can be properly integrated with nimble robotics that aren’t quite there yet. With how disgusting our society is (in the US), there will be no consideration for the people who can no longer find work.

    Wealth built on theft and gambling: people like Altman become fabulously wealthy with a system that makes 0 profit and has been built off of the stolen works of millions of people.

    Capacity to manipulate: we’ve had enough trouble with bad faith actors on the Internet with real people. Now we’re going to have an endless army of “intelligent” actors that will be weaponized against populations worldwide to secure the position of the ultra wealthy over all of our governments.

    I didn’t have much of a positive outlook on our future before, but now I REALLY don’t, because of this…

    • Lena@gregtech.eu · 4 months ago

      I think the job loss part is a capitalism problem, not just an AI problem.

      If we automate work, the people should get the benefits of the automation, they shouldn’t have to be worried that they won’t have a job.

    • msage@programming.dev · 4 months ago

      Yet AI in the form of LLMs sucks, costs us way too much in global warming, and provides very little concrete value.

      We need a revolution yesterday.

  • ZILtoid1991@lemmy.world · 4 months ago

    AI only looks good if you’re an outsider to the profession. The moment you’re even an amateur, you’ll see all of its faults. It’s just a plagiarizing machine with a built-in contextual search function (is there any AI model that runs as an actual contextual search instead of a wannabe assistant with a flattering personality?). It can make some crappy-looking and weirdly specific clip art, stock music with funny-sounding gimmicks, and buggy code you’d be better off plagiarizing from public-domain-licensed code on Github.

    • boonhet@sopuli.xyz · 4 months ago

      It’s probably from a redditor who is likely white and male. Y’know, self-deprecating humor is pretty common among redditors, just like it is here.

        • boonhet@sopuli.xyz · 4 months ago

          This isn’t a Twitter screenshot, it’s a Facebook one. Note the globe icon, that means a public post on Facebook. He’s also a niche microcelebrity, so the verification kinda makes sense?

          It seems he stopped posting on Twitter after 2024, with 70k posts total - so he must’ve quit cold turkey. No blue check on his Twitter profile.

          There’s also this tweet from him in 2017. I do not think this man is a nazi, or even nazi-adjacent.

          • Duamerthrax@lemmy.world · 4 months ago

            I never said anything about nazis.

            I only know blue check marks from Twitter. There’s really not much difference between the two, though.

            • boonhet@sopuli.xyz · 4 months ago

              …Why is a blue check mark on Twitter bad if not for the fact that paying for it supports a nazi platform? I’m not sure I follow your logic.

              The Meta one isn’t paid, it’s just something you’re given if they can verify you’re who you say you are.

              • Duamerthrax@lemmy.world · 4 months ago

                Mark Zuckerberg isn’t that much different from Elon Musk as far as politics go.

                If you want to be verified, you should just have a personal website that you publicly direct people to. Using someone else’s social media website as your primary soap box has always been madness.

                • jdnewmil@lemmy.ca · 4 months ago

                  So you would click accept on my self-signed https website? Want some land in Florida?

      • Electricd@lemmybefree.net · 4 months ago

        just feels wrong

        like it’s making stereotypes feel normal and creating xenophobia or something

        when done humorously, it’s fine, but here it just seems serious

        • boonhet@sopuli.xyz · 4 months ago

          But it is done humorously, is that not an obvious joke?

          Maybe not the funniest joke ever, but definitely not something someone’s saying seriously.

  • Enzy@feddit.nu · 4 months ago

    I don’t hate it, just how it’s being used.

    Then again, proper use of AI, if even achievable, would most likely result in disaster in some way.

    The way “AI” is marketed today isn’t real AI; it’s just a lazy source-copy-pasting bot made for our convenience.

  • VirtuePacket@lemmy.zip · 4 months ago (edited)

    I don’t hate AI. However, I:

    • Am concerned about the labor displacement it may cause–though I am skeptical it will be as widespread as currently feared. I think many of the companies that have cut workers already will end up regretting it in the medium term.
    • Am convinced that the massive, circular investment in this technology has produced an economic bubble that will burst in the coming years. Because we have so little insight into private credit markets, we don’t know to what degree retail and commercial banks will be exposed, and thus can’t anticipate the potential damage to the broader economy.
    • Am fatigued (but unsurprised) that the US government is not considering thoughtful regulation that anticipates the disruption that AI is likely to cause.
    • Am cognizant of its current limitations.
    • Do not currently believe that AGI is imminent or even guaranteed. I think elites peddling this notion may be captured by highly motivated reasoning. In some cases, it seems like a bit of a belief system.
  • MehBlah@lemmy.world · 4 months ago

    I’m still waiting for it to appear, and then let’s ask them how they like it. It’s not like the garbage we have now is really AI.

  • Korhaka@sopuli.xyz · 4 months ago

    I don’t hate it, I hate how companies are forcing it in regardless of how stupid it is for the task.

  • drhodl@lemmy.world · 4 months ago

    I don’t hate AI. I just hate the untrustworthy rich fucks who are forcing it down everyone’s throats.

  • finitebanjo@lemmy.world · 4 months ago

    I’ve seen it successfully perform exactly one task without causing more harm or creating liability for the people using it:

    Misinformation campaigns.

    And that’s exactly how the AI companies are using it to grow exponentially, lying about both its costs and its capabilities.

    It’s weird that this is somehow an unpopular opinion these days but I don’t like being lied to.

    • AxExRx@lemmy.world · 4 months ago

      I’ve been hearing the claim occasionally for the last several years that we’ve moved into the “post-truth” age. AI has kind of cemented that for me.

  • Gorilladrums@lemmy.world · 4 months ago (edited)

    I don’t hate AI. LLMs are incredibly powerful tools with a wide range of uses, and the technology itself is very exciting and promising.

    What I do hate is how they’re being used by large corporations. A small handful of big tech companies (Google, Microsoft, Facebook, OpenAI, etc) decided to take this technology and pursue it in the greediest ways possible:

    1. They took open source code, built on top of it, and closed it off so they could sell it

    2. They scraped all the data on the internet without consent and used it to train their models

    3. They made their models generate stuff based on copyrighted works without permission or giving credit, thus basically stealing the content

    4. But that wasn’t enough for them so they decided to train their models on every interaction you have with their LLM services, so all your private conversations are stored and recycled even if you don’t want that to happen

    5. They use the data from the conversations that you’ve had with the chatbots to build customer profiles about you that they sell to advertisers so they could send you hoards of personalized ads

    6. They started integrating their LLMs into their other products as much as they could so they could artificially increase their stock prices

    7. They aggressively campaign for other companies to buy and integrate their models so both parties could artificially increase their stock prices

    8. In order to meet their artificially induced demand, they’re sucking the life out of the electricity grid, which is screwing over everybody else

    9. They’re also taking over the hardware industry and killing off consumer electronics, since it’s more profitable for manufacturers to sell to AI companies than to consumers

    10. They’re openly bribing, lobbying, and campaigning governments to give them grants, tax breaks, and keep regulations at a minimum so they could do whatever they want and have society pay for the privilege

    11. They’re using these LLMs to cut as many jobs as possible so they could penny-pinch just a little more, hence the recent massive waves of layoffs. This is being done even if the LLM replacements perform far worse than humans.

    12. All of this is being done with zero regard to the environmental damage caused by them with their monstrous data centers and electricity consumption

    13. All of this is being done with zero regard to the harmful impacts caused to people and society. These LLMs frequently lie and spread misinformation, they feed into delusions and bad habits of mentally unwell people, and they’re causing great damage to schools since students could use these models to easily cheat and nothing can be done about it

    When you put all of this together, then it’s easy to understand why people hate AI. This is what people oppose, and rightfully so. These corporations created a massive bubble and put our economy at risk of a major recession, they’re destabilizing our infrastructure, destroying our environment, they’re corrupting our government, they’re forcing tens of thousands of people into dire financial situations by laying them off, they’re eroding our privacy and rights, and they’re harming our mental health… and for what? I’ll tell you, all of this is done so a few greedy billionaires could squeeze a few more dollars out of everything so they could buy their 5th yacht, 9th private jet, or 7th McMansion. Fuck them all.

    • bstix@feddit.dk · 4 months ago

      I think it’s interesting that they can steal all this stuff and yet be unable to figure out how to sell it.

      All the money, all the data, all the energy, all the computer power, all the political control. And yet, they can’t manage to sell a single dollar worth of their product.

      Of course it’ll be shittified by commercials in and out of the content, and of course that will lead to paid models, but it’s not going to be very profitable, because nobody _really_ needs bad intelligence. “Oh, it costs something? No thanks then, we already have intelligence at home.”

      Yes yes, the users are the product, yes, but who then is buying that user data? Advertisers and stuff, yeah yeah, but at what point does any of this manifest itself as a single fucking sales transaction where a real person pays a company for a real product? Fucking never.

      The whole thing is worthless.

      • rumba@lemmy.zip · 4 months ago

        “they can’t manage to sell a single dollar worth of their product.”

        Ohh don’t worry, that’s not how this works :)

        We’re still in the venture capital stage. The companies are circle-jerking, paying each other off with venture funds and stock splits. They don’t need to be making money at this point because they’re already getting everything they ask for.

        Those $50-$200 packages from all the big companies are just there to get people used to the idea. They’re making all their money selling each other useless support chatbots and horrible phone systems, claiming they can reduce their staff by half. Well, they could always have reduced their staff by half; customers have had to deal with shitty wait times for years.

        You’ll pay for AI by the prices of your software rising. Those costs are absorbed and passed on to you as micro-transactions inside your actual subscriptions and payments.

        Once they manage to get AI intertwined in every system out there, they’re free to collude as a market and raise prices slowly. AI will drive software price inflation and hardware shortages that make anyone with a datacenter or enterprise hardware manufacturing capacity very, very rich.

        It could even be that in the end this isn’t a bubble, it’s just a grift and it never pops, but everything becomes so expensive that your average person can barely afford to eat if they expect to use software tools for their work.

    • pulsewidth@lemmy.world · 4 months ago

      When people say “I fucking hate AI”, 99% of the time they mean “I fucking hate AI™©®”. They don’t mean the technology behind it.

      To add to your good points: I’m a CS grad who studied neural networks and machine learning years back, and every time I read some idiot claiming something like “this scientific breakthrough has got scientists wondering if we’re on the cusp of creating a new species of superintelligence” or “90% of jobs will be obsolete in five years”, it annoys me, because it’s not real, and it’s always someone selling something. Today’s AI is the same tech they’ve been working on for 30+ years and incrementally building upon, but as Moore’s Law has marched on, we now have the storage pools and computing power to run very advanced models and networks. There is no magic breakthrough, just hype.

      The recent advancements are all driven by the $1500 billion spent on grabbing as many resources as they could - all because some idiots convinced them it’s the next gold rush. What has that $1500 bil got us? Machines that can answer general questions correctly around 40% of the time, plagiarize art for memes, create shallow corporate content that nobody wants, and write some half-decent code cobbled together from StackOverflow and public GitHub repos.

      What a fucking waste of resources.

      What’s real is the social impacts, the educational impacts, the environmental impacts, the effect on artists and others who have had their work stolen for training, and the usability of the Internet (search is fucked now) - and what will be very real soon is the global recession/depression it causes as businesses realize more and more that it’s not worth the cost to implement or maintain (in all but very few scenarios).

      • devedeset@lemmy.zip · 4 months ago

        I’m really split with it. I’m not a 10x “rockstar” <insert modern buzzword> programmer, but I’m a good programmer. I’ve always worked at small companies with small teams. I can figure out how to parse requirements, choose libraries/architecture/patterns, and develop apps that work.

        Using Copilot has sped my work up by a huge amount. I do have 10 YoE from before Copilot existed, so I can use it to help write good code much faster. The code may not be perfect, but it wouldn’t have been perfect without Copilot either. The thing is, I have enough experience to know when it is leading me down the wrong path, and that still happens pretty often. What it helps with is implementing common patterns, especially with common libraries. It basically automates the “google the library docs/stackoverflow and use the code there as a starting point” aspect of programming. (edit: it also helps a lot with logging, writing tests, and rewriting existing code as long as it isn’t too wacky, and even then you really need to understand the existing code to avoid a mess of bugs)
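        The kind of “common pattern” boilerplate described above - standard-library logging setup, for instance - is exactly what these tools tend to autocomplete well, because it appears near-verbatim in thousands of repos. A hypothetical Python sketch of such a pattern (the `get_logger` helper is illustrative, not from any specific library):

```python
import logging

def get_logger(name: str) -> logging.Logger:
    """Return a configured logger, attaching a handler only once."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid duplicate handlers on repeated calls
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
        )
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

log = get_logger("app")
log.info("service started")
```

        Nothing here is hard, but typing it out by hand for the hundredth time is tedious - which is why autocomplete-style assistance feels so fast for this kind of work.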

        But yeah search is completely fucked now. I don’t know for sure but I would guess stackoverflow use is way down. It does feel like many people are being pigeonholed into using the LLM tools because they are the only things that sort of work. There’s also the vibe coding phenomenon where people without experience will just YOLO out pure tech debt, especially with the latest and greatest languages/libraries/etc where the LLMs don’t work very well because there isn’t enough data.

        • pulsewidth@lemmy.world · 4 months ago

          I agree it’s great at writing and frame-working parts of code and selecting libraries - it definitely has value for coding. $1500 bil value though, I doubt.

          My main concern there lies with the next gen of programmers. Making sense of the output from ChatGPT (and Claude etc.) and adjusting (or correcting) it to suit the scope and requirements of a project takes significant prior programming experience - a skill that will be much harder for junior devs to learn with LLMs doing all the groundwork. It’s essentially the same problem wider education faces now, with kids/teens just using LLMs to write their homework and essays. The consequences will be long-term and significant. In addition (for coding), it’s taking away the entry-level work that junior devs would usually do and then have cleaned up for prod by senior devs - and that’s not theory; the job market for junior programmers is dying already.

        • Metju@lemmy.world · 4 months ago

          LLMs are an okay-ish tool if your code style doesn’t veer from what 99% of the open-source codebase looks like. Use any fringe concept in a language (for example, treating errors as values in languages ridden with exceptions, or using functional concepts in an OOP language) and you will have problems.

          Also, this crap tends to do automated copy-paste, which is especially bad when it skips abstracting away a concept you would have noticed if you were writing the code yourself.

          Source: own experience 😄

          • devedeset@lemmy.zip · 4 months ago

            Totally agree. In my day to day work, I’m not dealing with anything groundbreaking. Everything I want/need to code has already been done.

            If you have a Copilot license and are using the newest Visual Studio, it enables the agentic capabilities by default. It will actually write the code into your files directly. I have not done that and will not do that. I want to see and understand what it is trying to do.

  • Sunflier@lemmy.world · 4 months ago (edited)

    I hate AI because it’s replacing jobs (a.k.a. salaries) without us having a social safety net to make it painless.

    We’ve replaced you with ai

    -CEO

    AI is replacing most of the jobs, and there aren’t enough open positions to be filled by the now unemployed.

    -Economists

    I need food stamps, medical care, housing assistance, and unemployment.

    -Me

    No! Get a job you lazy welfare queen!

    -Politicians

    Where? There aren’t any.

    -Me

    Not my problem! Now, excuse me while I funnel more money to my donors.

    -The same politicians

    • finitebanjo@lemmy.world · 4 months ago

      The good news is, while automation like robot arms is continuing to replace humans, the AI aspect of it has been catastrophic and early adopters are often seen expressing remorse and reverting changes.

  • Rose@slrpnk.net · 4 months ago

    I don’t hate AI (specifically LLMs and image diffusion thingy) as a technology. I don’t hate people who use AI (most of the time).

    I do hate almost every part of AI business, though. Most of the AI stuff is hyped by the most useless “luminaries” of the tech sector who know a good profitable grift when they see one. They have zero regard for the legal and social and environmental implications of their work. They don’t give a damn about the problems they are causing.

    And that’s the great tragedy, really: it’s a whole lot of interesting technology with a lot of great potential applications, and the industry is being run into the ground by idiots chasing an economic bubble that’s going to end disastrously. It’s going to end up with a tech cycle kind of similar to nuclear power: a few prominent disasters, a whole lot of public resentment and backlash, and it’ll take decades until we can start having sensible conversations about it again. If only we’d had a little bit of moderation to begin with!

    The only upside the AI business has had is that it at least pretended to give a damn about open source and open access to data, but at this point it’s painfully obvious that to AI companies this is just a smoke screen to avoid getting sued over copyright concerns - they’d lock everything up as proprietary trade secrets if they could have their way.

    As a software developer, I was at first super excited about genAI stuff, because it obviously cut down the time needed to consult references. Now a lot of tech bosses tell coders to use AI tools even in cases where it makes everyone less productive.

    As an artist and a writer, I find it incredibly sad that genAI didn’t hit the brakes a few years ago. I’ve been saying this for decades: I love a good computerised bullshit generator. Algorithmically generated nonsense is interesting - a great source of inspiration for your ossified brain cells, fertile ground for improvement. Now, however, the AI-generated stuff pretends to be as human-like as possible, and it’s doing a terrible job at it. Tech bros are half-assedly marketing it as a “tool” for artists, while the studio bosses who buy the tech chuckle at that, knowing they’ve found a replacement for the artists. (Want to make genAI tools for artists? Keep the output patently unusable out of the box.)

    • jdnewmil@lemmy.ca · 4 months ago

      The value in LLMs is in the training and the data quality… so it is easy to publish the code and charge for access to the data (DaaS).

    • plenipotentprotogod@lemmy.world · 4 months ago

      I’m hopeful that when the bubble pops it’ll be more like the dot com crash, which is to say that the fallout is mostly of the economic variety rather than the superfund variety. Sure, that’ll still suck in the short term. But it will ideally lead to the big players and VC firms backing away and leaving behind an oversupply of infrastructure and talent that can be soaked up at fire sale prices by the smaller, more responsible companies that are willing to stick out the downturn and do the unglamorous work of developing this technology into something that’s actually sustainable and beneficial to society.

      That’s my naive hope. I do recognize that there’s an unfortunately high probability that things won’t go that way.