I am wondering why leftists are generally hostile towards AI. I am not saying this is right or wrong; I would just like someone to list/summarize the reasons.

  • queermunist she/her@lemmy.ml · 1 point · 2 days ago

    It’s not the technology per se, it’s the political economy of LLM training - it’s essentially colonial. They treat all content on the internet like something they can just take, without compensating the people who made it. They treat our electricity and water the same way, as cheap resources they can take. They treat their mechanical turks and trainers the same way, as cheap, disposable labor they can take.

    As for their goals, to essentially industrialize and proletarianize all intellectual labor, I think this will be the foundation of their own destruction. They’re kicking these professionals down the class ladder into the working class by deskilling their labor, and that only heightens the contradictions.

    Shrinking the so-called “middle class” segment in the metropoles undermines the whole basis for colonialism in the first place.

  • NuclearDolphin@lemmy.ml · 2 points · 2 days ago

    I haven’t seen any comments here that adequately address the core of the issue from a leftist perspective, so I will indulge.

    LLMs are fundamentally a tool to empower capital and stack the deck against workers. This is a structural problem, and as such, is one that community efforts like FOSS are ill-equipped to solve.

    Given that training LLMs from scratch requires massive computational power, you must control some means of production: you must own a server farm to scrape publicly accessible data or to collect data from hosting user services, and then another server farm equipped with large arrays of GPUs or TPUs to carry out the training and most types of inference.

    So the proletariat cannot simply wield these tools for their own purposes; they must use the products that capital allows to be made available (i.e. proprietary services or pre-trained “open source” models).

    Then comes the fact that the core market for these “AI” products is not end users; it is capitalists. Capitalists who hope their investments will massively pay off by cutting labor costs on the most expensive portion of the proletariat: engineers, creatives, and analysts.

    Even if “AI” can never truly replace most of these workers, it can convince capitalists and their manager servants to lay off workers, and it can convince workers that their position is more precarious under the threat of replacement, discouraging them from fighting for increased pay, better benefits, and better working conditions.

    As is the case with all private property, profits made by “AI” and LLM models will never reach the workers who built those models, nor the users who provided the training data. They will be repackaged as a product owned by capital and resold to the workers, whether through subscription fees, token pricing, or the forfeiture of private data.

    Make no mistake: once the models are sufficiently advanced, the tools sufficiently embedded into workflows, and the market sufficiently saturated, the rug will be pulled and “enshittification” will begin, forcing workers to pay exorbitant prices for these tools in a market where experience and skills are highly commoditized and increasingly difficult to acquire.

    The cherry on top is that “AI” is the ultimate capital. The promise of “AI” is that capitalists will be able to use the stolen surplus value from workers to eliminate the need for variable capital (i.e. workers) entirely. The end goal is to convert the whole of the proletariat into maximally unskilled labor, i.e. a commodity, so they can be maximally exploited, with the only recourse being a product they control the distribution of. AI was never going to be our savior, as it is built with the intent of being our enslaver.

  • 9tr6gyp3@lemmy.world · 129 points · 12 days ago

    It steals from the copyright holders in order to make corporate AI money without giving back to the creators.

    It uses insane amounts of water and energy to function, and these companies make no effort to throttle demand.

    It gives misleading, misquoted, misinformed, and sometimes just flat out wrong information, but abuses its very confidence-inspiring language skills to pass it off as the correct answer. You HAVE to double check all its work.

    And if you think about it, it doesn’t actually want to lick a lollipop, even if it says it does. It’s not sentient. I repeat, it’s not alive. The current design is a tool at best.

    • NuclearDolphin@lemmy.ml · 1 point · 2 days ago

      “It steals from the copyright holders in order to make corporate AI money without giving back to the creators.”

      This is a liberal criticism, not a leftist one. Leftists do not believe in private property, let alone expanding private property to include ideas as “intellectual property”.

      A leftist framing of this critique is that one of the primary objectives of LLMs is to bring publicly accessible information, i.e. the commons, into the realm of private property, where one must exchange one’s personal data (read: power) to access what already exists in the commons.

      Namely, the scraping required for LLM training can be considered exploitation of workers for profit. Workers or volunteers created, wrote, organized, or edited the content, which is now used to generate a profit that those workers never see a penny of. This content is repackaged and sold back to the workers.

  • BillDaCatt@lemmy.world · 53 points · 12 days ago (edited)

    Can’t speak for anyone else, but here are a few reasons I avoid AI:

    • AI server farms consume a stupid amount of energy. Computers need energy, I get it, but AI’s need for energy is ridiculous.

    • Most implementations of AI seem to happen with little to no input from the people who will interact with it, and often despite their objections.

    • The push for implementing AI seems to be based on the idea that companies might be able to replace some of their workforce, compounded with the fear of being left behind if they don’t do it now.

    • The primary goal of any AI system seems to be collecting information about end users and creating a detailed profile. This information can then be bought and sold without the consent of the person being profiled.

    • Right now, these systems are really bad at what they do. I am happy to wait until most of those bugs are worked out.

    To be clear, I absolutely want a robot assistant, but I do not want someone else to be in control of what it can or cannot do. If I am using it and giving it my trust, there cannot be any third parties trying to monetize that trust.

    • 33550336@lemmy.world (OP) · 12 points · 12 days ago

      Well, I personally also avoid using AI. I just don’t trust the results, and I think using it makes people mentally lazy (besides the other bad things).

  • baggachipz@sh.itjust.works · 49 points · 12 days ago

    Yes, I’m left-leaning, and I dislike what’s currently called “ai” for a lot of the left-leaning (rational) reasons already listed. But I’m a programmer by trade, and the real reason I hate it is that it’s bullshit and a huge scam vehicle. It makes NFTs look like a carnival game. This is the most insane bubble I’ve seen in my 48 years on the planet. It’s worse than the subprime mortgage, “dot bomb”, and crypto scams combined.

    It is, at best, a quasi-useful tool for writing code (though the time it has saved me is mostly offset by the time it’s been wrong and fucked up what I was doing). And this scam will eventually (probably soon) collapse and destroy our economy, and all the normies will be like “how could anybody have known!?” I can see the train coming, and CEOs, politicians, average people, and the entire press insist on partying on the tracks.

  • alexc@lemmy.world · 35 points · 12 days ago

    I see two reasons. Most people who are “left leaning” value both critical thinking and social fairness, and AI subverts both of those traits. First, by definition it bypasses the “figure it out” stage of learning. Second, it ignores long-established laws like copyright to train its models, and its implementation sees people lose their jobs.

    More formally, it’s probably one of the purest forms of capitalism: essentially a slave laborer, with no rights or ability to complain, that further concentrates wealth with the wealthy.

  • ninjabard@lemmy.world · 35 points · 12 days ago (edited)

    It’s generative AI and LLMs that are the issue.

    It makes garbage facsimiles of human work, and the only thing CEOs can see is spending less money so they can hoard more of it. It also puts pressure on resource usage, like water and electricity, whether for cooling the massive data centers or simply through the power draw needed to compute whatever prompt.

    The other main issue is that it is theft, plain and simple. Artists, actors, voice actors, musicians, creators, etc. are at risk of having their jobs stolen by a greedy company that only wants to pay for a thing once or not at all. You can get hired once to read or be photographed/videoed, and then that data can be used to train a digital replacement without your consent. That was one of the driving forces behind the last big actors’ union protests.

    For me, it’s also the lack of critical thinking skills that using things like ChatGPT fosters: the idea that one doesn’t have to put any effort into writing an email, an essay, or even researching something when one can simply type in a prompt and have it spit out mainly incorrect information. Even simple information. I had an AI summary tell me that 440Hz was a higher pitch than 446Hz. I wasn’t even searching for that information. So it wasted energy and my time giving me demonstrably wrong data I had no need for.

    • 33550336@lemmy.world (OP) · 6 points · 12 days ago

      Thank you. Well, personally I do not use ChatGPT and this is one of the reasons why I asked humans this question :)

  • queermunist she/her@lemmy.ml · 31 points · 12 days ago

    I’m against the massive, wasteful data centers that are destroying all climate targets and driving up water/electricity prices in communities. Their current trajectory is putting us on a collision course with civilization collapse.

    If the slop could be generated without these negative externalities I don’t know if I’d be against it. China has actually made huge strides in reducing the power and water footprint of training and usage, so there’s maybe some hope that the slop machines won’t destroy the world. I’m not optimistic, though.

    This seems like a dead-end technology.

  • matelt@feddit.uk · 26 points · 12 days ago

    Personally, I think the environmental impact and the sycophantic responses that take away the need for one to exercise their brain are my two biggest gripes.

    It was a fun novelty at first. I remember my first question to ChatGPT was ‘how to make hamster ice cream’, and I was genuinely surprised that it gave me some frozen fruit recipe along with a plea not to harm hamsters by turning them into ice cream.

    Then it got out of hand very quickly; it got added onto absolutely everything, despite the hallucinations and false facts. The intellectual property issue is also of concern.

  • SoftestSapphic@lemmy.world · 26 points · 12 days ago (edited)

    It’s bad for the environment; in just a few years it has grown to consume a staggering share of the energy produced globally.

    It’s bad for society, automating labor without guaranteeing human needs is really really fucked up and basically kills unlucky people for no good reason.

    It’s bad for productivity: it is confidently wrong just as often as it is right, the quality of the work is always subpar, and it always requires a real person to baby it.

    It’s bad for human development. We created a machine we can ask anything so we never have to think, but the machine is dumber than anyone using it so it just makes us all brain dead.

    It’s complete and not getting better. The tech cannot get better than it is now unless we create a totally different algorithmic approach and start from scratch again.

    It’s an artificial hype bubble that distracts us from real solutions to real problems in the world.

    • 33550336@lemmy.world (OP) · 1 point · 8 days ago

      “It’s an artificial hype bubble that distracts us from real solutions to real problems in the world.”

      Yes, this should be noticed or even emphasized.

  • morphballganon@mtgzone.com · 24 points · 12 days ago

    Modern LLMs, incorrectly labeled as “AI,” are just the modern version of spell-check.

    You know how often people make totally embarrassing mistakes and blame spell-check?

    “AI” is another one of those.

    And it also requires tons of water that could be going to people’s homes.

  • DragonTypeWyvern@midwest.social · 24 points · 11 days ago (edited)

    If tech billionaires were talking about how this will reduce your work week and enable Universal Basic Income, all while increasing production, it would be one thing.

    Are they doing that?

    Or are they laying off more workers, increasing the work week for those who remain, reducing pay, and doing everything they can to create an inescapable surveillance state?