The currently hot LLM technology is genuinely interesting, and I believe it has legitimate use cases if we develop it into tools that assist people’s work. (For example, I’m very intrigued by what’s happening in the accessibility field.)
I mostly have a problem with the AI business. Ludicrous use cases (shoving AI into places where it has no business being). Sheer arrogance about the sociopolitics in general. Environmental impact. LLMs aren’t good enough for “real” work, but snake oil salesmen keep claiming they are, and uncritical people keep falling for it.
And of course, the social impact was just not something we were ready for. “Move fast and break things” may be a fine mantra for developing tech, but not for releasing things with vast social impact.
I believe the AI business and the tech hype cycle are ultimately harming the field. Previously, AI technologies were gradually developed and integrated into software where they served a purpose. Now the field is marred with controversy for decades to come.
The reason most web forum posters hate AI is that AI is ruining web forums by polluting them with inauthentic garbage. Don’t treat it like it’s some sort of irrational bandwagon.
For those who know
Wow, I’m sure this comment section is full of respectful and constructive discussion /s. Lemme go pop some popcorn.
Lots of AI is technologically interesting and has tons of potential, but this kind of chatbot and image/video generation stuff we’ve got now is just dumb.
I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work with potential, by throwing more and more hardware and money at LLMs and generative AI models, because they don’t understand the technology and see it as a way to get rich and powerful quickly.
AI is good and cheap now because businesses are funding it at a loss, so not sure what you mean here.
The problem is that it’s cheap, so that anyone can make whatever they want and most people make low quality slop, hence why it’s not “good” in your eyes.
Making a cheap or efficient AI doesn’t help the end user in any way.
It appears good and cheap, but it’s actually burning money, energy, and water like crazy. I think somebody mentioned that generating a 10-second video consumes about as much energy as riding a bike for 100 km.
It’s not sustainable. I think what the person above you is referring to is if we ever manage to make LLMs and such that can be run locally on a phone or laptop with good results. That would let people experiment and try things out themselves, instead of depending on a monthly payment for some service that can change at any time.
I mean, I have a 15 amp fuse in my apartment and a 10-second video takes like 10 minutes to make. I don’t know how much energy a 4090 draws, but anyone who has an issue with me using mine to generate a 10-second video better not play PC games.
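For what it’s worth, here’s a rough back-of-envelope sketch, assuming the 4090 pulls its full ~450 W rated board power for the whole 10 minutes (which is the worst case; real draw is usually lower):

```python
# Rough back-of-envelope: energy for one locally generated 10-second clip,
# assuming an RTX 4090 at its full ~450 W rated board power for all 10 minutes.
gpu_watts = 450
render_minutes = 10
energy_kwh = gpu_watts * (render_minutes / 60) / 1000
print(f"{energy_kwh:.3f} kWh")  # ~0.075 kWh, roughly 10-15 minutes of gaming on the same card
```

However the numbers work out at datacenter scale, a single local render lands in the same energy ballpark as a short gaming session.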
You and OP are misunderstanding what is meant by good and cheap.
It’s not cheap from a resource perspective, like you say. However, that’s irrelevant to the end user. It’s “cheap” already because it’s either free or costs the user considerably less than the resources actually consumed. OpenAI or Meta or Twitter are paying the cost. You don’t need to pay for a monthly subscription to use AI.
So the quality of the content created is not limited by cost.
If the AI bubble popped, that wouldn’t improve AI quality.
I’m using “good” in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint; continuing to throw money and data at it will only result in minimal improvement.
What I mean by “good AI” is the potential for new types of AI models to be trained for things like diagnosing cancer and other predictive tasks we haven’t thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).
The work of training new, useful AI models is going to be done by scientists and researchers, probably on limited budgets because there won’t be a clear profit motive, and they won’t be able to afford thousands of $20,000 GPUs like the ones being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, today’s used hardware will become more affordable and hopefully actually get used for useful AI projects.
Reminder that Lemmy skews older, and older people are usually more conservative about things. Sure, politically, Lemmy leans left, but technologically, Lemmy is very conservative.
Like, for example, you see people on Lemmy say they’ll switch to a dumbphone, but that’s probably even more insecure, when they could’ve just used LineageOS or something and been far more private.
Why does being progressive and into tech mean being into AI all of a sudden? It has never meant that; it’s the conservative mfs pushing AI for a reason. You think any sort of powerful AI is about to be open source and usable by the ppl? Not expensive af to run, with hella regulations behind who can use it?
I’m progressive and into tech, and I don’t like fking generative AI tho, it’s the worst part of tech to me. AI can be great in the medical field, it can be great as a supplementary tool, but mfs don’t use it that way. They just wanna sit on their asses and get rich off other ppl’s work.
You think any sort of powerful AI is about to be open source and usable by the ppl?
There’s a huge open source community running local models!
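It’s real: between llama.cpp, Ollama, and the quantized models on Hugging Face, you can run decent models entirely on your own hardware. A minimal sketch using the llama-cpp-python bindings (the “model.gguf” filename is just a placeholder for whatever GGUF model you’ve downloaded):

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# "model.gguf" is a placeholder for any quantized GGUF model you've downloaded.
# Everything runs on your own machine: no API key, no subscription.
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=2048)  # load the model with a 2k-token context window
out = llm("Q: Why do people run language models locally? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Smaller quantized models run fine on a consumer GPU or even a decent laptop CPU, which is exactly the “usable by the ppl” scenario being asked about.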
Not all AI is bad. But there’s enough widespread AI that’s helping cut jobs, spreading misinformation (or in some cases, actual propaganda), creating deepfakes, etc, that in many people’s eyes, it paints a bad picture of AI overall. I also don’t trust AI because it’s almost exclusively owned by far right billionaires.
Machines replacing people is not a bad thing if they can actually perform the same or better; the solution to unemployment would be Universal Basic Income.
For labor people don’t like doing, sure. I can’t imagine replacing a friend of mine with a conversation machine that performs the same or better, though.
Unfortunately, UBI is just one possible solution to unemployment. Another solution (and the one apparently preferred by the billionaire rulers of this planet) is letting the unemployed rot and die.
Lots of pro AI astroturfing and whataboutisms in these comments… 🤢
“Anyone I disagree with must be a bot or government agent”
“B-But you don’t understand, AI DESTROYS le epic self employed artists and their bottom line! Art is a sacred thing that we all do for fun and not for work, therefore AI that automates the process is EVIL!”
- Actual thought process of some people
AI does do this to a subsection of artists. Claiming that everyone is overreacting is just as stupid and lacks just as much nuance as claiming AI is going to ruin all self-employed artists.
Also, this ignores AI companies stealing blatantly copyrighted material to feed their AI. As an artist, I’d rather not have some randoms steal my stuff so some mid-tier corporation can generate their logos and advertisements without paying for it.
I’m not claiming that everyone is overreacting, just pointing out how stupid a lot of anti-AI arguments are. Artists drawing for a living get painted not as people doing a job but as people enjoying some sort of fun recreational activity, ignoring that artists have to take commissions or draw whatever’s popular with the fan base that pays their bills via Patreon. In other words, commodifying oneself, a.k.a. work.
Also, this ignores AI companies stealing blatantly copyrighted material to feed their AI.
Not saying that you’re necessarily one of those people, but this argument often pops up in leftist spaces that were previously anti-IP, which is a bit hypocritical. One moment people are against intellectual property, calling it abusable, restrictive, etc., but once small artists start getting attacked, suddenly the concept has to be defended.
As an artist
womp womp you’ll have to get a real job now /s
Um, have you tried practicing? Just draw a stick figure or hire an artist, this will easily solve all of your problems. You’re welcome.
The problem isn’t AI. The problem is Capitalism.
The problem is always Capitalism.
AI, Climate Change, rising fascism, all our problems are because of capitalism.
Rather, our problem is that we live in a world where the strongest survive, and the strongest does not mean the smartest… So, alas, we will always be in complete shit until we disappear.
That’s a pathetic, defeatist world view. Yeah, we’re victims of our circumstances, but we can make the world a better place than what we were raised in.
You can try, and you should try. But some handful of generations ago, some assholes were in the right place at the right time and struck it rich. The ones that figured out generational wealth ended up with a disproportionate amount of power. The formula for using money to make more money was handed down, coddled, and protected to keep the rich and powerful in power. Even 100 Luigis wouldn’t make the tiniest dent in the oligarch pyramid, as others would just swoop in and consume their part.
Any lifelong pursuit you make to leave the world a better place than you were raised in will be wiped out with a scribble of black Sharpie on Ministry of Truth letterhead.
Well, you can believe there’s a chance, but there isn’t one. A chance can only be created with sweat and blood. There are no easy ways, you know; sometimes there are none at all, and sometimes even creating one seems like a miracle.
Wrong.
The problem is humans. The same things that happen under capitalism can (and would) happen under any other system, because humans are the ones who make these things happen or allow them to happen.

Problems would exist in any system, but not the same problems. Each system has its own set of problems and challenges. Just look at history: problems change. Of course you can find analogies between problems, but their nature changes with our systems. Hunger, child mortality, pollution, having no free time, war, censorship, mass surveillance… these are not constant through history. They happen more or less depending on the social systems in place, which vary constantly.
While you aren’t wrong about human nature, I’d say you’re wrong about systems. How would the same thing happen under an anarchist system? Or under an actual communist (not Marxist-Leninist) system? Those systems account for human nature and are built to turn it against itself.
I’ll answer: because some people see these AI systems as “good” regardless of political affiliation, want them furthered, and see any cost as worth it. If an anarchist or communist sees these systems in a positive light, then they will absolutely try to use them at scale. These people absolutely exist, and you could find many examples of them on Lemmy. Try DB0.
And the point of anarchist or actual communist systems is that such scale would be minuscule. Not massive national or unanswerable state scales.
And yes, I’m an anarchist. I know DB0 and their instance and generally agree with their stance - because it would allow any one of us to effectively advocate against it if we desired to.
There would be no tech broligarchy forcing things on anyone. They’d likely all be hanged long ago. And no one would miss them as they provide nothing of real value anyway.
And the point of anarchist or actual communist systems is that such scale would be minuscule.
Every community running their own AI would be even more wasteful than corporate centralization. It doesn’t matter what the system is if people want it.
The point is, most wouldn’t. It’s of little real use currently, especially the LLM bullshit. The communities would have infinitely better things to put resources toward.
DB0 has a rather famous record of banning users who do not agree with AI. See [email protected] or others for many threads complaining about it.
You have no way of knowing what the scale would be, as it’s all a thought experiment, however, so let’s play at that. If you see AI as a nearly universal good and want to encourage people to use it, why not incorporate it into things? Why not foist it into the state OS or whatever?
Buuuuut… keep in mind that in previous Communist regimes (even if you disagree that they were “real” Communists), what the state says goes. If the state is actively pro-AI, then by default, you are using it. Are you too good to use what your brothers and sisters have said is good and will definitely 100% save labour? Are you wasteful, Comrade? Why do you hate your country?
Yes, I have seen posts on it. Suffice to say, despite being an anarchist, I don’t have an account there, for reasons, and don’t agree with everything they do.
The situation with those bans I might consider heavy-handed and perhaps overreaching. But by the same token, it’s a bit of a reflection of some of those who were banned: overzealous, lacking nuance, etc.
The funny thing is, they pretty much dislike the tech bros as much as anyone here does. You generally won’t ever find them defending their actions. They want AI they can run from their home, not something snarfing up massive public resources, massively contributing to climate change, or stealing anyone’s livelihood. Hell, many of them want to run off the grid from wind and solar. But, as always happens with the left, we can agree with each other 90% of the way, yet will never tolerate or understand each other because of the 10%.
PS
We do know the scale. Your use of “the state” with reference to anarchism implies you’re unfamiliar with it. Anarchism and communism are against “the state” for the same reasons you’re wary of it: it’s too powerful and unanswerable.
It will happen regardless, because we are not machines; we don’t follow theory, laws, instructions, or whatever a system tells us to do perfectly and without little changes here and there.
I think you are underestimating how adaptable humans are. We absolutely conform to the systems that govern us, and they are NOT equally likely to produce bad outcomes.
Every system eventually ends with someone corrupted by power and greed wanting more. Putin and his oligarchs, Trump and his oligarchs… Xi isn’t great, but at least I haven’t heard news about the Uyghur situation for a couple of years now. Hope things are better there nowadays and people aren’t going missing anymore just for speaking out against their government.
I mean you’d have to be pretty smart to make the perfect system. Things failing isn’t proof that things can’t be better.
I see, so you don’t understand. Or simply refuse to engage with what was asked.
Can, would… and did. The list of environmental disasters in the Soviet Union is long and intense.
AI is bad and people who use it should feel bad.
So cancer cell detection is now bad and those doing it should feel bad?
The world isn’t black and white.
Me to burn victims: “You know, without fire, we couldn’t grill meat. Right? You should think more about what you say.”
Don’t be obtuse, you walnut. I’m obviously not equating medical technology with 12-fingered anime girls and plagiarism.
When people say this they are usually talking about a very specific sort of generative LLM using unsupervised learning.
AI is a very broad field with great potential, the improvements in cancer screening alone could save millions of lives over the coming decades. At the core it’s just math, and the equations have been in use for almost as long as we’ve had computers. It’s no more good or bad than calculus or trigonometry.
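For a concrete sense of what “just math” means here: a classifier of the sort used in screening tasks can be as simple as logistic regression, an equation that predates modern computing, fitted with plain gradient descent. A toy sketch (the data is random and purely illustrative, not a real screening model):

```python
# Toy logistic-regression classifier trained with plain gradient descent.
# The math (a sigmoid plus a gradient step) long predates the current AI wave;
# the data here is random and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # 200 fake "measurements", 3 features each
y = (X @ np.array([1.5, -2.0, 0.5]) > 0) * 1.0  # fake labels from a hidden linear rule

w = np.zeros(3)
for _ in range(1000):
    p = 1 / (1 + np.exp(-(X @ w)))              # sigmoid: predicted probability of class 1
    w -= 0.1 * X.T @ (p - y) / len(y)           # gradient step on the log-loss

print("learned weights:", w)
```

The point being: the building blocks are neutral, old, and well understood; what people object to is how they’re being deployed.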
No hope commenting like this; just get ready to be downvoted for no reason. People use the wrong terms and normalize it.
It’s funny watching you AI bros climb over each other to be the first with a whataboutism.
See, I get why people here hate what they call “AI”, I totally get it, but I can’t stand people using the wrong terms when I know the correct ones. The big corpos already misuse the term, calling everything they make “AI” without specifying what kind of AI it is, and people here, who I assume are techies, went down the same wrong path (so you guys sound the same as those evils, and you fell for the marketing). It’s not about whataboutism; it’s about fixing the wrong terms people keep normalizing when talking about technical stuff. I don’t care if you still don’t get it tho, I do what I can to say the truth. And I don’t think you even know what “whataboutism” really is.
So is eating meat, flying, gaming, going on holiday. Basically, if you exist you should feel bad.
How does one feel bad?
You mean a subset of LLMs that are trained on bad human behaviours.
Would love an explanation of how I’m in the wrong for reducing my work week from 40 hours to 15 using AI.
Existing in a predatory capitalist system and putting the blame on those who utilize available tools to reduce the predatory nature of that system is insane.
Because when your employer catches on, they’ll bring you back up to 40 anyway.
And probably because those 15 hours now produce shit quality.
It’s true. We can have a nuanced view. I’m just so fucking sick of the paid-off media hyping this shit, and normies thinking it’s the best thing ever when they know NOTHING about it. And the absolute blind trust and corpo worship make me physically ill.
Nuance is the thing.
Thinking AI is the devil, will kill your grandma and shit in your shoes is just as dumb as thinking AI is the solution to every problem, will take over the world and become our overlord.
The truth is, like always, somewhere in between.
Why is the other arrow also pointing up?
Because I used AI slop to create this shitpost lol. So naturally it would make mistakes.
There are other mistakes in the image too
You used AI to make a stickfigure comic? Damn.
I mostly used it for irony (this is a shitpost, after all) and to make the orange arrow blue. But it messed some other things up along the way. Happy accidents.
You know, I was really hoping people would just use the existing tools rather than AI. You used AI instead of the fucking paint bucket tool in ANY photo/drawing tool. Unbelievable
True. Now shut up and take my upvote! No need for arguments; all has already been said.