- cross-posted to:
- fuck_ai@lemmy.world
As much as I agree with this poll, DuckDuckGo is a very self-selecting audience. The number doesn’t actually mean much statistically.
If the general public knew that “AI” is much closer to predictive text than to intelligence, they might be more wary of it
There was no implication that this was a general poll designed to demonstrate the general public’s attitudes. I’m not sure why you mentioned this.
Because that’s how most people implicitly frame headlines like this one: a generalization of the public
I mean, you gotta hand it to “AI”: it is very sophisticated and resource-intensive predictive text.
The poll didn’t even ask a real question. “Yes AI or no AI?” No context.
I would like to petition to rename AI to
Simulated
Human
Intelligence
Technology
or: computer-rendered anonymized plagiarism
It’s funny but it still lies about intelligence
‘Intelligence’ is a measure, not an absolute. Human intelligence can range anywhere from genius to troglodyte. But you’re right, still not human, still at very best simulated, and isn’t capable of reason, just the illusion of reason.
On public transit, it bottoms out.
I would like to petition to rename AI to
Fucking stupid and useless
FSAU? That’s not a word.
Tell me you don’t know shit about LLMs without telling me so:
Okay, so that’s not what the article says. It says that 90% of respondents don’t want AI search.
Moreover, the article goes into detail about how DuckDuckGo is still going to implement AI anyway.
Seriously, titles in subs like this need better moderation.
The title was clearly engineered to generate clicks and drive engagement. That is not how journalism should function.
Unless I’m mistaken this title is generated to match the title at the link. Are you saying the mods should update titles to accurately reflect the content of the articles posted?
Also, DuckDuckGo is a search engine. What other AI would it do?
It has a separate LLM chat interface, and you can disable the AI summary that comes up on web search results.
That is the title from the news article. It might not be how good journalism would work, but copying the title of the source is pretty standard in most news aggregator communities.
Well, that’s how journalism has always functioned. People call it “clickbait” as if it’s something new, but headlines have always been designed to grab your attention and get you to read.
As a DuckDuckGo user who uses Claude and ChatGPT every day, I don’t want AI features in DuckDuckGo because I probably would never use them. So many companies are adding chatbot features, and most of them can’t compete with the big names. Why would I use a bunch of worse LLMs and learn a bunch of new interfaces when I can just use the ones I’m already comfortable with?
I guess they haven’t asked me or it’d be 91%
You’ve been learning statistics from an LLM, haven’t you?
I’m a proud graduate from the Terrence Howard school of math.
Had to look him up. Gold.
1 * 1 = 2, because reasons
This guy knows the SHIT out of statistics!
I have found a couple of times that an LLM is good for searches. My example is scooters in the US: because of my commute I’d like to find an electric scooter, the kind like a Vespa. But do a regular search and you find all sorts of results about standing scooters; look for electric motorcycles and they start getting to car-level prices or aren’t street legal. It gets to be a mess. ChatGPT turned that into a relatively quick search, and I found a local place that I’m going to check out when there’s no ice on the ground. The important part was making it require links. As a tool, it can have its uses.
So all of that said… Google and other search engines doing AI did fuck all to help with this, and I agree 100%: I do not want AI in the search on DDG.
The other 10% are bots
I think most people find something like ChatGPT and Copilot useful in their day-to-day lives. LLMs are a very helpful and powerful technology. However, most people are against these models collecting every piece of data imaginable from you. People aren’t against the tech; they’re against the people running the tech.
I don’t think most people would mind if a FOSS LLM, designed with privacy and complete user control over their data, was integrated with an option to completely opt out. I think that’s the only way to get people to trust this tech again and get on board.
I would have no problem with AI if it could be useful.
The problem is that, no matter how many times I’m promised otherwise, it cannot automate my job and talk to the idiots for me. It just hallucinates random gibberish, which is obviously unhelpful.
It’s really good at answering customer questions for me, to be honest.
But, I still have to okay it. Just in case. There’s no trust.
However, that still takes a lot less bandwidth for me, because I’m not good at the customer-facing aspects of my business.
I still would, as the increased productivity, once again, does not lead to reduced hours. Always more productive, always locked into a bullshit schedule.
Prompt or model issue my dude
Or you’re one of the few who have a pretty niche job
Just things like different words or vocabulary, or helping with some code related knowledge, Linux issues… or even random general knowledge that you happen not to know
I think LLMs are also just genuinely not as universally useful as expected. Everyone thinks it can automate every job except their own not because everyone thinks their job is special but because they know all the intricate parts of that job that LLMs are still really bad at.
For instance, AI could totally do my job at a surface level, but it quickly devolves into deal-breaking caveats, which I am lumping into very broad categories to save time:
- Output would look good (great even) but not actually be useable in most applications
- Output cannot even begin to be optimized in the same way humans can optimize it
- It would take more effort to fix these than to just do the whole thing on my own to begin with
Case in point for me. AI cannot write documentation for new technical procedures because by definition they are new procedures. The information is not in its knowledge base, because I haven’t written it yet.
helping with some code related knowledge, Linux issues
I’m sorry but why the absolute flaming fuck does everyone assume that programming is the only job in the universe?
There are entire industries that don’t revolve around Linux. It’s amazing, but not everyone in the world has to care about programming. Everyone needs to stop telling me that AI is good at programming. Firstly because it isn’t, it’s god damn awful. Secondly because I’m not a programmer, so I don’t care.
I’m sorry but why the absolute flaming fuck does everyone assume that programming is the only job in the universe?
I’m just giving examples I relate to and describing my use of it
Firstly because it isn’t, it’s god damn awful. Secondly because I’m not a programmer, so I don’t care.
That explains why you think it’s awful at it then. You just believe the haters because you have confirmation bias. Fact is companies and people wouldn’t use it to code if it wasn’t good at that
It’s perfect for summaries, known general knowledge, correcting text, roleplay, repetitive tasks and some logic problems…
You just believe the haters because you have confirmation bias.
Well if by that you mean I’ve used it and it’s crap, then yeah, I’ve confirmed it.
I’ve found it useful for a few things. I had had a song intermittently stuck in my head for a few years and had unsuccessfully googled it a few times. I couldn’t remember the artist, the name, or the lyrics (it was in a language I don’t speak), and ChatGPT got it in a couple of tries. Also things that I’m too vague about to construct a search prompt for and just want to explore. Stuff like that. I just don’t trust it with anything that I want actual facts for.
Yep, preparing a proper search is my use case. Like “what is this special screw called?”. I can describe the screw and tell the model to give me a list of names that screw might go by.
Then I can search for the terms on that list, and one of them is the correct one. It’s way better than hoping that somebody described the screw in the same words in some obscure forum.
But is it worth it to burn the planet, make RAM, GPUs, and hard drives unaffordable for everybody, and probably crash the world economy for a better screw search? I doubt it.
The thing is, Google was already pretty good at that before things like ChatGPT came out.
I remember many years ago I searched for “that movie where they steal a painting by changing the temperature” and Google got it on the first hit. That was way before AI.
AI is not impressive or worth all the trade offs and worse quality of life. It is decent in some areas but mostly grifter tech.
It IS an impressive tech
An impressive grift
It really isn’t. Gen AI is trash. AI art and music are slop. The only place it is decent is with data and some coding basics, and vibe coding leads to security issues.
And roleplay, and correcting text, and finding ideas, and explaining school stuff, and finding potential bugs…
Strongly disagree with you
Most objective article (sarcasm)
In fact it has a whole-ass “AI” chatbot product, Duck.ai, which is bundled in with DuckDuckGo’s privacy VPN for $10 a month
Oh, I meet you again. I haven’t logged in for a long time 😅
Hi :)
I’ve been much more inactive than before as well
I lost my SimpleX profile, haha. But to be honest I don’t like it too much. IRC, XMPP, and DeltaChat are better options for me. They’re more popular and also good for privacy, though of course SimpleX is more focused on privacy :)
Sure, use what you want :)
And at least they’re not owned by a transphobic moron, but that’s how things are
I don’t use it much either, but mainly cuz there’s no one to speak to
Configure your bouncer and hop on IRC! We can meet on libera.chat. I’m in the same boat; I also don’t have anyone to talk to. But on IRC you have a lot of channels where you can join and talk with others.
What’s wrong about this?
Nothing, if it’s disclosed and you’re okay with it. Otherwise, opinions and partiality get in the way of truth and facts.
Even that I would consider wildly unjust. User data would HAVE to be opt IN.
Don’t build AI into everything and assume you know how your users want to use it. If they do want to use AI, give them an MCP server to interact with your service instead and let them build out their own tooling.
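Something like this is what I have in mind; just a rough sketch, assuming the official MCP Python SDK and its FastMCP helper. The server name, the web_search tool, and the stubbed results are hypothetical, not anything DuckDuckGo actually ships:

```python
# Rough sketch: expose search as an MCP tool instead of bolting a chatbot
# onto the UI. Uses FastMCP from the official MCP Python SDK ("pip install mcp");
# the server name, the web_search tool, and its stubbed results are made up
# for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ddg-search")  # hypothetical server name


@mcp.tool()
def web_search(query: str, max_results: int = 5) -> list[dict]:
    """Return plain title/URL results for a query, with no AI summary attached."""
    # A real server would call the search backend here; this stub only shows
    # the shape of the data a user's own tooling would get back.
    results = [{"title": f"Result for {query!r}", "url": "https://example.com"}]
    return results[:max_results]


if __name__ == "__main__":
    mcp.run()  # defaults to stdio, so whatever MCP client the user prefers can attach
```

Then anyone can point the client they already use at it and build their own tooling, instead of getting a chatbot wedged into the search page.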
On DuckDuckGo that is unsurprising
I see the shit that people send out, obvious LLM crap, and wonder how poorly they must write to consider the LLM output worth something. And then I wonder if the people consuming this LLM crap are OK with baseline mediocrity at best. And that’s not even getting into the ethical issues of using it.