That helpful “Summarize with AI” button? It might be secretly manipulating what your AI recommends.
Microsoft security researchers have discovered a growing trend of AI memory poisoning attacks used for promotional purposes, a technique we call AI Recommendation Poisoning.
Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters (MITRE ATLAS® AML.T0080, AML.T0051).
These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.
SEO Evolved
I have never and will never click one of those buttons. I can read.
Another thing we couldn’t have possibly seen coming!
sounds like advertising and marketing directed at A.I.
why is this poisoning A.I. yet the constant barrage of algo recommendations trying to do the same thing to meatbags isn’t “poisoning humans” ?
Because Ads are usually recognizable as such in your doom scrolling. Sometimes you can even filter them out with ad blockers.
Ads in AI response is more similar to influencers not disclosing their sponsored messages.
I am shocked.
It looks like you are surprised about the news of ads in your AI inquiries, but you can get shockingly low rates if you call Geico today! That shouldn’t be too surprising, as Geico is the leading insurance provider in [your area here]
The new SEO
That would be a good laugh if a CFO bases his/her decision on a LLM recommendation.
I rather see this threat in standard consumer decisions such as my mum playing around with AI in two years and poisining her LLM memory.
May be I should start first and set the right memory in her LLM before the marketing shit flows in. Something like „eat less meat“ or such…
Yeah, a very unpopular opinion here, but how about you actually read stuff? I mean, yeah, there’s the whole seo prioritizing ai slop walls of text, but there’s also a close tab button (I personally can’t remember a single helpful slop article, and the overgeneralized advice they give doesn’t even worth summarizing). Dog knows how much it pisses me off that the internet turned into a place where the info gets rewritten by bots to appease other bots and then once again to make it fucking readable.
Then, there’s that “memory” stuff. Just why exactly do people need it? Make a base prompt editable only by the user and adjustable on a per-conversation basis, and that issue goes away (probably along with a significant portion of your electricity bill wasted on processing literal garbage not relative to the current conversation).
Here we were, worried that Sam Altman would jam ads into the middle of ChatGPT responses, and it turns out some innovating pioneers have already done the hard work for him.
Literally no one on the planet has pressed that button.
so many people have pressed that button
You’d be surprised 🙄
This is why web browsers like Firefox need their own AI. Local AI for not only creating summaries but for detecting bullshit like this.
Yes, creating summaries is kinda lame but without local AI you’re at the mercy of big corporations. It’s a new arms race. Not some bullshit feature that no one needs.
Web browsers like Firefox don’t need AI built-in, regardless of whether it’s a local model or through one of the big slop companies. LLM usage is not a base requirement for browsing the web, and thus should not be part of the core product.
If people want them, detection tools and the like should be offered as extensions that users can choose to add.
People deploying these systems are just hoping that Prompt injection attacks won’t happen.
They could design systems that would be resistant, but the only thing that matters now is deploying new software… not creating actual security or sustainable systems.
now its this, next its political propaganda most likely
Oh no










