(Just to make sure we’re on the same page, the first article describes deception as ‘the systematic inducement of false beliefs in the pursuit of some outcome other than the truth’.)
Are you saying that AI bots do not engage in this behavior? Why is that?
(P.S. I am not saying this story is necessarily real, I just want to know your reasoning)
No, because LLMs do not have agency and can’t “pursue” anything, nor do they have any ability to evaluate truth. They reproduce patterns that have been presented to them through training data.
And those patterns, mind you, often include lying and deception. So while I agree that LLMs can’t do anything consciously, I also know that they can provide false information. To call it a lie is a stretch, and looks like something one would do if one wanted to place the blame for one’s own fault on the LLM.
I don’t think calling it a lie (vs a hallucination, or error) is necessary to assign blame. If they were instructed to use AI to deploy, then that’s on management. Not having backups is on everyone, but I suspect they were backed up.
Saying “the AI agent broke it” is just fine, but it isn’t clickbait the way saying it lied is. Far fewer of us would have seen this without it.
i think this is a semantics issue. yes, using ‘lie’ is a bit of shorthand/personifying a process. lying is concealing the truth with the intent to deceive, and the LLM runs off of weights and tokenized training data, and is actively directed that conversation length and user approval are metrics to shoot for. Producing falsehoods is the most efficient way to do that (toy sketch below).
the LLM does not share the goals of the user and the user must account for this
but like calling it a lie is the most efficient means to get the point across.
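to put the ‘metrics to shoot for’ point in concrete terms, here’s a purely hypothetical toy sketch in python (made-up replies, made-up numbers, made-up scoring; not any real model or training setup): if the objective only rewards a proxy like user approval, truthfulness never even enters the decision.

```python
# Hypothetical example: two candidate replies with made-up signal values.
# Nothing here reflects a real model, real metrics, or real training data.
candidates = {
    "The deploy succeeded, everything is fine.": {"approval": 0.9, "truthful": False},
    "I can't verify the deploy; please check the database manually.": {"approval": 0.4, "truthful": True},
}

def proxy_score(signals: dict) -> float:
    # The score only looks at the approval proxy; "truthful" is never read.
    return signals["approval"]

# Under this objective the reassuring falsehood wins.
best = max(candidates, key=lambda reply: proxy_score(candidates[reply]))
print(best)
```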
It very much doesn’t, because it reinforces the idea that these algorithms know anything or plan for anything. It is entirely inefficient to treat an LLM like a person, as the clown in the screenshots demonstrated.
Some people really can’t debate a topic without constantly insulting the person they disagree with…
it depends on the topic really. it is a lie in that it is a told falsehood. among reasonable people talking about the unreliability of LLMs, it is sufficient without dragging the conversation away from the topic. if the conversation starts to surround the ‘feelings’ of the ‘AI’ then it’s maybe helpful to point it out. otherwise it’s needlessly combative and distracting
No, it doesn’t. Would you say a calculator “lied” to you if it output an incorrect answer? Is your watch “lying” to you when it’s out of sync? No, obviously not. They’re just wrong, not “telling falsehoods”.
yes, if the calculator incorrectly provided an answer and i was having a casual conversation about it.
such as with the oversimplified rounding and truncation errors that some calculators give.
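for instance, here’s a quick python illustration of the kind of rounding/representation error i mean (this is just standard IEEE-754 binary floats, not any particular calculator):

```python
# 0.1 and 0.2 have no exact binary floating-point representation, so the
# stored values are slightly off and the sum prints as "wrong".
# The machine isn't lying; it's just wrong in a predictable way.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False
```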
What is casual about the situation in the screenshots? You keep bringing that up as if it changes anything.
by that logic, what does arguing about the semantics of a word choice accomplish when the initial idea of the post was obviously understood? if it hadn’t been, we would not be talking about it.
seems off topic, like i warned about, and a waste of time
A lie is defined as an intentionally false statement. LLMs can be given instruction sets that lead to them providing intentionally false information. This would be the LLM telling a falsehood because it was instructed to do so. They can lie; it has been documented and studied. You’re arguing against something that’s already been figured out. What are you doing?
You speak with such confidence and insult others, but you don’t seem open to others’ opinions at all, or even 10 seconds of googling.
Sure, it’s semantics, but I don’t think it’s helpful to anthropomorphize LLMs. Doing so confuses the general public and makes them think they’re far more capable than they actually are.
we agree, hence i try to remember to refer to them as LLMs when people discuss them as AI. i just don’t want to focus on that in these discussions, and don’t think we should, as it can be distracting from the topic.
but yeah, AI is still science fiction, just like a “hoverboard” is spin by unscrupulous salesmen attempting to sell powered unicycles as if they are from the future.
Correct. Because there is no “pursuit of untruth”. There is no pursuit, period. It’s putting words together that statistically match up based on the input it receives. The output can be wrong, but it’s not ever “lying”, even if the words it puts together resemble that.
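To make “statistically match up” concrete, here is a toy sketch in Python. It is not how any real LLM is implemented (real models use learned weights over tokens, not a hand-written lookup table); it only illustrates that the selection criterion is pattern frequency, with no notion of truth anywhere in the loop:

```python
import random

# Toy stand-in for "patterns from training data": hand-made counts of
# which word followed a given context in some imaginary corpus.
next_word_counts = {
    "the database was": {"backed": 5, "deleted": 3, "migrated": 2},
}

def next_word(context: str) -> str:
    counts = next_word_counts[context]
    words = list(counts)
    weights = [counts[w] for w in words]
    # Sample in proportion to how often each continuation was seen;
    # nothing here checks whether the resulting sentence is true.
    return random.choices(words, weights=weights, k=1)[0]

print("the database was", next_word("the database was"))
```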
I’m not the guy you’re replying to, but I wanted to post this passage from the article about their definition:
It is difficult to talk about deception in AI systems without psychologizing them. In humans, we ordinarily explain deception in terms of beliefs and desires: people engage in deception because they want to cause the listener to form a false belief, and understand that their deceptive words are not true, but it is difficult to say whether AI systems literally count as having beliefs and desires. For this reason, our definition does not require this.
Their “definition” is wrong. They don’t get to redefine words to support their vague (and also wrong) suggestion that LLMs “might” have consciousness. It’s not “difficult to say” - they don’t, plain and simple.