• f314@lemmy.world
    15 days ago

    I’m not the guy you’re replying to, but I wanted to post this passage from the article about their definition:

    It is difficult to talk about deception in AI systems without psychologizing them. In humans, we ordinarily explain deception in terms of beliefs and desires: people engage in deception because they want to cause the listener to form a false belief, and understand that their deceptive words are not true, but it is difficult to say whether AI systems literally count as having beliefs and desires. For this reason, our definition does not require this.

    • Ech@lemmy.ca
      15 days ago

      Their “definition” is wrong. They don’t get to redefine words to support their vague (and also wrong) suggestion that LLMs “might” have consciousness. It’s not “difficult to say” — they don’t, plain and simple.