• lad@programming.dev · 14 points · 6 days ago

    And those patterns, mind you, often include lying and deception. So while I agree that LLMs can’t exhibit anything consciously, I also know that they can provide false information. Calling it a lie is a stretch, though, and looks like something one would do to shift the blame for their own fault onto the LLM.

    • anomnom@sh.itjust.works · 3 points · 5 days ago

      I don’t think calling it a lie (vs. a hallucination or an error) is necessary to assign blame. If they were instructed to use AI to deploy, then that’s on management. Not having backups is on everyone, but I suspect they were backed up.

      Saying “the AI agent broke it” would have been just fine, but it isn’t clickbait the way saying it lied is. Far fewer of us would have seen this without it.