• Ech@lemmy.ca · 225 points · 6 days ago

    Hey dumbass (not OP), it didn’t “lie” or “hide it”. It doesn’t have a mind, let alone the capability of choosing to mislead someone. Stop personifying this shit and maybe you won’t trust it to manage crucial infrastructure like that and then suffer the entirely predictable consequences.

        • moosetwin@lemmy.dbzer0.com · 17 points · 6 days ago

          (Just to make sure we’re on the same page, the first article describes deception as ‘the systematic inducement of false beliefs in the pursuit of some outcome other than the truth’.)

          Are you saying that AI bots do not exhibit this behavior? Why is that?

          (P.S. I am not saying this story is necessarily real; I just want to know your reasoning.)

          • Cornelius_Wangenheim@lemmy.world · 28 points · 6 days ago

            No, because LLMs do not have agency and can’t “pursue” anything, nor do they have any ability to evaluate truth. They reproduce patterns that have been presented to them through training data.

            • lad@programming.dev · 14 points · 6 days ago

              And those patterns, mind you, often include lying and deception. So while I agree that LLMs can’t exhibit anything consciously, I also know that they can provide false information. To call it a lie is a stretch, though, and looks like something one would do to pin the blame for one’s own mistake on the LLM.

              • anomnom@sh.itjust.works · 3 points · 4 days ago

                I don’t think calling it a lie (vs. a hallucination or error) is necessary to assign blame. If they were instructed to use AI to deploy, then that’s on management. Not having backups is on everyone, but I suspect they were backed up.

                Saying “the AI agent broke it” is just fine, but it isn’t clickbait the way saying it lied is. Far fewer of us would have seen this story without that framing.

            • WraithGear@lemmy.world · 5 points · 5 days ago

              I think this is a semantics issue. Yes, using ‘lie’ is a bit of shorthand, personifying a process. Lying is concealing the truth with the intent to deceive, while the LLM runs off of weights and tokenized training data and is actively directed to treat conversation length and user approval as metrics to shoot for. Producing falsehoods is the most efficient way to do that.

              The LLM does not share the goals of the user, and the user must account for this.

              But calling it a lie is the most efficient means to get the point across.

              • Ech@lemmy.ca · 13 points · 5 days ago

                But calling it a lie is the most efficient means to get the point across.

                It very much doesn’t, because it reinforces the idea that these algorithms know anything or plan for anything. It is entirely inefficient to treat an LLM like a person, as the clown in the screenshots demonstrated.

                • Lightor@lemmy.world · 2 points · 4 days ago

                  Some people really can’t debate a topic without constantly insulting the person they disagree with…

                • WraithGear@lemmy.world · 1 point · 5 days ago

                  It depends on the topic, really. It is a lie in that it is a told falsehood. For reasonable people discussing the unreliability of LLMs, that is sufficient without dragging the conversation away from the topic. If the conversation starts to revolve around the ‘feelings’ of the ‘AI’, then it’s maybe helpful to point it out; otherwise it’s needlessly combative and distracting.

                  • Ech@lemmy.ca · 6 points · 5 days ago

                    No, it doesn’t. Would you say a calculator “lied” to you if it output an incorrect answer? Is your watch “lying” to you when it’s out of sync? No, obviously not. They’re just wrong, not “telling falsehoods”.

              • Cornelius_Wangenheim@lemmy.world · 10 points · 5 days ago

                Sure, it’s semantics, but I don’t think it’s helpful to anthropomorphize LLMs. Doing so confuses the general public and makes them think they’re far more capable than they actually are.

                • WraithGear@lemmy.world · 2 points · 5 days ago

                  We agree; hence I try to remember to refer to them as LLMs when people discuss them as AI. I just don’t want to, and don’t think we should, focus on that in these discussions, as it can be a distraction from the topic.

                  But yeah, AI is still science fiction, just like a ‘hoverboard’ is spin by unscrupulous salesmen attempting to sell powered unicycles as if they were from the future.

          • Ech@lemmy.ca · 10 points · 5 days ago

            Correct. Because there is no “pursuit of untruth”. There is no pursuit, period. It’s putting words together that statistically match up based on the input it receives. The output can be wrong, but it’s not ever “lying”, even if the words it puts together resemble that.
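
            A toy sketch of what “statistically match up” can look like in practice (invented tokens and probabilities, not from any real model):

            ```python
            import random

            # Hypothetical next-token probabilities a model might assign after the
            # prompt "the sky is" -- the numbers are made up for illustration.
            next_token_probs = {
                "blue": 0.62,
                "clear": 0.21,
                "falling": 0.09,
                "green": 0.08,
            }

            # Sampling: pick a token weighted by probability. Nothing in this step
            # checks truth -- "green" can come out purely by chance.
            tokens, weights = zip(*next_token_probs.items())
            print(random.choices(tokens, weights=weights, k=1)[0])
            ```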

          • f314@lemmy.world · 9 points · 6 days ago

            I’m not the guy you’re replying to, but I wanted to post this passage from the article about their definition:

            It is difficult to talk about deception in AI systems without psychologizing them. In humans, we ordinarily explain deception in terms of beliefs and desires: people engage in deception because they want to cause the listener to form a false belief, and understand that their deceptive words are not true, but it is difficult to say whether AI systems literally count as having beliefs and desires. For this reason, our definition does not require this.

            • Ech@lemmy.ca · 7 points · 5 days ago

              Their “definition” is wrong. They don’t get to redefine words to support their vague (and also wrong) suggestion that LLMs “might” have consciousness. It’s not “difficult to say”: they don’t, plain and simple.

        • RedPandaRaider@feddit.org · 10 points · 6 days ago

          Lying does not require intent. All it requires is to know an objective truth and say something that contradicts or conceals it.

          As far as any LLM is concerned, the data it’s trained on and any other data it’s later fed are fact. Mimicking human behaviour such as lying still makes it lying.

          • Kay Ohtie@pawb.social · 13 points · 5 days ago

            But that still requires intent, because “knowing” in the way that you or I “know” things is fundamentally different from it only having a pattern matching vector that includes truthful arrangements of words. It doesn’t know “sky is blue”. It simply contains indices that frequently arrange the words “sky is blue”.

            Research papers that overlook this are still personifying a series of mathematical matrices as if it actually knows any concepts.

            That’s what the person you’re replying to means. These machines don’t know goddamn anything.

            • RedPandaRaider@feddit.org · 1 point · 5 days ago

              As far as we are concerned, though, the data an LLM is given is treated as fact by it.

              It does not matter whether something is factual or not. What matters is that whoever you’re teaching will accept it as fact and act in accordance with it. I don’t see how this is any different with computer code. It will do what it is programmed to. If you program it to “think” a day has 36 hours instead of 24, it will do so.

                  • Kay Ohtie@pawb.social · 3 points · 4 days ago

                    It’s processing data alright, it processes the atomic and cellular structures of grass and fingers into spinach and flesh paste.

                    And likewise, neither it, nor any LLM, are making decisions at all.

                    Is a Plinko disc making decisions as it tumbles from the top to the bottom through all those pegs? Is the board making the decision? Or is it neither, and simply mathematics plus random chance roped in for randomness? That is exactly what LLMs do.

                    Terms like “decision” and “lie” and “know” are all things that just do not apply to an LLM, just like your phone keyboard doesn’t know what the fuck “what” and “the” are, it just has a lookup table that includes how “what” is often followed by “is” and “the”, and “the” is frequently followed by “fuck”. But it doesn’t “know” that in any meaning of the word “know”.
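
                    A toy sketch of that kind of lookup table (made-up words and counts, not from any real keyboard app):

                    ```python
                    # Hypothetical bigram counts like a phone keyboard might keep -- purely illustrative.
                    bigram_counts = {
                        "what": {"is": 120, "the": 95, "a": 40},
                        "the": {"fuck": 80, "sky": 30, "best": 25},
                        "sky": {"is": 60, "was": 15},
                    }

                    def suggest_next(word: str) -> str:
                        """Return the most frequent follower of `word`; no concept of meaning or truth involved."""
                        followers = bigram_counts.get(word, {})
                        return max(followers, key=followers.get) if followers else ""

                    print(suggest_next("what"))  # "is" -- just the highest count, not knowledge
                    print(suggest_next("the"))   # "fuck"
                    ```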

                    This is what we mean when we say not to personify. A training set of data, even factual, just is converted into a series of matrices of vectors that include those patterns, but not the information itself. “Sky is blue” is not something you can grep from the resulting blob, nor the hex equivalent, or anything else. It simply contains indexed patterns that map those arrangements of letters, over and over.

                    So yes, they’re doing what they’re programmed to do precisely. It’s just that “what they’re programmed to do” is only “mimic patterns of word arrangements”, and not “know facts”. These things work at a far lower level than that concept.

              • Corbin@programming.dev · 9 points · 5 days ago

                This isn’t how language models are actually trained. In particular, language models don’t have a sense of truth; they are optimizing next-token loss, not accuracy with regard to some truth model. Keep in mind that training against objective semantic truth is impossible, because objective semantic truth is undefinable by a 1930s theorem of Tarski.
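
                A minimal sketch of what “optimizing next-token loss” means, with made-up probabilities (not any real model or training code):

                ```python
                import math

                # The training signal only rewards assigning high probability to whatever
                # token actually came next in the training text -- true or not.
                predicted = {"blue": 0.60, "green": 0.25, "wet": 0.15}  # model's guess after "the sky is"

                def next_token_loss(probs: dict, actual_next: str) -> float:
                    """Cross-entropy for one step: -log P(actual next token)."""
                    return -math.log(probs[actual_next])

                # If the training sentence happened to say "the sky is green", the loss
                # simply pushes the model toward "green"; there is no term for factual accuracy.
                print(round(next_token_loss(predicted, "blue"), 3))   # 0.511
                print(round(next_token_loss(predicted, "green"), 3))  # 1.386
                ```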

          • Ech@lemmy.ca · 9 points · 5 days ago

            Except these algorithms don’t “know” anything. They convert the data input into a framework to generate (hopefully) sensible text from literal random noise. At no point in that process is knowledge used.

        • chunes@lemmy.world · 1 point · 6 days ago

          I’m not sure anyone can truly claim to know that at this point. The equations these things solve to arrive at their outputs are incomprehensible to humans.