My original prompt was: “Please help me write a document on the difference between AI slop versus the real world and actual facts”

Take it for whatever it is, but even Google’s own AI literally says at the end to basically not trust it over your own critical thinking skills and research and such.

The document can also be found via the short link I made for it. I’m gonna leave this document online and otherwise unedited besides my addendum at the end of it.

https://docs.google.com/document/d/1o6PNCcHC1G9tVGwX6PlyFXFhZ64mDCFLV6wUyvYAz8E

https://tinyurl.com/googleaislopdoc

Edit: Apparently I can’t open the original link on my tablet, as it isn’t signed into Google, but the short link works and opens it up in the web browser (I’m using Fennec if that makes any difference for anyone).

Fuck AI, and fuck Google. I shouldn’t have to sign in to read a shared document…

    • over_clox@lemmy.worldOP · 4 days ago

      Chill homie, I had no idea what to expect any which way.

      I don’t like AI and don’t use AI, but with that thought in mind, I figured I’d give it a spin with a question about its own technology. This is what it spit out on this date (okay, yesterday, whatever).

      I’m actually glad it spit out enough to basically say don’t trust AI and do your own research.

      I haven’t even used any Google services of any form in over 4 months; I just figured this might be a worthy enough question.

      Hell, reading through the fog and doing my own human summary, Google AI basically said fuck AI itself, use your own brain and research.

      Saddest part: go look up ‘nuanced’ in the online Cambridge Dictionary, and for some goddamn reason it links to peanuts in the supposedly relevant links.

      If you think I’m bullshitting, check it out. Even the Cambridge Dictionary is getting hallucinations…

      https://dictionary.cambridge.org/dictionary/english/nuanced

        • over_clox@lemmy.worldOP · 3 days ago

          Oh, now I see which comment you were replying to. No, no, you missed what I was referring to when I mentioned hallucinations of the Cambridge Dictionary above. That part has nothing to do with the Google AI article; it literally has to do with the Cambridge Dictionary website.

          Follow the link above to their definition, scroll down, and look at the related terms: why in the hell is the word ‘peanut’ listed as a related term for ‘nuanced’?

          Are they using AI to generate definitions for the Cambridge Dictionary? That’s about the only way I can see ‘peanut’ somehow creeping into the related words, as a hallucination…

        • over_clox@lemmy.worldOP · 3 days ago

          I wasn’t trying to get it to agree with me or not agree with me; I just wanted to see what it would say about AI technology in general. And within the very article it wrote, it admitted that hallucinations are known to happen in AI results and that you shouldn’t inherently trust it over your own research.

  • adb@lemmy.ml · 4 days ago

    So you trust AI enough to let it tell you that it shouldn’t be trusted but if you can’t trust AI that means it could be trusted which means…

    What did you expect, asking it to confirm your own opinion, an opinion you made very clear by opposing “AI slop” to “actual facts”?

    What makes you think that this community wants to read AI-generated slop?

    What are you trying to get out of this?

    • over_clox@lemmy.worldOP · 4 days ago

      Point is, I was asking AI to analyze itself. Ever considered that?

      That doesn’t mean I trust it at all; in fact, I don’t. But I wanted to test the waters and see something along the lines of what it ‘thinks’ of itself.

      • adb@lemmy.ml · 4 days ago

        Your post was very clear on what you were doing and how you were doing it; my question is why you were doing it, beyond your desire to see what it had to say, which is pretty much implied the moment you willfully prompt an LLM.

        To get to the point I was trying to reach: as I’m sure you know, the output of an LLM is meant to reflect its training data (and any further data it might search the internet for), largely based on a statistical analysis of said data, all of it directed by your prompt.

        Using the term “slop” pushes the LLM to give more weight to the parts of its data where the same word appears, and so on for the rest of your prompt.

        The result is that what the LLM “thinks” is ultimately what humans have tended to write about, barring the possible distortion due to the amount of randomness introduced by the designers (the LLM’s “temperature”).

        These “thoughts” are not based on an analysis of the actual truth behind the words we use, but rather on an analysis of what other words appear alongside the words you have put in your prompt.
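
        (To make the “temperature” bit concrete, here is a rough, hypothetical Python sketch of temperature-scaled sampling. It is not any vendor’s actual code, and the token scores are made up; it only illustrates the idea that lower temperature makes the next-word pick nearly deterministic while higher temperature makes it more random.)

            import numpy as np

            def sample_next_token(logits, temperature=1.0, rng=None):
                # Turn raw model scores (logits) into probabilities with a
                # temperature-scaled softmax, then draw one token index at random.
                rng = rng or np.random.default_rng()
                scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
                scaled -= scaled.max()                      # numerical stability
                probs = np.exp(scaled) / np.exp(scaled).sum()
                return int(rng.choice(len(probs), p=probs))

            # Made-up scores for three candidate next tokens
            logits = [2.0, 1.0, 0.1]
            print(sample_next_token(logits, temperature=0.2))  # nearly always token 0
            print(sample_next_token(logits, temperature=2.0))  # much more varied picks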

        In this case, what your efforts reveal is that the human discourse where the words in your prompt occur most often is most likely to talk about critical thinking, not trusting AI, and whatever else is included in the output you got.

        In other words, this output is not even a reflection of the general credit and trust that humans do or don’t give to LLM outputs, but a reflection of what those of us who use “slop” have written on the subject. So basically you put on a filter for “negative responses only” in the first place, since “slop” is basically a slur at this point.

        Based on my own observation of human discourse on the subject, I find that the output is a rather accurate reflection of what humans write about when it comes to “slop” and actual “facts”. What I make of your results is that the LLM is not only working as intended, but has successfully and accurately given you what you asked of it (a clear and concise document summarizing what humans who have a negative opinion on the subject say about the supposed facts presented in LLM output).

        If anything, having the LLM reply with something else would be a stronger indication of its untrustworthiness, since I’m pretty sure that nobody writes anything along the lines of “AI slop gives us an accurate reflection of well-established facts and the real world” or “You should believe in AI slop, it’s all real-world facts”.

        Rest assured that I remain more interested in what you have to say than in the output of an LLM, and I do put a lot more trust in your capabilities to distinguish between what people generally say and actual facts.

        • over_clox@lemmy.worldOP · 4 days ago

          In case it wasn’t obvious, I’m well aware that LLMs don’t actually ‘think’, hence the reason I used that word in single quotes.

          I just wanted to see what the fuck it might say, when prompted with something that basically asks it something of an insight about its own technology.

          At least it was honest, like basically ‘don’t trust me, do your own research’.

          • Jared White ✌️ [HWC]@humansare.social · 4 days ago

            I’m really struggling with the language you’re using, as “it” doesn’t have any “insight” about its own technology; it doesn’t “know” that it’s an LLM any more than it “knows” it is not a cat or a banana or a stone or a star.

            You’re just reading synthetic text cobbled together from training data taken from humans who have written about the topic. You’re seeing a synthesized amalgam of already-produced human content. That’s all. I guess if you want to be surprised, it’s good that Google didn’t put their thumbs on the scale to suppress this human-sourced information.

            • over_clox@lemmy.worldOP · 4 days ago

              No shit Watson, that’s literally the entire purpose of the irony of my post. I figured people might pick AI apart and perhaps get a little chuckle, not berate me for testing the system.

              How about this: what makes a human smarter than a jumping spider? The question automatically implies that humans are smarter than jumping spiders, but is that even true?

              Like fuck, jumping spiders don’t make landfills full of billions of tons of plastic, but they’ve been around longer than humans. So, what defines smarts?