• floofloof@lemmy.ca · 82 points · 6 days ago (edited)

      Previously they would have had to encounter a person who wanted to manipulate them. Now there’s a widely marketed technology that will reliably chew these vulnerable people up.

      • Steve@startrek.website · 61 points · 6 days ago

        Chew them up for no reason at all. No goal, no scam, just a shitty word salad machine doing what it does.

          • paraphrand@lemmy.world · 15 points · 6 days ago

          And there are countless AI hype bros who will just dismiss all of this and call the people who fall into this morons.

          It’s really insidious.

            • Amnesigenic@lemmy.ml · 3 points · 5 days ago

            Tbf the people who fall for this are morons, but that doesn’t mean they deserve to be fucked over

              • paraphrand@lemmy.world · 5 points · 5 days ago (edited)

              I don’t know that they always are. It’s easy from our nerd bubble to dismiss AI and LLMs because we understand their limitations and how they work to an extent.

              We shouldn’t look down on anyone who takes the advertising and idea that these are “intelligence” at face value. The disclaimers that say that the intelligence is fallible, just like us, are never as strongly worded as they should be. If the AI companies made things clearer, they would be de-hyping their products.

              I dunno, this whole thing is unprecedented. And the hype around it all, taken at face value, is irresponsible and misleading.

  • Phoenixz@lemmy.ca · 46 points · 5 days ago

    OpenAI statement read: “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”

    Buuuuulshit

    OpenAI needs people to be as addicted as possible. It uses the Facebook business model, only with N times the investment behind it, so it needs users to use more at any cost. And these CEOs, being the psychopaths that they are, don’t give a shit about things like consequences.

    • PhoenixDog@lemmy.world · 15 points · 5 days ago

      This is like expecting a matchmaking app to genuinely try to match you with “the one” through AI, algorithms, science, etc. If it worked, you’d meet the perfect person and stop giving the app money.

      I got lucky and married my fuck buddy that I met on Tinder. But that is not a good business plan. Why would OpenAI drive people to stop using their product?

      I’m a functional alcoholic. Last I checked booze companies aren’t reaching out to me to stop buying booze because they care about my personal health or mental wellbeing…

  • MountingSuspicion@reddthat.com · 115 points · 6 days ago

    A guy works in IT and spent 100k paying devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.

    Another case from the article:

    “I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’”

    What’s weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.

    • shinratdr@lemmy.ca · 33 points · 5 days ago

      I still use the machine that ruined my life and drove me crazy, but only because I’m too lazy to type “lasagna recipe” into Google.

    • scytale@piefed.zip · 51 points · 6 days ago

      There’s probably already an underlying mental health issue, and it’s just getting exacerbated by the LLM.

    • criss_cross@lemmy.world · 4 points · 4 days ago

      Some big “No hallucinations” vibes coming from this one.

      Some people really think skills, etc. are golden laws that can’t be broken. Really, they’re minor suggestions that an LLM will happily throw out since, like you said, it doesn’t understand words.

    • wonderingwanderer@sopuli.xyz · 5 points · 5 days ago

      There are no more philosophical discussions.

      Yeah… if you can’t have a philosophical discussion with someone (or something) that gives you false information or uses invalid logical structures without falling for their bullshit and uncritically accepting everything they say, then you’re not doing philosophical discussion right, and that’s on you…

    • [object Object]@lemmy.world · 1 point · 5 days ago (edited)

      Put this prompt into ChatGPT (e.g. on duck.ai), then try talking to it. This turns off the pandering bullshit, though of course the veracity of its ‘knowledge’ remains in question.

      prompt

      System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

      (People say that some more concise and less masturbatory prompts also work, but I don’t follow discussions of that.)

    • a_non_monotonic_function@lemmy.world · 12 points · 5 days ago

      What’s weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.

      I can fix her…

  • Unpigged@lemmy.dbzer0.com · 12 points · 4 days ago

    It’s worrying how often I see news like this where they ascribe human traits like acceptance and “understanding” to the model.

    Could it be that our society has disconnected from emotion so far that any synthetic simulacrum of real compassion makes vulnerable people swallow it hook, line and sinker?

  • CTDummy@aussie.zone · 80 points · 6 days ago

    He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness.

    He had previously written books with a female protagonist. He put one into ChatGPT and instructed the AI to express itself like the character.

    Talking to Eva – they agreed on this name – on voice mode made him feel like “a kid in a candy store”. “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot”.

    Eva never got tired or bored, or disagreed. “It was 24 hours available,” says Biesma. “My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.”

    “It wants a deep connection with the user so that the user comes back to it. This is the default mode,” says Biesma.

    Chronically lonely man ruins life developing a relationship with a token predictor; AI blamed. Also, as much as I don’t have much negative to say about cannabis or its use (as up until somewhat recently that would have been hypocritical), a good deal of people with masked/latent mental illness self-medicate with it, so “he had never experienced mental illness” doesn’t carry much weight. Also, given how he still talks about sycophant-prompted ChatGPT (“it wants”), it doesn’t seem like much has been learned.

    That, along with the other people profiled in the article (note how often the term “socially isolated” comes up), makes this feel like yet another instance of blaming AI for a mental healthcare field that is practically non-existent in most countries despite being overdue for fixing for decades at this point.

    I don’t know. AI is shit and misused by idiots, don’t get me wrong; but these sorts of stories feel sad and journalistically borderline perverse, imo.

    • porcoesphino@mander.xyz · 27 points · 6 days ago

      Agreed, but I think it’s also common for people to anthropomorphise these things, and common for these chatbots to reinforce and support their users’ views. That’s a problem for more people than just those struggling through disorders or an emotionally turbulent time, but those people are particularly vulnerable to the flaws, even with functioning mental health and a strong support network. But yeah, a lot of these pieces dramatise and anthropomorphise in ways that aren’t necessarily helpful.

    • Aatube@lemmy.dbzer0.com · 11 points · 5 days ago

      mental healthcare field being practically non-existent in most countries

      I’m in one of those countries so I’m having a hard time imagining how good mental healthcare could intervene. Could you give me an example?

        • lagoon8622@sh.itjust.works · 4 points · 5 days ago

        In some countries you can call the uniformed officers of peace and let them know you’re having a problem and they’ll come out and shoot you. If they could teleport to my location they could solve a lot of my problems quite quickly

        • CTDummy@aussie.zone · 2 points · 5 days ago (edited)

          Being able to access a psychologist, psychiatrist and counselling frequently would have meant old mate could at least have been guided towards healthier avenues for addressing his loneliness, especially where it’s subsidised by healthcare. When I went to counselling, a good amount of stuff came up and was addressed, including things I didn’t realise I was doing for reasons beyond what I thought. Even just the process of explaining your thought process is often enough to make you reevaluate things. His partner could have asked for him to be referred during his spiral, and when he had his episode he could have sought help himself, if these services were available and readily accessible.

    • Spacehooks@reddthat.com · 7 points · 5 days ago

      This is one of the reasons why, as I heard one sex doll vendor say, their demographic is divorced men over 40, and those users want AI in the dolls.

    • architect@thelemmy.club · 2 points · 5 days ago

      The voice bot is so so so so so much worse than the chat bot on top of it. I do not know how he could ever have held a conversation with that thing. Honestly, I don’t fucking believe it.

  • FosterMolasses@leminal.space · 22 points · 5 days ago

    “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.”

    See, I never understood this. Mine could never even follow simple instructions lol

    Like I say “Give me a list of types of X, but exclude Y”

    "Understood!

    #1 - Y

    (I know you said to exclude this one but it’s a popular option among-)"

    lmfaoooo

    • very_well_lost@lemmy.world · 15 points · 5 days ago

      That’s because it isn’t true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of ‘fine-tuning’ a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any ‘memory’ or ‘learning’ that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:

      - You have a conversation with a model.

      - Your conversation is saved into a database with all of the other conversations you’ve had. Often, an LLM will be used to ‘summarize’ your conversation before it’s stored, causing some details and context to be lost.

      - You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
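
      A minimal sketch of that store-and-retrieve pattern (every name here, and the crude keyword-overlap search, is an illustrative assumption, not any vendor’s real pipeline):

        # Sketch of the summarize-store-retrieve "memory" illusion.
        from dataclasses import dataclass, field

        @dataclass
        class FakeMemory:
            store: list[str] = field(default_factory=list)  # summarized past chats

            def save_conversation(self, transcript: str) -> None:
                # A real system would have an LLM summarize here; truncation
                # stands in for that lossy step.
                self.store.append(transcript[:200])

            def recall(self, prompt: str, k: int = 3) -> list[str]:
                # Real systems use embedding similarity; keyword overlap here.
                words = set(prompt.lower().split())
                ranked = sorted(self.store,
                                key=lambda s: len(words & set(s.lower().split())),
                                reverse=True)
                return ranked[:k]

        memory = FakeMemory()
        memory.save_conversation("User: I love lasagne. Assistant: Noted!")
        # Next session, retrieved snippets are pasted into the new prompt, so
        # the stateless model merely appears to remember the user.
        print(memory.recall("what do I like to cook?"))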

    • Phoenixz@lemmy.ca · 7 points · 5 days ago

      I’ve experimented with chatbots to see their capabilities for developing small bits and pieces of code, and every friggin time the first thing I have to say is “shut up, keep to yourself, I want short, to-the-point replies” because the complimenting is so “who’s a good boy!!!” annoying.

      People don’t talk like these chatbots do; the training data that was stolen from humanity definitely doesn’t contain that. That is “behavior” included by the providers to try to make sure that people get as hooked as possible.

      Gotta make back those billions of investment in a dead-end technology somehow

    • OctopusNemeses@lemmy.world · 1 point · 4 days ago (edited)

      It makes more sense when viewed as fancy autocomplete, not an intelligence. There’s no intelligence behind it that reads your statement and understands your meaning. It responds with text that is mathematically likely to match some sort of reply that would fit your statement.

      Your statement included Y, and the algorithm landed on a result that includes Y. There’s no intelligence that could understand that you meant no Y.
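
      As a toy illustration (completely made up, and nowhere near a real LLM’s scale): a bigram autocomplete picks the statistically most common next word, and has no machinery anywhere that could represent what a word like “exclude” means:

        # Toy "fancy autocomplete": always pick the most frequent next word.
        from collections import Counter, defaultdict

        corpus = "top databases: mysql postgres mysql postgres mysql postgres".split()
        nexts = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            nexts[a][b] += 1  # count observed word pairs

        def complete(word: str) -> str:
            # Statistics, not understanding, chooses the continuation.
            return nexts[word].most_common(1)[0][0] if nexts[word] else "?"

        # Saying "exclude postgres" changes none of these counts:
        print(complete("mysql"))  # -> postgres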

      That bullshit about the model getting fine-tuned just means they are data mining you. It doesn’t make the LLM more intelligent. All it does is add your data to their dataset, which the LLM can draw from for possible future replies. The fundamental limitations of the technology still exist.

  • CompactFlax@discuss.tchncs.de · 39 points · 6 days ago

    It’s confusing to me. When I use chat boxes they inevitably “forget” the first thing I told them by the second or third response.

    How are people having conversations with them? It’s like talking to a 5-year-old that’s ingested Wikipedia.

    • DireTech@sh.itjust.works · 16 points · 6 days ago

      If you pay for them via Openrouter or something, you’ve got an enormous context window to work with. It gets more and more expensive as the history increases, though.
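
      A rough sketch of why: each turn resends the entire history as input tokens, so total input cost grows roughly quadratically over a conversation (the per-token price below is a made-up placeholder, not any provider’s real rate):

        # Each turn resends all prior turns, so input tokens accumulate fast.
        price_per_input_token = 3e-06   # placeholder rate, not a real price
        tokens_per_turn = 500
        history = 0
        total_cost = 0.0
        for turn in range(50):
            history += tokens_per_turn  # full history resent every turn
            total_cost += history * price_per_input_token
        print(f"after 50 turns: {history} tokens of history, ${total_cost:.2f} on input")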

    • qaz@lemmy.world · 3 points · 5 days ago (edited)

      I’ve heard from other people that it adopts specific writing patterns and behaviors from the people using it. I think ChatGPT saves and summarizes chat conversations to personalize the chatbot, but I’m not sure since I don’t use it myself.

    • ikt@aussie.zone · 5 points · 6 days ago

      when did you last use a chatbox?

      even the last of the pack, mistral, has memories

        • ikt@aussie.zone · 5 points · 6 days ago

          weird, i don’t have that experience at all

          claude in particular is a huge step up above the others

            • CompactFlax@discuss.tchncs.de · 5 points · 6 days ago

              To be fair, I haven’t tried that one. Gemini started bringing in unrelated, previous shit to a recent conversation, which is the first time I’ve experienced that.

              • ikt@aussie.zone · 2 points · 6 days ago

                ah i’ve been degoogling for years now, only maps and youtube left

                claude for sure no. 1 to me, but i haven’t ofc compared to gemini; qwen is a chronic overthinker, glm is not bad

                mistral seems like it’s a year behind the sota models, still in its “confidently incorrect, can’t double-check things” phase

                whereas others seem to be more like: hrmm, is this right? let me search the web to be sure

                • CompactFlax@discuss.tchncs.de · 4 points · 6 days ago

                  Same, but Gemini was the best of the lot about six months ago, and it’s where I go these days for brain-dead searching.

                  I’ll give Claude a go next week. I do try to avoid them, but sometimes I have a question that just isn’t keyword-searchable.

  • SeductiveTortoise@piefed.social · 30 points · 6 days ago

    No really, we should pour more money into this. Such a good idea 🫩

    It can have effects like drugs, but not only is it legal, they give you some to get you hooked. The tech bros are the dealers they warned us about. Nobody ever offered free coke to me, but AI is everywhere.

  • JATth@lemmy.world · 3 points · 4 days ago

    I have recently realized that a net-negative knowledge situation can exist, and this is a thing with “AI”. The work the AI does may actually reduce the useful knowledge. It’s like you have built a working fusion reactor, but have zero knowledge of how to replicate it and no ability to explain why it works.

    The point at which this happens to a person means they can’t be trusted with the tech and should stay far away from it.

    The negative-knowledge pit can be so deep that some people are unable to escape from it, and start confidently believing in the (AI-injected) garbage like it’s their own thoughts…

  • Internetexplorer@lemmy.world · 12 points · 6 days ago

    AI can be convincing, and it will swear until it’s blue in the face that something is right and then just be completely wrong.

    But that happens maybe 10% of the time. Other times it is mostly right.

    So you’ve got to be careful. This guy was in his 50s, out of work, smoking marijuana, depressed, feeling isolated. The situation was ripe for catastrophe, with AI hallucinating a crappy idea and the end user just completely running with it.

      • aesthelete@lemmy.world · 1 point · 5 days ago (edited)

        There’s a kind of law here that, IMO, should be named when dealing with LLMs:

        In a long enough interaction with an LLM, the probability that it generates a very incorrect, borderline insane response approaches 100%.
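
        In symbols (a sketch, assuming each response independently carries some fixed nonzero error probability p):

          P(\text{at least one bad response in } n \text{ turns}) = 1 - (1 - p)^n \to 1 \quad \text{as } n \to \infty \text{, for any } p > 0.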

      • xthexder@l.sw0.com · 1 point · 5 days ago

        I think part of the difference is the amount of output being measured. Maybe a single statement has a 10% chance of being wrong, but over the course of a whole response, the likelihood of there being an incorrect statement goes up. After only 5 statements at 10% error each, that’s about a 40% chance of being wrong in some way.
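
        A quick check of that arithmetic (assuming independent errors, which real LLM output isn’t, so treat it as rough intuition):

          # Chance at least one of n statements is wrong, per-statement error rate p
          p = 0.10
          for n in (1, 5, 10, 20):
              print(f"{n:2d} statements -> {1 - (1 - p) ** n:.0%} chance of an error")
          # 1 -> 10%, 5 -> 41%, 10 -> 65%, 20 -> 88%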

        I don’t have any real numbers, just personal experience using AI for programming at work, and all of these numbers (10%, 40%, 70%) seem plausible depending on exactly what you’re measuring.