• massi1008@lemmy.world · 3 days ago

    > Build a yes-man

    > It is good at saying “yes”

    > Someone asks it a question

    > It says yes

    > Everyone complains

    ChatGPT is a (partially) stupid technology without enough safeguards. But fundamentally it’s just autocomplete. That’s the technology. It did what it was supposed to do.

    I hate to defend OpenAI on this, but if you’re so mentally sick (dunno if that’s the right word here?) that you’d let yourself be driven to suicide by some online chats [1], then the people who gave you internet access are to blame too.

    [1] If this were a human encouraging his suicide, this wouldn’t be newsworthy…

    • lmmarsano@lemmynsfw.com · 1 day ago

      While I agree, the markdown guide is right there in the editor toolbar along with formatting buttons, and we don’t need to break semantic structure like that.[1]


      1. You can toggle the view-source button to see how to write this footnote ↩︎

    • SkyezOpen@lemmy.world · 3 days ago

      You don’t think pushing a glorified predictive-text keyboard as a conversation partner is the least bit negligent?

      • massi1008@lemmy.world · 3 days ago

        It is. But the ChatGPT interface reminds you of that when you first create an account. (At least it did when I created mine.)

        At some point we have to give the responsibility to the user. Just like with Kali Linux or other pentesting tools. You wouldn’t (shouldn’t) blame their makers for the latest ransomware attack either.

        • raspberriesareyummy@lemmy.world · 3 days ago

          > At some point we have to give the responsibility to the user.

          That is such a fucked-up take on this. Instead of placing the responsibility on the piece-of-shit billionaires force-feeding this glorified text prediction to everyone, and on the politicians allowing minors access to smartphones, you turn off your brain and hop straight over to victim-blaming. I hope you will slap yourself for this comment after some time to reflect on it.

    • KoboldCoterie@pawb.social · 3 days ago

      > If this was a human encouraging him to suicide this wouldn’t be newsworthy…

      Like hell it wouldn’t, do you live under a rock?

    • Live Your Lives@lemmy.world · 3 days ago

      I get where you’re coming from, because people and those directly over them will always bear a large portion of the blame, and you can only take safety so far.

      However, that blame can only go so far as well, because the designers of a thing who overlook or ignore safety loopholes should bear responsibility for their failures. We know some people will always be more susceptible to implicit suggestions than others are and that not everyone has someone who’s responsible over them in the first place, so we need to design AIs accordingly.

      Think of it like blaming only an employee’s shift supervisor when that employee dies in a work environment that is itself unsafe. Or think of it like blaming only a gun user and not the gun laws. Yes, individual responsibility is a thing, but the system as a whole has a responsibility all its own.

      • lmmarsano@lemmynsfw.com · 1 day ago

        > we need to design AIs accordingly

        No, we don’t. The harm was self-inflicted. The reader had unlimited time to contemplate their actions before committing them. This is entirely on the user.

        • Live Your Lives@lemmy.world · 10 hours ago

          Say I custom-built a gun and gave it to someone I know who obviously has severe mental issues: do you think I would bear no responsibility for the actions that other person takes? By your logic it seems I shouldn’t, since the mentally ill person has all the time in the world before they end up making a terrible decision.

          • lmmarsano@lemmynsfw.com · 6 hours ago

            > do you think I would have no responsibility for the actions that other person takes?

            Yep: not your problem. That person needs a handler, and you’re not theirs. We can give people informed choices. We have no duty to defend someone from their own irrationality.

    • missingno@fedia.io · 2 days ago

      If this is what ChatGPT is “supposed to do” then that’s the problem. A yes-man that will say yes to anything, even suicide, is dangerous.