• Hazzard@lemmy.zip · 15 days ago

    Eh, it’s a fair point. Not trying something like this would essentially be “security by obscurity”, which has repeatedly been proven a mistake.

    Wouldn’t surprise me if OpenAI or someone else already has something like this behind closed doors, but now the developers of tools like Nightshade can start building AI poison that’s more resilient against these kinds of “cleanup” tools.
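
    Just to make the “cleanup” idea concrete, here’s a toy sketch. To be clear, this is not Nightshade’s actual method (which optimizes perturbations against a model’s feature space); it just shows how much of a generic high-frequency perturbation survives the kind of trivial preprocessing a scraper might run.

    ```python
    import numpy as np

    # Toy poison: small high-frequency +/-0.02 pixel noise (NOT Nightshade's
    # optimized perturbation, just a stand-in for illustration).
    rng = np.random.default_rng(0)
    image = rng.uniform(0, 1, (64, 64))                      # stand-in for a real image
    poison = 0.02 * np.sign(rng.standard_normal((64, 64)))
    poisoned = np.clip(image + poison, 0, 1)

    def box_blur(x):
        """3x3 mean filter -- the kind of trivial 'cleanup' a scraper might run."""
        padded = np.pad(x, 1, mode="edge")
        return sum(padded[i:i + 64, j:j + 64] for i in range(3) for j in range(3)) / 9

    # Compare how strong the perturbation is before and after cleanup.
    print("poison energy before cleanup:", np.abs(poisoned - image).mean())
    print("poison energy after cleanup: ", np.abs(box_blur(poisoned) - box_blur(image)).mean())
    ```

    The blur wipes out most of the perturbation, which is roughly the cat-and-mouse game here: a more resilient poison has to survive whatever preprocessing the other side can afford to run.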

    • Feyd@programming.dev · 14 days ago

      This will be a never-ending arms race. There isn’t going to be a permanent obstacle, so all this did was help the bad guys move on to the next stage.

      • Hazzard@lemmy.zip · 14 days ago (edited)

        Exactly, it is an arms race. But if a few students can beat our current best weapons, it’d be terribly naive to think that the multi-billion-dollar companies sinking their entire futures into this, and already amoral enough to scrape content en masse from the entire internet, hadn’t already cracked this and locked everyone involved into serious NDAs.

        Better to know what your enemy has than to just cross your fingers and hope that maybe they didn’t notice this was possible and have been letting us poison the precious AI models they’re sinking billions of dollars into. Having this now lets us build the next version of Nightshade, one that isn’t so trivially defeated.

        • Feyd@programming.dev · 14 days ago

          You’re completely talking past me. Everyone knew it was a flimsy barricade, and that if the LLM companies hadn’t circumvented it yet, they would soon. That doesn’t stop people from continuing to innovate. Publishing the results means there is a public solution anyone can use.

          Do I think it’s the worst thing that could happen? Not really, but your security-through-obscurity argument makes no sense in this context, and it would probably have been better if it hadn’t been done and published, since now every bad actor can use it with minimal effort.

          • Hazzard@lemmy.zip · 14 days ago (edited)

            Mhm, fair enough, I suppose this is a difference in priorities then. Personally, I’m not nearly as worried about small players, like hobbyists and small companies, who wouldn’t have already developed something like this in-house.

            And I brought up “security through obscurity” because I’m somewhat optimistic this can play out the way encryption did: tons of open-source research went into both encryption and breaking it, until we worked out encryption standards we can run at home that current server farms couldn’t break before the heat death of the universe.
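
            To put a number on that claim, here’s a back-of-envelope sketch for brute-forcing a 256-bit key. The guessing rate is an assumption (a generous figure for a large modern server farm, not a benchmark):

            ```python
            # Expected time to brute-force a 256-bit key by trying keys at random.
            keyspace = 2 ** 256                  # possible 256-bit keys, ~1.2e77
            guesses_per_second = 10 ** 18        # assumed: ~1 exa-guess/sec, generous
            seconds_per_year = 60 * 60 * 24 * 365

            # On average you search half the keyspace before hitting the right key.
            years = keyspace / (2 * guesses_per_second * seconds_per_year)
            print(f"{years:.2e} years")          # ~1.8e51 years; the universe is ~1.4e10 years old
            ```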

            Many of the people releasing decryption methods were considered villains, because their work made cracking previously private data easy and accessible, but that research was the only way to get where we are. I’m hopeful that one day we really could make an unbeatable AI poison, so I’m happy to support research that pushes us toward that end.

            I’m just not satisfied with stopping small players from training AI on art without permission while knowingly leaving Google and OpenAI an easy way to bypass it.