Exactly, it is an arms race. But if a few students can beat our current best weapons, it’d be terribly naive to think that the multi-billion-dollar companies sinking their entire futures into this, and already amoral enough to be scraping content en masse from the entire internet, hadn’t already cracked this and locked everyone involved into serious NDAs.
Better to know what your enemy has than to just cross your fingers and hope that maybe they didn’t notice this was possible and have just been letting us poison the precious AI models they’re sinking billions of dollars into. Having this now lets us build the next version of Nightshade that isn’t so trivially defeated.
You’re completely talking past me. Everyone knew it was a flimsy barricade, and that if the LLM companies hadn’t circumvented it already, they would soon. That doesn’t stop people from continuing to innovate. Publishing the results means there is a public solution anyone can use.
Do I think it’s the worst thing that could happen? Not really, but your security-through-obscurity argument makes no sense in this context, and it would probably have been better if it hadn’t been done and published, since now every bad actor can use it with minimal effort.
Mhm, fair enough, I suppose this is a difference in priorities then. Personally, I’m not nearly as worried about small players, like hobbyists and small companies, who wouldn’t have already developed something like this in house.
And I brought up “security through obscurity” because I’m somewhat optimistic that this can work out like encryption has: tons of open-source research was done into encryption and decryption until we worked out encryption standards that we can run at home, yet that current server farms couldn’t break before the heat death of the universe.
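For a sense of scale on that claim, here’s a rough back-of-envelope sketch in Python. The 10^18 guesses per second is an assumption I’m making up as a deliberately generous planet-scale brute-force rate, not a real benchmark:

    # Rough sketch: worst-case time to exhaust a 256-bit keyspace.
    GUESSES_PER_SECOND = 10**18        # assumed, absurdly generous attack rate
    KEYSPACE = 2**256                  # e.g. the AES-256 keyspace
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    years = KEYSPACE / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{years:.2e} years")        # ~3.67e+51 years, vs. ~1.4e+10 years since the Big Bang

Even with those lopsided assumptions in the attacker’s favor, the search time dwarfs the age of the universe by over forty orders of magnitude.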
Many of the people releasing decryption methods were considered villains, because their work made hacking previously private data easy and accessible. But that research was the only way to get to where we are now. I’m hopeful that one day we actually could make an unbeatable AI poison, so I’m happy to support research that pushes us toward that end.
I’m just not satisfied preventing small players from training AI on art without permission while knowingly leaving Google and OpenAI an easy way to bypass it.
Yeah, there’s the difference. I’m not convinced a robust poison exists, but I’d love to be wrong.
Amen to that, here’s to hoping.