If AI training is inherently “transformative”, then musicians should be able to perform or sample copyrighted music without paying royalties, because it’s the same fucking shit.
Honestly, yeah. Cover bands should be a thing. And sampling other songs in a rap track is completely transformative.
If anything, I’d argue that we are too uptight about music copyrights and not uptight enough about AI copyrights.
But it’s not the same, you don’t understand how LLM training works. The original piece of work is not retained at all; the training data is used to tune pre-existing numbers, and those numbers change slightly as training goes on.
At no point in time is anything resembling the training data ever present in the 1’s and 0’s of the model.
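A minimal sketch of the claim above, using a toy two-number model in plain Python rather than an actual LLM (the data, learning rate, and loop counts are all made up for illustration): each example nudges pre-existing numbers a little and is then thrown away, so the trained “model” contains only the numbers, not the data.

```python
# Toy illustration (NOT an actual LLM): training tunes pre-existing
# numbers; the training examples themselves are never stored.

# "Model" = two pre-existing numbers, set before any data is seen.
w, b = 0.5, 0.5

# Hypothetical training data: pairs that follow y = 2x + 1.
data = [(x / 10, 2 * (x / 10) + 1) for x in range(10)]

lr = 0.1  # learning rate: how slightly each example nudges the numbers
for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y
        # Each example only shifts the numbers a little, then is discarded.
        w -= lr * err * x
        b -= lr * err

# The trained "model" is still just two floats (close to 2 and 1);
# none of the (x, y) pairs exist anywhere inside it.
print(round(w, 2), round(b, 2))
```

Whether anything “resembling” the data survives in the numbers is exactly what the memorization debate further down this thread is about; this sketch only shows the mechanical point that training stores weights, not copies.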
You are wrong, bring on the downvotes uninformed haters.
FYI I also agree sampling music should be fine for artists
I agree with you, but I also would like to make a point.
We’ve seen trained models produce exact text from sections in articles and draw anti-piracy watermarks over images.
Just because it’s turning the content into associations doesn’t mean it can’t, in some circumstances, reproduce exactly what it was trained on. It’s not the intent, but it does happen.
Midjourney drawing recognisable characters is far more problematic from the copyright and trademark side, but honestly, nothing is stopping you from doing that in Photoshop.
Millions of unlicensed products are all over ebay, temu and etsy and we didn’t even need AI to make them.
Yes, weights for individual words/phrases/tokens which, given a particular prompt or set of keywords, might reproduce the original training data almost in its entirety. Hence why it is so obvious when these models have been trained on copyrighted material.
Similarly, I don’t digitally store music in my head verbatim; I store some fuzzy version that I can still reproduce fairly closely when prompted, and I’d still get sued if I were charging money for performing or recording it, because the “weightings” in my neurons are just an implementation detail of how my brain works, not some active/purposeful attempt to transform the music in any appreciable way.
given a particular prompt/keyword, which might reproduce the original training data almost in its entirety given a similar set of prompts or keywords.
What you describe here is called memorization, and it is generally considered a flaw/bug, not a feature; it happens with low-quality training data or not enough data. As far as I understand, this isn’t a problem on frontier LLMs with the large datasets they’ve been trained on.
Either way, just like a photocopier, an LLM can be used to infringe copyright if that’s what someone is trying to do with it; the tool itself does not infringe anything.
Yeah. The US Supreme Court made a serious mistake when it killed hip hop.
Still, samples are copies, even if just copies of a short part of the original. It’s not the same.
“Samples” of text taken from copyrighted works definitely show up in LLM models, and they are a large part of why these lawsuits are occurring.
Not comparable.
Samples are actual copies which are part of a song. Someone might claim that a hip hop artist just steals the good bits of other people’s songs and mashes them together without contributing any meaningful creativity on their own. Well. History shows that such arguments were quite foolish. Nevertheless, the copies are there, and they do add value to the new song.
To get an LLM model to spit out training data takes careful manipulation by the user. This rarely happens by accident. It also does not add value to the model. It does the opposite: The possibility of accidentally violating copyright lowers the value.
It only lowers the value if you don’t blanketly shield AI from lawsuits just because “AI” or “LLM”. There needs to be a higher bar before you can consider the input “transformed”; otherwise it will continue to be abused in the laziest/cheapest way possible.
I don’t know what that is talking about.
It means loading copyrighted material into your training data does not inherently absolve you of copyright liability; otherwise there’s no reason not to have ChatGPT spit out full Dr. Seuss books if you ask for a story.
Yes. Otherwise it wouldn’t lower the value.
There is a lot of disinformation being spread, maybe to influence juries, or maybe to undermine the already beleaguered rule of law in the US. The truth is that there is very little unexpected about these judgments. That’s how fair use works.
What law? What courts? All we have now are fascist rubber stamps.
Capitalism without socialism brings fascism. And here we are.
Huh? The court affirmed that there are limits to private property. Not sure how to interpret that comment.
It’s more a general overall statement. It means that making the dollar the sole king means you ignore the needs of the people and end up with a fascist state
Similarly, if all the focus is on the people and not the economy, we’ll end up in an authoritarian state.
That’s all. No deep legalese or philosophy.
I suppose I will continue to repost this, which summarizes the problem well, I think.
The answer is yes. There is a lot of disinformation being spread, maybe to influence juries, or maybe to undermine the already beleaguered rule of law in the US. The truth is that there is very little unexpected about these judgments. That’s how fair use works.
Great. Another fucking idiot judge that doesn’t understand technology.
Pretty much true for every lawyer legislating the industry.
Maybe he doesn’t. Someone like Alsup is an exception. Doesn’t seem to make much of a difference in the end.