• fluxion@lemmy.world · 15 points · 3 days ago

    If AI training is inherently “transformative,” then musicians should be able to perform or sample copyrighted music without paying royalties, because it’s the same fucking shit.

    • rumba@lemmy.zip · 4 points · 3 days ago

      Honestly, yeah. Cover bands should be a thing. And sampling other songs in a rap track is completely transformative.

      If anything, I’d argue that we are too uptight about music copyrights and not uptight enough about AI copyrights.

    • Zetta@mander.xyz · 5 points · 3 days ago (edited)

      But it’s not the same; you don’t understand how LLM training works. The original work is not retained at all. The training data is used to tune pre-existing numbers, and those numbers change slightly as training goes on.

      At no point in time is anything resembling the training data ever present in the 1’s and 0’s of the model.
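
      To make the “tuning pre-existing numbers” point concrete, here is a minimal toy sketch in plain Python (a made-up squared-error objective and made-up numeric “examples,” nothing like a real LLM training loop): each example only nudges the weights by a small gradient step, and all that survives training is a handful of floats.

```python
import random

# "Model": a small set of pre-existing numbers (weights), randomly initialized.
weights = [random.uniform(-1.0, 1.0) for _ in range(8)]

def train_step(weights, example, lr=0.01):
    # One gradient step on a toy squared-error loss: each weight moves a
    # tiny amount toward the current example, scaled by the learning rate.
    return [w - lr * 2.0 * (w - x) for w, x in zip(weights, example)]

# Pretend each "example" is a numeric encoding of one training snippet.
training_data = [[random.uniform(-1.0, 1.0) for _ in range(8)]
                 for _ in range(1000)]

for example in training_data:
    weights = train_step(weights, example)

# All that remains after training is these eight floats. The snippets in
# `training_data` were never copied into the model; each one only nudged
# the weights, which now hold a blurred aggregate of everything they saw.
print(weights)
```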

      You are wrong; bring on the downvotes, uninformed haters.

      FYI, I also agree sampling music should be fine for artists.

      • rumba@lemmy.zip · 1 point · 2 days ago

        I agree with you, but I also would like to make a point.

        We’ve seen trained models reproduce exact text from sections of articles and draw anti-piracy watermarks over generated images.

        Just because it’s turning the content into associations doesn’t mean it can’t, in some circumstances, reproduce exactly what it was trained on. It’s not the intent, but it does happen.

        Midjourney drawing recognisable characters is far more problematic from the copyright and trademark side, but honestly, nothing is stopping you from doing that in Photoshop.

        Millions of unlicensed products are all over eBay, Temu, and Etsy, and we didn’t even need AI to make them.

      • fluxion@lemmy.world · 5 points · 3 days ago

        Yes, weights for individual words/phrases/tokens which, given a particular prompt or set of keywords, might reproduce the original training data almost in its entirety. Hence why it is so obvious when these models have been trained on copyrighted material.

        Similarly, I don’t digitally store music in my head verbatim; I store some fuzzy version that I can still reproduce fairly closely when prompted, and I can still get sued if I’m charging money for performing or recording it, because the “weightings” in my neurons are just an implementation detail of how my brain works, not some active, purposeful attempt to transform the music in any appreciable way.

        • Zetta@mander.xyz · 1 point · 3 days ago

          “given a particular prompt or set of keywords, might reproduce the original training data almost in its entirety”

          What you describe here is called memorization, and it is generally considered a flaw/bug, not a feature; it happens with low-quality training data or not enough data. As far as I understand, this isn’t a problem on frontier LLMs with the large datasets they’ve been trained on.
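
          As a toy illustration of what memorization from too little data looks like (a hypothetical character n-gram sketch, nothing like how a frontier model is built or sized): when every context the model has ever seen has exactly one continuation, “generation” can only replay the training text verbatim.

```python
from collections import Counter, defaultdict

training_text = "the quick brown fox jumps over the lazy dog."
ORDER = 8  # context length; huge relative to the tiny dataset -> overfit

# "Training": tune a table of counts mapping each context to its next char.
counts = defaultdict(Counter)
for i in range(len(training_text) - ORDER):
    context = training_text[i:i + ORDER]
    counts[context][training_text[i + ORDER]] += 1

def generate(seed, max_len=60):
    out = seed
    while len(out) < max_len:
        followers = counts[out[-ORDER:]]
        if not followers:  # unseen context: the model has nothing to say
            break
        out += followers.most_common(1)[0][0]
    return out

# Seeded with the opening words, this "model" reproduces its training data
# exactly, because every context it ever saw has exactly one continuation.
print(generate("the quic"))  # -> "the quick brown fox jumps over the lazy dog."
```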

          Either way, just like a photocopier, an LLM can be used to infringe copyright if that’s what someone is trying to do with it; the tool itself does not infringe anything.

      • fluxion@lemmy.world · 2 points · 3 days ago

        “Samples” of text taken from copyrighted works definitely show up in LLMs, and they are a large part of why these lawsuits are occurring.

        • General_Effort@lemmy.worldOP · 2 points · 3 days ago

          Not comparable.

          Samples are actual copies which are part of a song. Someone might claim that a hip-hop artist just steals the good bits of other people’s songs and mashes them together without contributing any meaningful creativity of their own. Well, history shows that such arguments were quite foolish. Nevertheless, the copies are there, and they do add value to the new song.

          Getting an LLM to spit out training data takes careful manipulation by the user; it rarely happens by accident. It also does not add value to the model. It does the opposite: the possibility of accidentally violating copyright lowers the value.

          • fluxion@lemmy.world · 1 point · 3 days ago

            It only lowers the value if you don’t blanketly shield AI from lawsuits just because it’s “AI” or an “LLM.” There needs to be a higher bar before you can consider the input “transformed,” otherwise it will continue to be abused in the laziest/cheapest way possible.

              • fluxion@lemmy.world · 1 point · 3 days ago

                It means that loading copyrighted material into your training data does not inherently absolve you of copyright liability; otherwise there’s no reason not to have ChatGPT spit out full Dr. Seuss books if you ask for a story.

                • General_Effort@lemmy.worldOP · 2 points · 2 days ago

                  Yes. Otherwise it wouldn’t lower the value.

                  There is a lot of disinformation being spread, maybe to influence juries, or maybe to undermine the already beleaguered rule of law in the US. The truth is that there is very little unexpected about these judgments. That’s how fair use works.

  • thedruid@lemmy.world · 3 points · 3 days ago

    What law? What courts? All we have now are fascist rubber stamps.

    Capitalism without socialism brings fascism. Here you are.

      • thedruid@lemmy.world · 4 points · 3 days ago

        It’s more of a general, overall statement: making the dollar the sole king means you ignore the needs of the people and end up with a fascist state.

        Similarly, if all that is done is focus on the people and not the economy, we’ll end up in an authoritarian state.

        That’s all. No deep legalese or philosophy.

    • General_Effort@lemmy.worldOP · 1 point · 2 days ago

      The answer is yes. There is a lot of disinformation being spread, maybe to influence juries, or maybe to undermine the already beleaguered rule of law in the US. The truth is that there is very little unexpected about these judgments. That’s how fair use works.