• karashta@piefed.social · 97 points · 4 days ago

    You can always tell the people with no artistic talent, because they don’t understand how AI is different from digital art software like Photoshop. And they seem to think that artists should just accept having their life’s work stolen and vomited up as slop.

    Fuck anyone who thinks like this. They think they are entitled to my creativity without doing any of the work.

    “Everyone is doing it.” The absolute degeneration of morality in this era is mind-boggling. Have no morals, seek only profit. The fact that so many people cannot take a stand for integrity because of perceived pragmatism is sickening.

    I hope anyone who thinks like this gets the AI-slop-filled hell they deserve. And I hope their careers are the next to be axed and replaced by the plagiarism machines.

    • shani66@ani.social · 16 points · 4 days ago

      The worst are those people that think they are artists because they typed in a prompt. It’s delusional!

    • Grimy@lemmy.world · 24 points · 4 days ago

      I guess it’s easy to win an argument if you put extreme views in everyone’s mouth and argue against that.

      I doubt anyone thinks AI has more value than human-made work. Most are just being pragmatic, knowing that AI isn’t going away and most indie teams don’t have the budget for a dedicated texture guy. There is simply more to gain than to lose, and applauding copyright companies and data aggregators doesn’t solve the issues; it just gives a handful of companies a monopoly when they push legislation with the help of your fervent support.

      • Saizaku@lemmy.dbzer0.com · 21 points · 4 days ago

        AI companies are the biggest data aggregators though, and they indiscriminately scrape literally everything. I am personally completely against copyright and patent law specifically. But sometimes, like in this case, they can be necessary tools. There are probably better ways to protect against AI, but none that are recognized in our current framework of how society functions. AI companies are literally stealing everything ever posted online, because they couldn’t exist without all that data, and then selling it back to people in the form of tools, all while destroying the environment with increasingly gigantic and power-hungry data centers, and destroying the tech consumer market by buying up components, or straight-up component producers, and taking them off the consumer market.

        • Grimy@lemmy.world · 8 points · edited · 4 days ago

          By data aggregators, I strictly mean websites like Reddit, Shutterstock, DeviantArt, etc. Giving them the keys would drive up the cost of building a state-of-the-art model so much that any open-sourcing would be literally impossible. These models already cost in the low millions to develop.

          Take video generation, for instance: almost all the data is owned by YouTube and Hollywood. Google wanted to charge $300 a month to use its model, but instead we have free models that can run on high-end consumer hardware.

          Scraping has been accepted for a long time and making it illegal would be disastrous. It would make the entry price for any kind of computer vision software or search engine incredibly high, not just gen AI.

          I’d love to have laws that forced everything made with public data to be open source but that is not what copyright companies, AI companies and the media are pushing for. They don’t want to help artists, they want to help themselves. They want to be able to dictate the price of entry which suits them and the big AI companies as well.

          I’m all for laws to regulate data centers and manufacturing, but again, that’s not what is being pushed for. Most anti-AI peeps seem to be helping the enemy a lot more than they realize.

          • Zos_Kia@lemmynsfw.com · 4 points · 3 days ago

            “I’m all for laws to regulate data centers and manufacturing, but again, that’s not what is being pushed for. Most anti-AI peeps seem to be helping the enemy a lot more than they realize.”

            I’m guessing there’s a lot of controlled opposition which is incredibly cheap to produce, doesn’t leave much of a paper trail, and is reasonably effective.

    • Zos_Kia@lemmynsfw.com · 4 points · 3 days ago

      Yeah, the AI slop hell that gives us terrible slop like Expedition 33. Shudder at the thought.

            • Zos_Kia@lemmynsfw.com · 1 point · 2 days ago

              But I was responding to a specific comment with enough context to understand what I mean. If something’s unclear about it, you can ask about it in a non-passive-aggressive way and maybe I can help you out.

              • Nate Cox@programming.dev · 2 points · 2 days ago

                You made an assertion that AI “gave us” Expedition 33.

                The article explains that the e33 dev team did try genAI and didn’t like it, so they didn’t continue using it.

                Ergo, genAI did not give us e33.

  • iAmTheTot@sh.itjust.works · 58 points · 4 days ago

    “When AI first came out in 2022, we’d already started on the game. It was just a new tool, we tried it, and we didn’t like it at all. It felt wrong.”

    I’m willing to give them the benefit of the doubt on this one. I’m pretty hardcore anti-AI these days, but when it was just hitting the masses and it was the shiny new toy, I was ignorant about the specifics and tried it out here and there. So this specifically resonates with me.

    Broche then drew a line in the sand. He mused that it would be hard to predict how AI might be used in the gaming industry in the future, and declared, “But everything will be made by humans, by us.”

    I hope they stick to their word on this, but only time will tell in that regard.

  • yermaw@sh.itjust.works · 6 points · 3 days ago

    I really hope their next game smashes it too after this statement. It has the potential to pave the way for the rest of the industry.

  • Rikudou_Sage@lemmings.world · 44 points · 4 days ago

    The anti-AI crowd is getting crazy. Everyone uses it during development.

    It’s a tool, for fuck’s sake. What’s next? Banning designers from using Photoshop because it’s faster, and thus takes jobs away from the multiple artists who would otherwise have to be employed?

      • Rikudou_Sage@lemmings.world · 5 points · 3 days ago

        I did and you’re right. That’s why I’m firmly in the “it’s just a fucking tool” gang.

        Both people who treat it like a messiah and those who treat it like the worst thing ever seem pretty much insane to me.

        • Pika@sh.itjust.works · 4 points · edited · 4 days ago

          You are 1000% correct. I’ve been yelled at, or have witnessed it a few times: people making a huge stink who clearly can’t differentiate between the types of “AI” well enough to even know what they’re complaining about.

          They just know “AI = Bad” so they get their pitchforks out.

    • woelkchen@lemmy.world · 59 points · 4 days ago

      Use your AI generation all you want but don’t enter a painting contest using machine generated content trained on other people’s work without their consent.

      • village604@adultswim.fan · 9 points · 4 days ago

        Do human artists usually get consent before training on content freely available on the Internet?

        There are plenty of reasons to hate on AI, but this reason is just being pissed that a silicon brain did it instead of a carbon one.

        • PerogiBoi@lemmy.ca · 27 points · 4 days ago

          The fact that you’re comparing human artists to slop machines is really sad. There is no “silicone brain” making any of this stuff. I think you should take a few minutes and learn how this stuff works before making these comparisons.

          • village604@adultswim.fan · 2 points · edited · 4 days ago

            Right, because computers don’t use silicone.

            But Gen AI is modeled after the way the brain works, so maybe you need to learn how it works before arguing against an accurate comparison.

            • PerogiBoi@lemmy.ca · 16 points · edited · 4 days ago

              Wow thank you for this comment. It helps detail your level of knowledge on this subject, which is very helpful to myself and others. There is nothing else to discuss here on my end.

              • village604@adultswim.fan · 4 points · edited · 4 days ago

                Alrighty, so generative AI works by training on data; the model transforms that data and then generates something based on a prompt and how that prompt relates to the training data it has.

                That’s not functionally different from how commissioned human artists work. They train on publicly available works; their brains transform and store that data and use it to generate a work based on a prompt. They even often directly use a reference work to generate their own, without permission from the original artist.

                Like I said, there are tons of valid criticisms of Gen AI, but this criticism just boils down to “AI bad because it’s not a human exploiting others’ work.”

                And all of this is ignoring the fact that ethically trained Gen AI models exist.

                • Nate Cox@programming.dev · 13 points · 4 days ago

                  GenAI is a glorified Markov Chain. Nothing more.

                  It is a stochastic parrot.

                  It does not think, it is not capable of creating novel new works, and it is incapable of the emotion necessary to be expressive.

                  All it can do is ingest content and replicate it. This is not the same as a human seeing someone’s work and being inspired by it to create something uniquely their own in response.
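For reference, the Markov chain invoked above is a real and much simpler technique: map every word (or n-gram) to the words observed after it, then sample a random walk. A toy sketch in Python (function names here are invented for illustration; actual LLMs are transformer networks, so whether the comparison holds is precisely what this thread is arguing about):

```python
import random
from collections import defaultdict

def train(text, order=1):
    """Map each n-gram of `order` words to the words seen right after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return dict(chain)

def generate(chain, max_words=10):
    """Random-walk the chain from a random starting n-gram."""
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(max_words):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: this n-gram was never followed by anything
        out.append(random.choice(followers))
    return " ".join(out)
```

Output from such a chain is locally plausible but has no global coherence, which is the intuition behind the “stochastic parrot” label.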

            • starman2112@sh.itjust.works · 8 points · 4 days ago

              And that means humans don’t learn art the same way a machine trains on data. Even if they learn from other artists, a human’s artistic output is novel and original.

              • village604@adultswim.fan · 2 points · 4 days ago

                How exactly is a generated image not novel? You’re not going to get the same image twice with the same prompt. Everything it generates will be original. It’s not like they’re just providing you with an existing image.

                And still the argument I’m hearing is that it’s fine for humans to use artistic works without consent or credit just because it’s a human doing it.

                Just because the underlying processes are different doesn’t mean the two are functionally different.

                I also think it’s funny because I’m betting the Venn diagram of people who think AI using publicly available artwork to train on is bad and people who think piracy is good is almost a single circle.

    • wrinkledoo@sh.itjust.works · 39 points · 4 days ago

      An unethically developed tool that’s burning the planet faster with the ultimate goal of starving the working class out of society.

      Inb4 alarmism lol tell me the fucking lie if you can.

        • AlexanderTheDead@lemmy.world · 4 points · 3 days ago

          Yes, it is indeed embarrassing that you insist on defending AI.

          My sister insisted on using an AI this year to generate our Secret Santa assignments.

          She didn’t get a gift. Ahahahaha.

          AI is strictly for stupid people that can’t do good work on their own. No reasonably intelligent person is using AI to make a draft that they can then correct, because it will always be more practical to just make it yourself.

          • Rikudou_Sage@lemmings.world · 1 point · 2 days ago

            I mean, one of us is “defending” the use of a technology that helps people; the other is publicly bashing their sister, implying she’s stupid (which might or might not be true, that’s not the point). I’ll leave the conclusion to other readers. I have no more to say to a person like you.

            • AlexanderTheDead@lemmy.world · 1 point · 2 days ago

              More so, I’m concerned for my sister, but yes, I did “publicly” bash her. On Lemmy. Lol. The absurdity of your moral outrage is astounding.

              My sister is not above criticism just because I care about her.

              But don’t worry, you didn’t raise any good points and you failed to deflect away from my poignant anecdote. The readers will indeed side against you, because you have failed to produce a sound argument.

    • IcyToes@sh.itjust.works · 27 points · 4 days ago

      Not everyone. And it probably multiplies review time 10-fold and makes maintenance horrible. It doesn’t save time, it just moves it, and it makes devs dumber and unable to justify the coding choices the AI generates.

      • fahfahfahfah@lemmy.billiam.net · 19 points · 4 days ago

        We’re pushed to use AI a lot at our job, and man is it awful. I’d say maybe 20-30% of the time it does okay; the other 70% is split between it just making shit up and saying that it’s done something it hasn’t.

        • iAmTheTot@sh.itjust.works · 10 points · 4 days ago

          I’m in an entirely different industry than the topic at hand here, but my boss is really keen on ChatGPT and whatnot. Every problem that comes up, he’s like “have you asked AI yet?”

          We have very expensive machines, which are maintained (ideally) by people who literally go to school to learn how to. We had an issue with a machine the other day and the same ol’ question came up, “have you asked AI yet?” He took a photo of the alarm screen and fed it to ChatGPT. It spit out a huge reply and he forwarded it to me and told me to try it out.

          Literally the first troubleshooting step ChatGPT gave was nonsense and did not apply to our specific machine and our specific set-up and our specific use-case.

          • Almacca@aussie.zone · 6 points · 4 days ago

            The only question I’ve asked chatgpt recently was how to delete my account, and it couldn’t even get that right. It told me to click on the profile button on the top right of the screen. The profile button was on the bottom left, and looked more like a prompt to upgrade to a paid version. Fucking useless.

          • IcyToes@sh.itjust.works · 2 points · edited · 3 days ago

            “I will be investigating this shortly.”

            That way, you don’t have to commit to AI and can distance yourself a bit from the micromanagement. If he persists: “I have a number of avenues I’d like to go down and will update on progress tomorrow.”

            Though I’d be tempted to be flippant: “if you’re feeling confident enough to pick it up, I’m happy to review it.” If they hesitate: “that’s OK, I’ll go through the process.”

            Standups should be quick: any progress, any issues, what you’re focusing on. Otherwise you waste everyone’s time. Any messages I’ll ignore until I have 5 minutes. Micromanagement environments are not worth it.

      • Rikudou_Sage@lemmings.world · 3 points · 3 days ago

        I mean, it’s a tool. You can use a hammer to smash someone’s skull in, or you can use it to put a nail in the wall.

        If you see it used like that, it’s shitty developers; the AI is not to blame. Don’t get me wrong, I do have coworkers who use it like this, and it sucks. One literally told me, during a review, to just tell Copilot directly what to fix next time.

        But overall it helps if you know how and most importantly when to use it.

    • Nate Cox@programming.dev · 19 points · 4 days ago

      “Everyone uses it” is just such a dumb argument.

      I don’t use it, I’ve never committed any code written by genAI. My colleagues don’t use it. Many, many people choose not to use it.

      • Rikudou_Sage@lemmings.world · 3 points · 3 days ago

        I didn’t mean it in the literal sense, but if it makes you happy, we can pretend that whenever someone says “everyone” they mean it literally.

        • Nate Cox@programming.dev · 6 points · 3 days ago

          Bullshit. “Everyone” in your comment was a clear appeal to popularity, made to other anyone not on the bandwagon.

          It deserves to be called out for what it is.

          • Rikudou_Sage@lemmings.world · 1 point · 2 days ago

            Oh, someone did read a Wikipedia article! No, it was not an “appeal to popularity”; it was meant to show that you’d be hard-pressed to find a product where at least some small part wasn’t generated with AI, meaning that singling this company out because they admitted it is bullshit.

              • Rikudou_Sage@lemmings.world · 1 point · 23 hours ago

                So, correcting your nonsense is “ad hominem” now, huh? Newsflash: if you’re wrong and someone corrects you, it’s not ad hominem. And what the hell is this nonsense about moving the goalposts? Do you read what you type, or do you just tell an AI to write your comments for you?

        • AlexanderTheDead@lemmy.world · 1 point · 3 days ago

          Well, you see, the “everyone” you are referring to are the same stupid masses that already don’t deserve respect on the macro level. The same stupid masses voting in political officials like Donald Trump.

          • Rikudou_Sage@lemmings.world · 1 point · 2 days ago

            Conflating multiple things together, are we?

            But hey, everyone who doesn’t share your exact world view on all issues is a Nazi!

            • AlexanderTheDead@lemmy.world · 1 point · 2 days ago

              I mean, I guess I can see how you’d think that’s what I said?

              I’m more so just commenting on the fact that “everyone” is, like, a majority of dumbasses.

    • UltraMagnus@startrek.website · 5 points · 3 days ago

      I’m not going to fault someone for driving to work in a car, but I certainly wouldn’t call them the winner of a marathon even if they only drove for a few minutes of that marathon.

      There’s a difference between something that runs the race for you (LLM AI) and something that simply helps you do what you are already doing (I suppose Photoshop is the equivalent of drinking Gatorade).

      • Rikudou_Sage@lemmings.world · 2 points · 3 days ago

        I don’t think that’s a relevant comparison. A marathon is a race meant specifically to test what the human body is capable of; using a car there is obviously against the goal of the competition.

        When I’m writing code, I’ll happily offload the boring parts to AI. There’s only so many times you can solve the same problem without it being boring. And I’ve been doing this long enough that actually new problems I haven’t solved yet are pretty rare.

        • UltraMagnus@startrek.website · 3 points · edited · 3 days ago

          I’m not going to fault you for that - but do you think you should receive an award for the work you didn’t do? Even if you only use the car on the “easy” parts of the race that nobody cares about?

          In the case of this particular game, perhaps the bulk of the creative work was done by humans. But if the GOTY committee isn’t confident drawing a line between what work is OK to offload onto an AI and what work isn’t, then I think it’s fair for them to say that, this year, any generative AI use is a disqualifier.

          You and I can say with ease that an implementation of a basic swap function (c=a, a=b, b=c) doesn’t require any creative work and has been done to death, so there’s no shame in copy-pasting something from Stack Overflow or ChatGPT into your own code to save time.

          But it’s harder to gauge that for more complex things, especially with art. Where would you draw the line? Reference material? Concept art? Background textures, or 3D models of basic props (random objects in the scene like chairs, trees, etc.)?

          I don’t think there’s a clear answer for that. You might have an answer you think is correct, and I might have one as well, but I think it will be difficult and time consuming to achieve consensus in the game development community.

          So, the most efficient answer for now is to have any generative AI be a disqualifier.
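For completeness, the three-step swap referenced in the comment above (c=a, a=b, b=c) looks like this; a purely illustrative sketch (the `swap` helper is hypothetical, and idiomatic Python would just write `a, b = b, a`):

```python
def swap(a, b):
    """Three-step swap using a temporary, exactly as described: c=a, a=b, b=c."""
    c = a  # stash the original value of a
    a = b  # overwrite a with b
    b = c  # restore the stashed value into b
    return a, b

print(swap(1, 2))  # → (2, 1)
```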

          • Rikudou_Sage@lemmings.world · 1 point · 2 days ago

            I generally don’t write code to get an award, it’s just a tool to implement a business requirement.

            Sure, the committee can decide that absolutely no AI can be used; that doesn’t mean I can’t call it out as stupid because I disagree with it. They have the right to do that, and I have the right to not like it. It’s not like I’m sending death threats or anything, I just stated my opinion.

            I wouldn’t draw the line at all. If you just tell AI “give me a texture for this and that,” it will look like shit most of the time. You either have to be lucky or edit it anyway, let alone if you want consistency across multiple textures/models/whatever. So, if the end result is good, I don’t particularly care.

            I never claimed my answer is the One True Answer™, and of course I think my answer is correct, as does everyone else who has an opinion. And I’m pretty sure the consensus can only end up as one of two things: either the AI companies crash because it’s financially unsustainable to provide the services, or every game company uses it to generate various amounts of code/models/textures, etc.

            I don’t think disqualifying AI is the most efficient answer at all; in fact, I don’t think it’s an efficiency question. The companies don’t care about that. They pay analysts to tell them which of these decisions will bring them more money. For now, the analysts have decided to play it safe and disqualify it. One day, some young analyst with something to prove will say “well, everyone’s disqualifying it, why don’t we try a different way,” and slowly everyone’s gonna allow it. IMO, anyway.

    • halcyoncmdr@lemmy.world · 15 points · 4 days ago

      Most of the major software development tools have some form of AI-based assistance features now. And in nearly all of them, those assistance and completion features are enabled by default.

      If you want absolutely no AI in your games, then you need to verify all of those functions were disabled for the entire development time. And you have no way to verify that.

      • woelkchen@lemmy.world · 35 points · 4 days ago

        So it’s safe to assume all code generation was trained on GPL code from GitHub and therefore the game code is derived work of GPL code and therefore under GPL itself? So decompilation and cracking is fine?

          • woelkchen@lemmy.world · 6 points · 4 days ago

            Please quote me the line where this covers machine generation as well. I’d love to sell Google-translated Harry Potter books as transformative work. Maybe I can “transform” the latest movie releases to MKV and sell those.

            • Grimy@lemmy.world · 5 points · edited · 4 days ago

              “Transformative use or transformation is a type of fair use that builds on a copyrighted work in a different manner or for a different purpose from the original, and thus does not infringe its holder’s copyright.”

              You can use a book to train an AI model; you can’t sell a translation just because you used AI to translate it. These are two different things.

              Collage is transformative, and it uses copyrighted pictures to make completely new works of art. It’s the same principle.

              It’s also important to understand that it’s a tool. You can create copyright-infringing content with Word, Google Translate or Photoshop as well. The training of the model itself doesn’t infringe on current copyright laws.

              • woelkchen@lemmy.world · 2 points · 4 days ago

                Not a single line in your comment offers anything showing that machine generation, which is not at all human creative work, falls under fair use.

                • Grimy@lemmy.world · 5 points · 4 days ago

                  It uses the content in a different manner, for a different purpose; the part I highlighted above applies to it. Do you expect copyright laws to mention every single type of acceptable transformative work? You are being purposely ignorant.

    • Amoxtli@thelemmy.club · 7 points · 4 days ago

      The anti-AI people will be forced to use it due to capitalism. They’d be pissing against the wind if they didn’t.

      • Inevitable Waffles [Ohio]@midwest.social · 8 points · 4 days ago

        Not really. We just have to wait long enough for either enough disasters to occur that the crowd successfully rejects it, or for the current crop of workers to become so unable to accomplish simple tasks without it that the rest of us just move up the ladder past them. They’ll be asking ChatGPT “how to spreadsheet” because they just can’t remember, since use of LLMs has been creating cognitive decline in users. Those of us who use our brains, rather than the stolen knowledge and hallucinated regurgitation of a blind database, will be the drivers in the workforce.

  • Cyberspark@sh.itjust.works · 8 points · 3 days ago

    Just to point out: LLMs are genAI. Lots of code editors provide code suggestions, similar to autocorrect/text suggestions, using AI. Strictly speaking, I doubt any game is made without AI. Not to say it can’t be deliberately avoided, but given the lack of opposition to GPT and LLMs, I don’t see it being considered for avoidance in the same way as art.

    So awards with constraints on “any AI usage in development” probably disqualify most modern games.

    • Metype @pawb.social · 12 points · 3 days ago

      Code analysis and suggestion tools in many professional IDEs are not powered by LLMs. In the IDEs I use, there’s an available LLM, but I’ve disabled the plugin (and never paid for it, so it did nothing anyway). LLMs are simply too slow for the kind of code completion and recommendation algorithms used by IDEs, so using those is not “using genAI.”

      • theneverfox@pawb.social · 5 points · 3 days ago

        Uh… sorry, but no, LLMs are definitely fast enough. It works just like autocomplete, except sometimes it’s a genius that pulls the next few lines you were about to write out of the ether, and sometimes it makes up a library to do something you never asked for.

        Mostly it works about as well as code completion software, but it’ll name variables much better.

        • Boakes@lemmy.world · 3 points · 3 days ago

          I believe you that genAI is being used for code suggestions, but I wouldn’t call it a genius.

          This is anecdotal, but over the last couple of years I’ve noticed Visual Studio’s autocomplete went from suggesting exactly what I wanted more often than not to just giving me hot garbage today. Like even when I’m 3-4 lines into a very obvious repeating pattern, it’ll suggest some nonsense variation on it that’s completely useless. Or just straight-up make up variable and function names that don’t exist, or suggest inputs to function calls that don’t match the signature. Really basic stuff that any kind of rules-based system should have no problem with.

          • theneverfox@pawb.social
            link
            fedilink
            English
            arrow-up
            2
            ·
            3 days ago

            I wouldn’t call it a genius either, it’s just all over the place. Sometimes it’s scary good and predicts your next move, most of the time it’s just okay, sometimes it’s annoyingly bad

            • Zos_Kia@lemmynsfw.com
              link
              fedilink
              English
              arrow-up
              1
              ·
              edit-2
              2 days ago

              My last job was on a fairly large TypeScript codebase (a few hundred kLOC) which we started some time before LLMs were a thing. While we weren’t into academic engineering patterns and buzzwords, we were very particular about maintaining consistent patterns across the codebase. The output of Copilot, even with early models that were far from today’s standards, was often scarily accurate. It was far from genius, but I’m still chasing that high to this day; to me it really indicated that we had made this codebase readable and actionable even by a new hire.

    • cassandrafatigue@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      3 days ago

      “Ai” does not exist.

      There is no such thing. It does not exist at present. Full stop.

      Genetic algorithms and complex algorithms are neat. They’re useful tools. They’re not “ai”, they’re algorithms. It’s fine.

      Large X models and diffusion models are bullshit machines. They were invented as bullshit machines. They are scams. Their evangelists are scammers. They do not ‘generate’ except in the way a spark-gap jammer ‘generates’. They’re noise machines.

      You do not know what the fuck you’re talking about. You are a cultist. Go back to sexting with ELIZA.

      • lepinkainen@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 days ago

        VSCode autocomplete is literally AI now; it’s not the old IntelliSense anymore.

        Others will follow, making “no AI” software really difficult to prove.

          • lepinkainen@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            3 days ago

            Right now, but we can all see where it’s heading

            Then you need to make sure 200 devs disable all AI features or get disqualified from awards with “no AI” clauses

  • ToiletFlushShowerScream@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    ·
    3 days ago

    If there were an AI that licensed every bit of art/code/etc. that it trained on, then I think I would be fine if they used it. BUT I’d never think their final product was ever more than a mad-libs of other people’s work, cobbled together for cheap commercial consumption. My time is worth something, and I’m not spending a minute of it on AI-generated crap when I could be spending it on the work of a true author, artist, coder, or craftsman. They deserve my dollar, not the AI company and the AI-using middleman who produced shit with it.

    • absentbird@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      3 days ago

      I mean, they mostly used it for textures, right? Those are often generated from a combination of noise and photography; it’s not like they were building the game out of Lego bricks of other people’s art.
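      And “generated from noise” is really that literal. A toy sketch in Python (not anyone’s actual pipeline, just the classic value-noise idea: random brightness on a coarse grid, bilinearly blended up to full size):

```python
import random

def value_noise(size=64, grid=8, seed=0):
    """Toy value noise: random values on a coarse grid,
    bilinearly interpolated up to the full texture size."""
    rnd = random.Random(seed)
    # one extra row/column so interpolation never runs off the edge
    coarse = [[rnd.random() for _ in range(grid + 1)] for _ in range(grid + 1)]
    scale = grid / size
    tex = []
    for y in range(size):
        gy = y * scale
        y0, ty = int(gy), gy - int(gy)
        row = []
        for x in range(size):
            gx = x * scale
            x0, tx = int(gx), gx - int(gx)
            # bilinear blend of the four surrounding grid values
            top = coarse[y0][x0] * (1 - tx) + coarse[y0][x0 + 1] * tx
            bot = coarse[y0 + 1][x0] * (1 - tx) + coarse[y0 + 1][x0 + 1] * tx
            row.append(top * (1 - ty) + bot * ty)
        tex.append(row)
    return tex  # size x size grid of brightness values in [0, 1]

tex = value_noise()
```

      Layer a few of these at different frequencies over a photo and you’ve got most of the “procedural texture” look.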

      I don’t see how it’s significantly different than sampling in music, it’s just some background detail to enhance the message of the art.

      Obviously modern AI is a nightmare machine that cannot be justified, but I could imagine valid artistic uses for the hypothetical AI you described.

      • MrVilliam@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        3
        ·
        3 days ago

        When you sample in music, you get the original artist’s permission or you get fucking sued. If the AI used were trained on a licensed library catalogue, then sure. Media companies historically would buy sample licenses for their sound effects in movies, video games, etc., so AI could essentially just do that, but put the encyclopedia of samples in a blender of training to modulate that shit into something somewhat “new.” Original artists get royalties, users get something customized without having to hire sound engineers to make those adjustments, and consumers get good products.

  • Jayjader@jlai.lu
    link
    fedilink
    English
    arrow-up
    4
    ·
    edit-2
    2 days ago

    I’m not surprised they ultimately felt like GenAI isn’t useful to what they’re trying to do. Game dev has known about this type of generation for a while now (see https://en.wikipedia.org/wiki/Model_synthesis and the sources linked at the bottom) and it takes a lot of human effort to curate both the training data and the model weights to end up with anything that feels new and meaningful.
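    The core loop of that kind of generation is easy to sketch, which is sort of the point. Here’s a toy 1D version in Python (tile names and adjacency rules are made up for illustration): fix the most-constrained cell, then propagate the rules until nothing changes. All the actual creative work lives in authoring the tiles and rules, not in the loop.

```python
import random

# made-up adjacency rules: which tiles may sit immediately right of each tile
RULES = {
    "sea":   {"sea", "coast"},
    "coast": {"sea", "coast", "land"},
    "land":  {"coast", "land"},
}
TILES = set(RULES)

def collapse_line(n, seed=0):
    """Tiny 1D 'model synthesis': every cell starts as all possibilities;
    repeatedly fix the most-constrained cell and re-propagate the rules."""
    rnd = random.Random(seed)
    cells = [set(TILES) for _ in range(n)]

    def propagate():
        changed = True
        while changed:
            changed = False
            for i in range(n - 1):
                # the right neighbour must be allowed by some tile on the left
                allowed = set().union(*(RULES[t] for t in cells[i]))
                if not cells[i + 1] <= allowed:
                    cells[i + 1] &= allowed
                    changed = True
                # and vice versa
                allowed_left = {t for t in TILES if RULES[t] & cells[i + 1]}
                if not cells[i] <= allowed_left:
                    cells[i] &= allowed_left
                    changed = True

    while any(len(c) > 1 for c in cells):
        # pick the undecided cell with the fewest options left
        i = min((j for j in range(n) if len(cells[j]) > 1),
                key=lambda j: len(cells[j]))
        cells[i] = {rnd.choice(sorted(cells[i]))}
        propagate()
    return [next(iter(c)) for c in cells]
```

    Every output respects the rules, and with three tiles it’s obvious how little of the result the algorithm itself “invented.”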

    If I shuffle a deck of 52 cards, there is a high chance of obtaining a deck order that has never occurred before in human history. Big whoop. GenAI is closer to sexy dice anyways - the “intelligent work” was making sure the dice faces always make sense when put together and you don’t end up rolling “blow suck” or “lips thigh”.
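    The deck arithmetic holds up under even absurdly generous assumptions, which is why the novelty alone is unimpressive:

```python
import math

# possible orderings of a standard 52-card deck
orderings = math.factorial(52)  # about 8.07e67

# wildly generous upper bound on shuffles ever performed:
# 10 billion people, each shuffling once per second for 10,000 years
shuffles = 10**10 * 10_000 * 365 * 24 * 60 * 60  # about 3.15e21

print(f"{orderings:.2e} orderings vs {shuffles:.2e} shuffles")
```

    Even then, orderings outnumber shuffles by more than forty orders of magnitude.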

    It’s very impressive that we’re able to scale this type of apparatus up to plausibly generate meaningful paragraphs, conversations, and programs. It’s ridiculous what it cost us to get it this far, and just like sexy dice and card shuffling I fail to see it as capable of replacing human thought or ingenuity, let alone expressing what’s “in my head”. Not until we can bolt it onto a body that can feel pain and hunger and joy, and we can already make human babies much more efficiently than what it takes to train an LLM from scratch (and they have personhood and thus rights in most societies around the world).

    Even the people worried about “AI self-improving” to the point it “escapes our control” don’t seem to be able to demonstrate that today’s AI can do much more than slow us down in the long run; this study was published over 5 months ago, and they don’t seem to have found much since then.

    • woelkchen@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      4 days ago

      Nature is healing.

      Nah, they’re lying. They’ll just cover their tracks better in the future.