cross-posted from: https://lemmy.zip/post/49954591

“No Duh,” say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite there, requiring developers to go in and find where the problems are, resulting in a net slowdown of development rather than productivity gains.

Then there’s the issue of finding an agreed-upon way of tracking productivity gains, a glaring omission given the billions of dollars being invested in AI.

According to Bain & Company, companies will need to fully commit themselves to realize the gains they’ve been promised.

“Fully commit” to see the light? That… sounds more like a kind of religion than like critical or even rational thinking.

  • thingsiplay@beehaw.org · 4 months ago

    Billions of dollars are spent, unimaginable amounts of power are used, tons of programmers are fired, millions upon millions of lines of code are copied without license or credit, and nasty bugs and security issues are introduced by trusting the AI system or being lazy. Was it worth it? Many programmers become disposable once they have to use AI. That means “all” programmers are the same and differ only in which model they use; at least, that’s the future if everyone is using AI from now on.

    AI = productivity increases, quality decreases… oh wait: AI = productivity seems to increase, quality does decrease.

  • droans@midwest.social · 4 months ago

    It’s been clear that the best use of AI in a professional environment is as an assistant.

    I don’t want something doing my job for me. I just want it to help me find something or to point out possible issues.

    Of course, AI isn’t there yet. It doesn’t like reading through multiple large files. It doesn’t “learn” from you and what you’re doing, only what it’s “learned” before. It can’t pick up on your patterns over time. It doesn’t remember what your various responsibilities are. If I work in a file today, it’s not going to remember in a month when I work on it again.

    And it might never get there. We’ve been rapidly approaching the limits of AI, with two major problems. First, scaling costs are growing exponentially relative to the gains: doubling the training data and computing resources won’t produce a model that’s twice as good. Second, overtraining is now a concern: we’re discovering that models can produce worse results if they receive too much training data.

    And, obviously, it’s terrible for the environment and a waste of resources and electricity.

  • kidney_stone@lemmy.world · 4 months ago

    I have to say that I am surprised by how many people are bashing on AI coding.

    I personally use AI tools like Cursor and Claude Code to help me with code, and I have to say that they are incredibly helpful for managing huge codebases and fixing repetitive elements. Obviously, I have to know a lot about coding and system design and whatnot, and I have to write many instructions, so it can’t ‘replace programmers’ or anything. But in the hands of those who already know their stuff, it really does speed things up significantly.

    I have no idea how other people are using it in a way that inspires all these reports on how awful AI coding is. Do non-programmers just open Claude, write “make program!!!”, and then expect it to work?

  • flatbield@beehaw.org · 4 months ago (edited)

    I have a friend who is a professional programmer. They think AI will generate lots of work fixing the shit code it creates. I guess we will see.

      • blarghly@lemmy.world · 4 months ago

        Idk, that was basically 90% of my last job. At least the AI code will be nicely formatted and use variable names longer than a single character.

        • Knock_Knock_Lemmy_In@lemmy.world · 4 months ago

          Oh yes. Get the AI to refactor and make pretty.

          But I’ve just spent 3 days staring at something that was missing a ().
          However, I admit that a human could have easily made the same error.
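
          A classic instance of that class of bug, sketched here in Python (a hypothetical example, not the commenter’s actual code): referencing a method without the () yields a bound-method object, which is always truthy, so the conditional silently runs every time.

          ```python
          class Job:
              def is_ready(self):
                  """Pretend this checks some external state."""
                  return False

          job = Job()

          # Buggy: missing (), so this tests the bound-method object itself,
          # which is always truthy -- the branch runs unconditionally.
          if job.is_ready:
              print("runs even though is_ready() returns False")

          # Fixed: actually call the method.
          if job.is_ready():
              print("this line is never reached")
          ```

          Linters such as pylint flag this pattern, but it sails straight through the interpreter, which is why it can eat days of debugging.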

          • python@lemmy.world · 4 months ago

            Something like Stack Overflow is probably the biggest source of code to train an LLM on, and since it’s built around posting code that almost works but has some problem, I’m absolutely not surprised that LLMs would pick up the habit of making the same subtle small mistakes that humans make.