• NewNewAugustEast@lemmy.zip · 2 points · 7 days ago

    Here is the really scary part: so many doctors were using Google lately anyway… Now they are turning to medical LLMs.

  • Prove_your_argument@piefed.social · 2 points · 8 days ago

    I feel like this is the self-driving car thing again.

    How often are human doctors wrong in their diagnoses?

    How often are LLM doctors wrong in their diagnoses?

    I’m pretty sure the former is close to 75% and the latter substantially less. I’ve heard of so many people going to doctor after doctor and not getting the right diagnosis or treatment for whatever they have going on, and it takes 5+ doctors to find the one who figures it out and gets them treated.

        • KT-TOT@lemmy.blahaj.zone · 4 points · 7 days ago

          You’re claiming that LLMs are superior to doctors in… Accurate diagnosis rates for unspecified conditions?

          Helluva claim without offering even the suggestion of evidence. Not really sure a “no ur stupid” is needed; you’re not exactly making a claim grounded in established fact or reality.

          Also uh, self-driving cars lmao.

              • Prove_your_argument@piefed.social · 1 point · 7 days ago

                Nah, I just consistently put more effort in than you clowns.

                https://pmc.ncbi.nlm.nih.gov/articles/PMC11263899/

                ChatGPT-4 rated higher than physicians at taking text input and producing a diagnosis in that study.

                Here’s a completely different one: 90% accuracy for ChatGPT, 74% for doctors not using an LLM tool. https://www.advisory.com/daily-briefing/2024/12/03/ai-diagnosis-ec

                So ChatGPT was wrong 10% of the time and the doctors 26% of the time; a 2.6x higher failure rate for the real docs… for that one, anyway.
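
                Spelling out that arithmetic (same figures as quoted above; nothing new, just the sanity check):

```python
# Sanity check on the numbers quoted above (90% vs. 74% accuracy).
chatgpt_accuracy = 0.90   # ChatGPT alone, per the linked write-up
doctor_accuracy = 0.74    # physicians working without the LLM tool

chatgpt_error = 1 - chatgpt_accuracy
doctor_error = 1 - doctor_accuracy

print(round(chatgpt_error, 2), round(doctor_error, 2), round(doctor_error / chatgpt_error, 1))
# -> 0.1 0.26 2.6   (the doctors' miss rate is ~2.6x the model's in that one study)
```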

                It’s just a matter of time before medical diagnosis is done by an LLM first and then simply reviewed by a doc for sanity, because humans “don’t trust” technology.

                So here, you literally just prove you’re an asshat, and I brought data.

                • AbeilleVegane@beehaw.org · 1 point · 7 days ago

                  "I feel like this is the self driving car thing again.

                  How often are human doctors wrong in their diagnoses?

                  How often are LLM doctors wrong in their diagnoses?

                  I’m pretty sure the former is close to 75% and the latter substantially less. I’ve heard of so many people going to doctor after doctor and not getting the right diagnosis or treatment for whatever they have going on, and it takes 5+ doctors to find the one who figures it out and gets them treated."

                  This you?

                • wucking_feardo@lemmy.world · 3 points · 7 days ago

                  100 people in the first study, and the authors specifically indicate that the high-quality results depend on adequate input from the resident physicians. When the diagnosis was obtained by self-reporting, accuracy dropped to 50%.

                  So I do not believe LLMs are able to replace doctors.

    • fluffy@feddit.org · 5 points · 7 days ago

      Well, I won’t make up numbers (like you did), but two things:

      • wrong diagnoses are a thing, won’t argue about that
      • you mostly hear from the people who complain about a false diagnosis. Just because a group of people is “loud” does not mean they are in the majority.
      • Prove_your_argument@piefed.social · 3 points · 7 days ago

        The thing that LLMs are great at is taking a LOT of datapoints and coming to a conclusion based on all of them.

        Humans can look at a few but get overwhelmed.

        If you feed in a ton of diagnostic data, including past incidents, blood tests, and perhaps DNA tests, I’m pretty sure LLMs will be able to figure out a diagnosis better than a doctor using traditional methods.

        When users self-diagnose, they’re often wrong, because they don’t know what the fuck they’re doing. Garbage in garbage out regardless of the entity trying to process it.

        This study is one that put doctors up against an LLM: 90% accuracy for ChatGPT, 74% for doctors not using the LLM tool. https://www.advisory.com/daily-briefing/2024/12/03/ai-diagnosis-ec

        So ChatGPT was wrong 10% of the time and the doctors 26% of the time; a 2.6x higher failure rate for the real docs… for that one anyway. The better the data for ChatGPT, the better its diagnosis. Humans probably won’t get much better, but LLMs? I bet they will.

        We’re likely to have an intermediary step where HCPs handle the symptoms, testing, etc. and then it’s fed into a medically focused LLM. The LLM will output potential diagnoses for a doctor to review for sanity; even though the doctor is probably less accurate, it will make everyone feel better, and then the doc will slap a diagnosis on the patient’s profile.
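
        A rough sketch of what that hand-off could look like (everything here is a hypothetical illustration, not any real system’s API, and the actual model call is deliberately left out):

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:
    """Hypothetical structured data a clinician would collect before the LLM step."""
    symptoms: list[str]
    labs: dict[str, float]
    history: list[str]

def build_diagnostic_prompt(record: PatientRecord) -> str:
    # Pack the clinician-entered data into one text block for a medical LLM.
    # The model's ranked differential would then go to a physician for review
    # before anything lands on the chart.
    return (
        "Patient presentation:\n"
        f"- Symptoms: {', '.join(record.symptoms)}\n"
        f"- Labs: {record.labs}\n"
        f"- History: {', '.join(record.history)}\n"
        "List the most likely diagnoses, ranked, with brief reasoning."
    )

record = PatientRecord(
    symptoms=["fatigue", "joint pain"],
    labs={"CRP": 12.0},
    history=["allergic to NSAIDs"],
)
print(build_diagnostic_prompt(record))
```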

        LLMs will be infinitely better than humans at figuring out drug interactions (it’s just a big fucking database) and allergies (they can’t forget that you’re allergic to NSAIDs, like my wife is, who has routinely been given them by HCPs who fuck up). Who knows what else.
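
        The “big database” part really is just lookups; here is a toy sketch (the interaction table and allergy class below are tiny stand-ins for the curated databases real systems use):

```python
# Toy interaction/allergy check: a machine never "forgets" a documented
# allergy the way a rushed human can. Real systems use large curated
# databases and far more nuance; this only shows the shape of the lookup.
INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}): "increased bleeding risk",
}
ALLERGY_CLASSES = {"NSAIDs": {"ibuprofen", "naproxen", "aspirin"}}

def check_order(new_drug: str, current_drugs: list[str], allergies: list[str]) -> list[str]:
    warnings = []
    for drug in current_drugs:
        note = INTERACTIONS.get(frozenset({new_drug, drug}))
        if note:
            warnings.append(f"{new_drug} + {drug}: {note}")
    for allergy in allergies:
        if new_drug in ALLERGY_CLASSES.get(allergy, {allergy}):
            warnings.append(f"{new_drug} conflicts with documented {allergy} allergy")
    return warnings

print(check_order("ibuprofen", current_drugs=["warfarin"], allergies=["NSAIDs"]))
# ['ibuprofen + warfarin: increased bleeding risk',
#  'ibuprofen conflicts with documented NSAIDs allergy']
```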

    • Prove_your_argument@piefed.social · 2 points · 8 days ago

      And if the argument is “Bbbuut the LLM was wrong once and someone DIED!”:

      The comparison is a human being wrong over and over and over, to the tune of countless deaths. Malpractice lawsuits must be rare compared to the number of mistakes that are made, simply because it’s difficult to get to the point where you win, and extremely costly if you lose the suit.

      We already have people posting on social media for medical advice. LLMs just can’t be worse than that.

      • greasewizard@piefed.social · 12 points · 8 days ago

        You can at least sue a doctor for malpractice if they make a mistake. If you follow medical advice from a chatbot and you die, who is liable?

        Large Language Models were built to rewrite emails, not to provide valid medical advice.

        • Prove_your_argument@piefed.social · 3 points · 7 days ago

          If you post on reddit asking for advice, and you die after following the advice despite there being no claims of anyone being a doctor, who does someone sue?

          IMO we shouldn’t need disclaimers stating that absolutely everyone and everything is not a lawyer, not an HCP, etc. It’s just a given.

          If you google something and just blindly do what the first result says, do you have a case against them too?