cross-posted from: https://beehaw.org/post/24287458

I don’t usually keep the author’s name in the suggested hed, but here I think he’s recognizable enough that it adds value.

I am a science-fiction writer, which means that my job is to make up futuristic parables about our current techno-social arrangements to interrogate not just what a gadget does, but who it does it for, and who it does it to.

What I do not do is predict the future. No one can predict the future, which is a good thing, since if the future were predictable, that would mean we couldn’t change it.

Now, not everyone understands the distinction. They think science-fiction writers are oracles. Even some of my colleagues labor under the delusion that we can “see the future”.

Then there are science-fiction fans who believe that they are reading the future. A depressing number of those people appear to have become AI bros. The fact that these guys can't shut up about the day their spicy autocomplete machine will wake up and turn us all into paperclips has led many confused journalists and conference organizers to try to get me to comment on the future of AI.

That’s something I used to strenuously resist doing, because I wasted two years of my life explaining patiently and repeatedly why I thought crypto was stupid, and getting relentlessly bollocked by cryptocurrency cultists who at first insisted that I just didn’t understand crypto – and then, when I made it clear that I did, insisted that I must be a paid shill.

This is literally what happens when you argue with Scientologists, and life is just too short. That said, people would not stop asking – so I’m going to explain what I think about AI and how to be a good AI critic. By which I mean: “How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm.”

  • nettie@lemmy.world

    Take radiology: there is some evidence that AI can sometimes identify solid-mass tumors that some radiologists miss. Look, I’ve got cancer. Thankfully, it’s very treatable, but I’ve got an interest in radiology being as reliable and accurate as possible.

    Let’s say my hospital bought some AI radiology tools and told its radiologists: “Hey folks, here’s the deal. Today, you’re processing about 100 X-rays per day. From now on, we’re going to get an instantaneous second opinion from the AI, and if the AI thinks you’ve missed a tumor, we want you to go back and have another look, even if that means you’re only processing 98 X-rays per day. That’s fine, we just care about finding all those tumors.”

    If that’s what they said, I’d be delighted. But no one is investing hundreds of billions in AI companies because they think AI will make radiology more expensive – not even if it also makes radiology more accurate. The market’s bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: “Look, you fire nine out of 10 of your radiologists, saving $20m a year. You give us $10m a year and you net $10m a year, and the remaining radiologists’ job will be to oversee the diagnoses the AI makes at superhuman speed – and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it’s catastrophically wrong.

    “And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop’. It’s their signature on the diagnosis.”