“Miron, these AI tools aren’t doing the things we want them to do”
I love separating hype from what’s real and a recent dinner with a Yale-based radiologist helped me do just that.
“The AI tools we have aren’t reliable enough for me to trust their results, so I still have to go through the images for my own diagnosis. It’s rare that they pick something up that my colleagues and I missed, like a pulmonary embolism, a fracture or a brain bleed. Meanwhile, there are so many things we do that we hate that help inform diagnosis, like annotating the spine or measuring pulmonary nodules, which is still a very manual process. Having tech do THAT would be a game changer.”
A few takeaways from that delectable chat:
🚀 Digital health tools often go for the main ‘Job-to-be-Done’ (JTBD), like diagnosis. Instead, there are sub-JTBDs (like annotating the spine) that speed up the main JTBD (diagnosis by a human)
💨 Focusing on the less 'sexy' and often-manual JTBDs may carry a lower burden of proof, tackle a burning unmet need and get adopted into the organization faster for business impact (e.g., 'evolution versus revolution')
🤖 There’s lots of lip service paid to "augmenting and not replacing doctors" with AI. Helping them do the scutwork instead of the highest-value JTBD will, by definition, augment their workflow rather than make them do double work (e.g., doing their own diagnosis and then comparing it with an AI tool)
🍷 Radiologists like expensive wine... and why shouldn't they?
Beyond radiology, I often see this as the key adoption barrier for digital health in both healthcare and Pharma: a focus on the wrong Jobs-to-be-Done, which is often technology-driven rather than problem-driven.
The good news is that a probing conversation with the end user (including a Pharma BU Head or Brand Director) should identify a slew of impact-driving JTBDs, helping win organizational buy-in, investment and eventual adoption.
Otherwise, corporate initiatives become 'nice to haves'.
🥂