When AI flags the ruler, not the tumor — and other arguments for abolishing the black box (VB Live)


AI helps health care experts do their jobs efficiently and effectively, but it needs to be used responsibly, ethically, and equitably. In this VB Live event, get an in-depth perspective on the strengths and limitations of data, AI methodology and more.

Hear more from Brian Christian during our VB Live event on March 31.

Register here for free.


One of the big issues that exists within AI generally, but is particularly acute in health care settings, is the issue of transparency. AI models, deep neural networks for example, have a reputation for being black boxes. That's particularly concerning in a medical setting, where caregivers and patients alike need to understand why recommendations are being made.

"That's both because it's integral to the trust in the doctor-patient relationship, but also as a sanity check, to make sure these models are, in fact, learning the things they're supposed to be learning and functioning the way we would expect," says Brian Christian, author of The Alignment Problem, Algorithms to Live By, and The Most Human Human.

He points to the example of the neural network that had famously reached a level of accuracy comparable to human dermatologists at diagnosing malignant skin lesions. However, a closer examination using saliency methods revealed that the single most influential thing this model was looking for in a picture of someone's skin was the presence of a ruler. Because medical images of cancerous lesions include a ruler for scale, the model learned to treat the presence of a ruler as a marker of malignancy, because that's much easier than telling the difference between different kinds of lesions.

"It's precisely this kind of thing that explains remarkable accuracy in a test setting but is completely useless in the real world, because patients don't come with rulers helpfully pre-attached when [a tumor] is malignant," Christian says. "That's a perfect example, and one of many, of why transparency is essential in this setting in particular."
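The kind of saliency check Christian describes can be approximated in a few lines of code. The sketch below is illustrative only: the dermatology model in question is not public, so a pretrained torchvision ResNet-18 and a hypothetical image path stand in, and gradient-based saliency is just one of several possible attribution methods.

```python
# A minimal sketch of a gradient-based saliency check, assuming PyTorch and a
# recent torchvision. The dermatology model discussed above is not public, so
# a pretrained ResNet-18 and a placeholder image path stand in.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# "lesion.jpg" is a hypothetical path to the image being audited.
img = preprocess(Image.open("lesion.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

# Gradient of the top class score with respect to the input pixels:
# large magnitudes mark the pixels the prediction leans on most heavily.
scores = model(img)
scores[0, scores.argmax()].backward()
saliency = img.grad.abs().max(dim=1).values.squeeze(0)  # 224 x 224 heat map

row, col = divmod(int(saliency.argmax()), saliency.shape[1])
print(f"most influential pixel: ({row}, {col})")
```

If the hottest region of that map sits over a ruler rather than the lesion itself, the model has learned the ruler.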

At a conceptual level, one of the biggest issues in all machine learning is that there's almost always a gap between the thing that you can readily measure and the thing you actually care about.

He points to the model developed in the 1990s by a group of researchers in Pittsburgh to estimate the severity of pneumonia in incoming patients, in order to triage them for inpatient vs. outpatient treatment. One thing this model learned was that, on average, people with asthma who come in with pneumonia have better health outcomes as a group than non-asthmatics. However, this wasn't because having asthma is the great health bonus the model flagged it as, but because patients with asthma receive higher-priority care, and because asthma patients are on high alert to go to their doctor as soon as they start to have pulmonary symptoms.

"If all you measure is patient mortality, the asthmatics look like they come out ahead," he says. "But if you measure things like cost, or days in hospital, or comorbidities, you would notice that maybe they have better mortality, but there's a lot more going on. They're survivors, but they're high-risk survivors, and that becomes clear when you start expanding the scope of what your model is predicting."
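As a concrete, entirely hypothetical illustration of what "expanding the scope" can reveal, consider a toy dataset in which asthmatics appear to do better on mortality alone but clearly worse once cost and length of stay are added. Every number below is made up for the sketch.

```python
# A toy illustration of expanding the outcomes a model is asked to predict.
# All values are synthetic; the point is only that a single target (mortality)
# can hide what several targets reveal together.
import pandas as pd

patients = pd.DataFrame({
    "asthma":      [1, 1, 1, 0, 0, 0, 0, 0],
    "died":        [0, 0, 0, 0, 1, 0, 0, 1],   # asthmatics look better here
    "days_in_icu": [4, 6, 5, 1, 2, 1, 0, 3],   # ...but not here
    "cost_usd_k":  [38, 52, 44, 9, 21, 8, 6, 30],
})

# Mortality alone makes asthma look protective.
print(patients.groupby("asthma")["died"].mean())

# Adding ICU days and cost shows the asthmatic group is high-risk but
# intensively treated: the confound the Pittsburgh team spotted.
print(patients.groupby("asthma")[["days_in_icu", "cost_usd_k"]].mean())
```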

The Pittsburgh team was using a rule-based model, which enabled them to see this asthma connection and immediately flag it. They were able to share with the doctors participating in the project that the model had learned a possibly bogus correlation. But if it had simply been a giant neural network, they might not have known that this problematic association had been learned.
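The practical advantage of a transparent model is that the suspicious association sits right there in the parameters. The sketch below is not the Pittsburgh team's rule-based system; it uses a plain logistic regression on synthetic data as a stand-in to show how a "protective" weight on asthma would be visible the moment you print the coefficients.

```python
# A sketch of why a transparent model lets you catch a bogus correlation,
# assuming scikit-learn. The data is synthetic and only mimics the confound
# described above; this is not the original pneumonia model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
asthma = rng.binomial(1, 0.15, n)
age = rng.normal(65, 12, n)

# Build in the confound: asthmatics receive more aggressive care, so their
# observed mortality is lower even though their underlying risk is not.
logit = -2.0 + 0.03 * (age - 65) - 1.2 * asthma
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(np.column_stack([asthma, age]), died)

# Reading the coefficients exposes the problem at a glance: a negative weight
# on asthma says "asthma is protective," which should set off alarms.
print(dict(zip(["asthma", "age"], model.coef_[0].round(3))))
```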

One of the researchers on that project in the 1990s, Rich Caruana, now at Microsoft, went back 20 years later with a modern set of tools, examined the neural network he had helped develop, and found a number of equally terrifying associations, such as the model concluding that being over 100 years old was good for you, or that having high blood pressure was a benefit, all for the same reason: those patients were given higher-priority care.

"Looking back, Caruana says, 'Thank God we didn't use this neural net on patients,'" Christian says. "That was the fear he had at the time, and it turns out, 20 years later, to have been fully justified. That all speaks to the importance of having transparent models."

Algorithms that aren't transparent, or that are biased, have resulted in a variety of horror stories, which have led some to say these systems have no place in health care. But that's a bridge too far, Christian says. "There's an enormous body of evidence that shows that when done properly, these models are an enormous asset, and often better than individual expert judgments, as well as providing a host of other advantages."

On the other hand, Christian explains, some are overly enthusiastic in their embrace of the technology, saying, "Let's take our hands off the wheel, let the algorithms do it, let our computer overlords tell us what to do," and letting the system run on autopilot. "And I think that is also going too far, because of the many examples we've discussed. As I say, we want to thread that needle."

In other words, AI can't be used blindly. It requires a data-driven approach to building provably optimal, transparent models, developed iteratively by an interdisciplinary team of computer scientists, clinicians, patient advocates, and social scientists committed to an inclusive process.

That also includes audits once these systems go into production, since certain correlations may break over time, certain assumptions may no longer hold, and we may learn more. The last thing you want to do is just flip the switch and come back 10 years later.
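What such a recurring production audit might look like is sketched below. The column names, the 5 percent threshold, and the particular drift checks are all illustrative assumptions, not a prescription.

```python
# A minimal sketch of a recurring production audit, assuming pandas.
# Thresholds and column names ("pred_risk", "outcome", etc.) are illustrative.
import pandas as pd

def audit(baseline: pd.DataFrame, recent: pd.DataFrame,
          features: list[str], threshold: float = 0.05) -> list[str]:
    """Return human-readable warnings; an empty list means the audit passed."""
    warnings = []

    # 1. Prediction drift: has the average predicted risk shifted?
    shift = abs(recent["pred_risk"].mean() - baseline["pred_risk"].mean())
    if shift > threshold:
        warnings.append(f"mean predicted risk shifted by {shift:.3f}")

    # 2. Calibration drift: do predicted risks still match observed outcomes?
    gap = abs(recent["pred_risk"].mean() - recent["outcome"].mean())
    if gap > threshold:
        warnings.append(f"calibration gap of {gap:.3f} between predictions and outcomes")

    # 3. Input drift: has a key feature's distribution moved, in baseline SDs?
    for col in features:
        delta = abs(recent[col].mean() - baseline[col].mean()) / (baseline[col].std() + 1e-9)
        if delta > 2 * threshold:
            warnings.append(f"feature '{col}' drifted by {delta:.2f} baseline SDs")

    return warnings
```

Run on a regular schedule, any warnings would go back to the same interdisciplinary team for review rather than being silently ignored.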

"For me, a diverse group of stakeholders with different expertise, representing different interests, coming together at the table to do this in a thoughtful, careful way, is the way forward," he says. "That's what I feel the most optimistic about in health care."


Hear more from Brian Christian during our VB Live event, "In Pursuit of Parity: A guide to the responsible use of AI in health care," on March 31.

Register here for free.

Presented by Optum


You'll learn:

  • What it means to use advanced analytics responsibly
  • Why responsible use is so important in health care as compared to other fields
  • The steps that researchers and organizations are taking today to ensure AI is used responsibly
  • What the AI-enabled health system of the future looks like and its advantages for consumers, organizations, and clinicians

Speakers:

  • Brian Christian, Author, The Alignment Problem, Algorithms to Live By and The Most Human Human
  • Sanji Fernando, SVP of Artificial Intelligence & Analytics Platforms, Optum
  • Kyle Wiggers, AI Staff Writer, VentureBeat (moderator)


