‘Lazy’ AI: UW researchers find that tech can misdiagnose COVID-19 by taking shortcuts


Left to right: University of Washington researchers Alex DeGrave, Su-In Lee and Joseph Janizek. (University of Washington Photo)

The future use of artificial intelligence in medical settings could improve efficiency, but a new University of Washington study published in Nature Machine Intelligence found that AI models relied on shortcuts rather than actual medical pathology to diagnose COVID-19.

The researchers examined AI systems that analyze chest X-rays to detect COVID-19. They found that the models relied on quirks specific to their training datasets, rather than medically significant features, to predict whether a patient had contracted the virus.

However, it's unlikely the models examined in this research were widely used in medical settings, according to a UW report on the study. One of the models, COVID-Net, was deployed in multiple hospitals, but Alex DeGrave, one of the lead authors of the study, said in the report that it is unclear whether it was used for medical or research purposes.

These shortcuts are what researchers DeGrave, Joseph Janizek, and Su-In Lee referred to as the AI being "lazy."

AI finds shortcuts because it is trained to look for any differences between the X-rays of healthy patients and those of patients with COVID-19, the research team told GeekWire in an email. The training process doesn't tell the AI to look for the same patterns that doctors use, so it uses whatever patterns it can find to increase its accuracy in discriminating COVID-19 from healthy patients.
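The dynamic the team describes can be sketched with a toy experiment (a hypothetical illustration, not taken from the study): if a dataset artifact, say a text marker on the film, happens to track the diagnosis in the training data, a classifier will lean on that artifact and fall apart on data where the correlation breaks.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n, marker_tracks_label):
    """Simulated 'X-ray' features: feature 0 stands in for a dataset
    artifact (e.g., a text marker); features 1-9 carry a weak true signal."""
    y = rng.integers(0, 2, n)
    s = 2.0 * y - 1.0                           # label recoded as -1/+1
    X = rng.normal(0.0, 1.0, (n, 10))
    X[:, 1:] += 0.3 * s[:, None]                # weak genuine pathology signal
    if marker_tracks_label:
        X[:, 0] = s + rng.normal(0.0, 0.1, n)   # shortcut: artifact mirrors label
    else:
        X[:, 0] = rng.choice([-1.0, 1.0], n)    # new hospital: artifact is noise
    return X, y

def train_logreg(X, y, steps=2000, lr=0.1):
    """Plain logistic regression trained by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return np.mean((X @ w > 0) == y)

Xtr, ytr = make_dataset(2000, marker_tracks_label=True)
w = train_logreg(Xtr, ytr)

Xin, yin = make_dataset(1000, marker_tracks_label=True)     # same data source
Xout, yout = make_dataset(1000, marker_tracks_label=False)  # outside hospital
print(f"in-distribution accuracy: {accuracy(Xin, yin, w):.2f}")
print(f"external-data accuracy:   {accuracy(Xout, yout, w):.2f}")
```

The model looks excellent on data drawn the same way it was trained, then drops toward chance once the artifact stops tracking the label, even though a genuine (if weaker) signal was available all along.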

When a doctor uses a chest X-ray to make a COVID-19 diagnosis, they already have information about the patient, such as exposure and medical history, and they expect the X-ray to provide new information.

If a doctor assumes that an AI reading an X-ray is providing new information, but the AI is actually just relying on the same information the doctor already had, that can be a problem, the research team said.

When AI can be trusted to make decisions for the right reasons, it could benefit the medical community by improving efficiency and patient outcomes, the team said. It could also reduce physicians' workloads and provide diagnostic support in low-resource areas.

However, each new AI device should be thoroughly tested to ensure that it actually offers those benefits, the team said. To build useful, trustworthy AI systems, researchers need to test AI more rigorously and refine the explainable AI technologies that support such testing.

The study found that better data, meaning data that contains fewer of the problematic patterns an AI could learn, prevented the AI from using many shortcuts. It also found that an AI can be penalized for using shortcuts so that it focuses on relevant signals instead.
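The "penalize shortcuts" idea can be illustrated in a toy setting (hypothetical; the study's own methods were more sophisticated): if the artifact feature is known, attaching an extra weight penalty to it pushes a simple model back onto the genuine signal.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: feature 0 is a known shortcut artifact that mirrors
# the label; the other features carry a weak genuine signal.
n, d = 2000, 10
y = rng.integers(0, 2, n)
s = 2.0 * y - 1.0
X = rng.normal(0.0, 1.0, (n, d)) + 0.3 * s[:, None]
X[:, 0] = s + rng.normal(0.0, 0.1, n)

def train(X, y, penalty):
    """Logistic regression; `penalty` is an extra L2 weight decay
    applied only to feature 0, the hypothetical known shortcut."""
    w = np.zeros(X.shape[1])
    for _ in range(3000):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y)
        grad[0] += penalty * w[0]   # discourage reliance on the artifact
        w -= 0.1 * grad
    return w

w_plain = train(X, y, penalty=0.0)
w_pen = train(X, y, penalty=10.0)

# With the penalty, the model shifts weight off the artifact feature.
print(f"artifact weight, unpenalized: {abs(w_plain[0]):.2f}")
print(f"artifact weight, penalized:   {abs(w_pen[0]):.2f}")
```

This is only a sketch of the general idea; real systems would need the confounding factor to be identified first, which is exactly what explainable AI tools are for.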

The team recommended that an AI be tested on new data from hospitals it has never seen, and that researchers use techniques from the field of explainable AI to determine which factors influence the AI's decisions.
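One simple, generic model-inspection technique (not necessarily the one the researchers used) is permutation importance: shuffle one input at a time and watch how far accuracy falls, which flags the features a model actually leans on.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained classifier: a linear model whose first
# input (a hypothetical dataset artifact) secretly dominates its decisions.
w = np.array([3.0, 0.3, 0.3, 0.3])

def predict(X):
    return (X @ w > 0).astype(int)

# Held-out evaluation data: feature 0 mirrors the label (the artifact),
# features 1-3 carry a weaker genuine signal.
y = rng.integers(0, 2, 1000)
s = 2.0 * y - 1.0
X = rng.normal(0.0, 0.5, (1000, 4)) + 0.3 * s[:, None]
X[:, 0] = s + rng.normal(0.0, 0.1, 1000)

base = np.mean(predict(X) == y)

# Permutation importance: shuffle one column, re-score, record the drop.
drops = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    drops.append(base - np.mean(predict(Xp) == y))
    print(f"feature {j}: accuracy drop {drops[j]:.3f}")
```

A large drop for a feature that should be medically irrelevant, like a text marker or the image's source hospital, is exactly the kind of red flag this sort of audit is meant to surface.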

"For medical providers, we recommend that they review the studies done on an AI device before fully trusting it, and that they remain skeptical of these devices until clear medical benefits are shown in well-designed clinical trials," the team said.




