In the news
In the US, the White House recently published its Blueprint for an AI Bill of Rights, which aims to prevent, as the document’s foreword notes, “the use of technology, data, and automated systems in ways that threaten the rights of the American public.”
As this article notes, artificial intelligence touches many aspects of modern life, from financial and social services to public safety and government benefits, but healthcare is among the federal document’s chief concerns.
The nonbinding blueprint is built on five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. It also follows on the heels of the European Union’s proposed AI Act, which would assign all applications of AI to one of three risk categories.
“As AI is rolled out in healthcare to help make some of the highest-stake decisions there are,” says this MIT Tech Review article, “it’s more crucial than ever to critically examine how these systems are built.”
The Sakks Tech take
“Despite growing technological maturity, AI’s ethical dimensions remain a work in progress,” writes Mary K of Sakks in a recent article. She outlines three ways to improve how AI is applied in life sciences.
First, she notes, practitioners must recognize that AI tools lack transparency, and so humans must rigorously test their conclusions. Sometimes people simply cannot understand the reasoning behind algorithmic results; this opacity has been dubbed the black-box problem of neural networks.
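One way to act on that skepticism is to validate a model’s conclusions on data it has never seen and to probe which inputs actually drive its predictions. The sketch below is purely illustrative (the article names no tools or datasets; the synthetic data and random-forest model are assumptions), using scikit-learn’s permutation importance as one widely available probe:

```python
# Hypothetical illustration: rigorously testing a black-box model's conclusions.
# The dataset and model are stand-ins; the article names no specific tools.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for structured, tabular clinical-style data.
X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Test conclusions on held-out data the model never saw during training.
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")

# Probe the black box: which features actually move its predictions?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:+.3f}")
```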
Next, humans must question the techniques used to arrive at AI-based decisions, because those techniques can produce inaccurate or biased outcomes when bias is embedded in, or inserted into, the underlying data and models. Human skepticism, she adds, is a “sharp tool in searching out such bias.”
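To make that bias hunt concrete, here is a minimal sketch (the data, group labels, and error rates are hypothetical; the article prescribes no specific method) of one simple check a skeptical reviewer can run: disaggregating a model’s error rate by subgroup, since a system can look accurate in aggregate while failing a minority group:

```python
# Hypothetical sketch: surfacing embedded bias by disaggregating error rates.
# Group labels and predictions are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])  # imbalanced subgroups
y_true = rng.integers(0, 2, size=1000)

# Simulate a model that is systematically worse on the minority group.
flip = np.where(group == "B", rng.random(1000) < 0.30, rng.random(1000) < 0.05)
y_pred = np.where(flip, 1 - y_true, y_true)

# The aggregate number can look acceptable on its own...
print(f"overall error rate: {np.mean(y_pred != y_true):.3f}")

# ...while the per-group breakdown reveals the disparity.
for g in ["A", "B"]:
    mask = group == g
    print(f"group {g} error rate: {np.mean(y_pred[mask] != y_true[mask]):.3f}")
```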
Finally, AI should be deployed only where it is fit for purpose. Today, AI is reasonably mature in areas “where structured or uniform data standards make machine learning possible, such as natural language generation for clinical study reports, patient narratives for submission data and medical coding.” But it remains in the proof-of-concept stage in clinical data review and clinical data management. “AI is still not ready to be the final arbiter of decisions with a direct impact on an individual’s care,” she writes.
The most important tools life sciences practitioners must bring to this work are skepticism and humility: two very human qualities.