BEVERLY HILLS, Calif. — Artificial intelligence is increasingly infused into many aspects of health care, from transcribing patient visits to detecting cancers and deciphering histology slides. While AI has the potential to improve the drug discovery process and help doctors be more empathetic towards patients, it can also perpetuate bias, and be used to deny critical care to those who need it the most. Experts have also cautioned against using tools like generative AI for initial diagnosis.
Brian Anderson is the CEO of the recently launched Coalition for Health AI, a nonprofit established to help create what he calls the “guidelines and guardrails for responsible AI in health.” CHAI, which is made up of academic and industry partners, wants to set up quality assurance labs to test the safety of health care AI products. Anderson hopes to build public trust in AI and empower patients and providers to have more informed conversations about algorithms in medicine. On Wednesday, CHAI shared its “Draft Responsible Health AI Framework” for public review.
But lawmakers have raised concerns that CHAI, whose members include AI heavyweights like Microsoft and Google, amounts to the AI industry policing itself, and other experts have backed alternative, more localized AI regulatory frameworks.
Author: Nicholas St. Fleur
This content is courtesy of, and owned and copyrighted by, https://www.statnews.com and its author.