Artificial intelligence has the potential to revolutionize how drugs are discovered and change how hospitals deliver care to patients. But AI also carries the risk of causing irreparable harm and perpetuating historic inequities.
Would-be health care AI regulators have been spinning in circles trying to figure out how to ensure AI is used safely. Industry bodies, investors, Congress, and federal agencies have been unable to agree on which voluntary AI validation frameworks will help keep patients safe. These questions have pitted lawmakers against the FDA, and venture capitalists against the Coalition for Health AI (CHAI) and its Big Tech partners.
The National Academies on Tuesday zoomed out, discussing how to manage AI risk across all industries. At the event — one in a series of workshops building on the National Institute of Standards and Technology's (NIST) AI Risk Management Framework — speakers largely rejected the notion that AI is a beast so different from other technologies that it needs entirely new approaches.