(Just The News)—Artificial intelligence has become a standard tool in most industries, including health care.
Yet, institutions are still working to draw lines around acceptable use within their specific contexts.
For instance, schools may tell students they can use the technology to craft an outline but not to write a paper. In mental health care, one legislator has suggested the technology should go no further than administrative tasks.
Rep. Melissa Shusterman, D-Paoli, has introduced legislation in the Pennsylvania state House that would prohibit mental health professionals from using AI for any tasks beyond the administrative workload it can help streamline.
“As artificial intelligence (AI) is increasingly being used in mental and behavioral health care settings, the potential risks associated with providing inaccurate, biased, or inconsistent medical recommendations can undermine patient care, resulting in substandard services and possible harm to the patient,” wrote Shusterman in a memo supporting the bill.
As it stands, many professionals use AI to assist with scheduling, accounting, notetaking, and emailing. Today’s business software across industries typically includes some level of AI, often running automatically, such as the phrasing and grammar suggestions in an email or word-processing app.
Clinical uses, however, open a brave new world of medical care. Medical schools are incorporating AI tools into their educational frameworks, heralding the technology’s potential uses from diagnosis to monitoring and therapeutics, but states are still working out the specifics of regulation. Blanket prohibitions could derail or delay innovations some hope will fundamentally change our relationship to disease.
For its part, the World Health Organization has outlined its ethical concerns around AI while detailing core principles for its use. In the U.S., thinking on the matter at the federal level has shifted drastically with different administrations, leaving an open field for experimentation.
Shusterman noted that there are no FDA-approved chatbots.
Outside of clinical settings, there has been growing alarm about increased personal use of chatbots for therapeutic purposes.
The personalized nature of the relationship between user and bot has led to mixed results. While some adopters find AI a useful way to talk through their thoughts, instances of self-harm and even what has been dubbed “AI psychosis” have experts wondering whether the benefits are worth the risks.
In response to these concerns, ChatGPT maker OpenAI said it is scanning user conversations for signs of trouble. When spotted, humans review the case and take appropriate action, whether that’s suspending an account or reporting a user to the authorities.
Shusterman’s legislation won’t stop incidents like these from happening to users working directly with AI, but it could stop mental health care professionals from becoming overly reliant on the technology to answer tough questions. Diagnostic criteria are often complex, and symptoms and behaviors overlap across a wide variety of mental disorders.
“Mental health care requires a personalized approach based on human emotion, education and professional experience, as well as standards of ethics,” wrote Shusterman. “AI can be permitted to function as a supplementary administrative tool but not replace the expertise of health care providers.”
Author: Christina Lengyel, The Center Square