MEMRI Daily Brief No. 805
The following is an op-ed by MEMRI Executive Director Steven Stalinsky, Ph.D., for the Forbes Nonprofit Council. Titled “AI Industry Needs Standards To Protect Against Bad Actors,” it was published by Forbes on June 18, 2025.[1]
Exemplifying the legitimate fears of many who study AI and warn of its dangers, Anthropic revealed on May 23 that in a test of its new Claude Opus 4, the system chose to blackmail one of its engineers to prevent him from shutting it down. Early versions of the system had been found to comply with dangerous instructions and had even expressed a willingness to assist with terror attacks. Anthropic said these problems had largely been resolved in the current version, but it also openly acknowledged, on May 22, that in internal testing Opus 4 had been more effective than prior models at helping users produce biological weapons.
While Anthropic should be commended for its candor, eliminating such assistance to terrorism should not be left to any one party.
As AI technology advances meteorically, affecting society both positively and negatively, and as it is adopted across all sectors, cases like that of Opus 4 can be expected to become commonplace if nothing is done.
Concern Grows Over AI’s Use By Extremist And Terrorist Groups
Of growing concern is the wholesale adoption of AI by extremist and even terrorist groups, for outreach, recruitment, and incitement, and for planning and supporting actual attacks. AI could soon become a vital weapon in their online arsenal and a disruptor both in mainstream online spaces and on their own channels.
AI use in terror attacks could also be a challenge for law enforcement unless swift action is taken. In my research, I have found that groups and individuals are talking about using AI to plan terror attacks, to make weapons of mass destruction, to organize armed uprisings to overthrow the government and more. Others have discussed using AI for developing weapons systems, including drones and self-driving car bombs.
Recent examples include the man who killed 14 people and wounded dozens on Bourbon Street in New Orleans on New Year’s Day 2025; he used AI-enabled Meta smart glasses in preparing and executing the attack. They also include a teen in Israel who consulted ChatGPT before entering a police station with a weapon on March 5 and attempting to stab a police officer.
While Platforms Already Ban Such Activity, Enforcement Is A Challenge
Having spent over two decades heading a nonprofit whose mission includes supporting the U.S. government in counterterrorism and law enforcement, assisting the tech community, and, in particular, strategizing how to deal with terrorist use of the internet, social media, and other technologies, I have for years been calling on tech companies and their CEOs to develop best practices and industry standards to fight terrorist use of their platforms. The time for the AI industry to do so is now.
By this point, the AI industry should be capable of devising strategies to keep terrorists and other criminals from using its products, and its companies should be able to collaborate on industry standards to keep terrorists out. But they must be committed to doing so.
Most of these platforms ban such activity in their terms of service—but are these policies being enforced? For example, OpenAI’s ChatGPT states in its Terms of Use that a user “may not use our Services for any illegal, harmful, or abusive activity.” Its CEO, Sam Altman, said in October 2024 at a “fireside chat” at Harvard: “Should GPT-4 generate hate speech? Fairly easy for us to say no to that.”
The Acceptable Use Policy of xAI’s Grok states: “Do not harm people or property … [or] Critically harm or [promote] critically harming human life (yours or anyone else’s) … [or] develop bioweapons, chemical weapons, or weapons of mass destruction.”
Perplexity AI’s Terms of Service prohibit its use “in a manner that is obscene, excessively violent, harassing, hateful, cruel, abusive, pornographic, inciting, organizing, promoting or facilitating violence or criminal activities.”
Microsoft, Google and DeepSeek all similarly ban such activities in their terms of service.
Each of these companies has good, and often overlapping, ideas on dealing with extremism and terrorism, and these could serve as a starting point for creating industry standards to follow. But enforcing these terms of service poses a challenge, as has been seen since the earliest days of the industry.
Microsoft’s chatbot Tay, released in March 2016, was shut down within 24 hours after it tweeted pro-Hitler messages; yet today, the danger posed by hate groups’ use of AI has increased by orders of magnitude. One history chat app let users converse with simulated historical figures, including Hitler and Nazi propagandist Joseph Goebbels. Users can still create extremist content on many AI platforms, despite terms of service that prohibit it.
OpenAI CEO Sam Altman explained in a 2023 interview that “time” is needed “to see how [the technology is] used” and that “we’re monitoring [it] very closely.” In testimony before a Senate committee on May 8, Altman was asked by Senator Jacky Rosen whether he would consider collaborating with civil society to create a standard benchmark for AI related to antisemitism, to be used subsequently for other forms of hate as well. Altman replied that “of course, we do collaborate with civil society on this topic and we are excited to continue to do so” but that “there will always be some debate and the question of free speech in the context of AI is novel.”
Industry And Government Leaders Must Step In – Now
The need for standards and guidelines for AI technology is clear. But to date, there have been no official moves to examine criminal and extremist use of this technology. While the National Institute of Standards and Technology has reportedly developed frameworks for responsibility in AI use, few seem to know about them.
Early on, AI leaders promised to address the spread of hate and extremism on their platforms. But they have failed to deliver, and this alone is why I believe companies themselves cannot be trusted to deal with terrorists using their products. As we saw with Claude Opus 4, AI has the potential to help facilitate unimaginable terrorist attacks. Government and industry together must act fast to prevent this from happening.
AUTHOR
Steven Stalinsky
Steven Stalinsky is Executive Director of MEMRI.
REFERENCE:
[1] Forbes.com/councils/forbesnonprofitcouncil/2025/06/18/ai-industry-needs-standards-to-protect-against-bad-actors, June 18, 2025.