On July 23, 2025, the Trump Administration published Winning the Race: AMERICA’S AI ACTION PLAN (“AI Action Plan”).[1] Among the goals of the AI Action Plan is the elimination of inappropriate bias[2] and false information in the government’s AI systems. The corruptions caused by inappropriate biasing, which may be introduced at any number of stages in what has been called “the AI pipeline,” lead to systems and outputs that are unreliable and, in some instances, injurious.[3] A core concern regarding the outputs of such flawed systems is that they can contain AI hallucinations – a phenomenon in which “a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”[4] The roots of this concern are diverse and run deep, but a conspicuous and widely publicized example of hallucinating AI is provided by Google’s attempt to avoid bias and promote diversity in the outputs of its generative AI Gemini model. Google’s tampering with historical truths was ostensibly well-intentioned and harmless, but it exposed some underlying AI-related concerns that carry more serious and potentially injurious consequences. Those consequences arise when a flawed AI system’s output is relied on to make decisions that affect people’s lives. The following text briefly recaps the Google Gemini matter and then discusses aspects of the AI Action Plan that are intended to identify and eliminate inappropriate bias and falsity in government AI systems.
Prelude In Popular Culture
Sometimes, harmless “glitches” and obvious anomalies in AI outputs can help us develop insights into more serious AI-related challenges and issues. For example, in late February of 2024, Google’s generative AI Gemini chatbot churned out images of Asian Vikings, Black Nazis, and Native American popes.[5] Although, from an aesthetic point of view, many of the images had merit, the problem was that Gemini appeared to be providing those results as “factual.”[6] On one level, Gemini’s approach to promoting diversity through falsity was amusing – despite the arguably deplorable state of education in the United States and elsewhere, most users immediately understood that the images were fantasy-driven rather than historically accurate. We recognized that these outputs were awash in lysergic contra-reality. But the underlying challenges that the “untrue” images exposed proved disturbing for many. It was easy to dismiss the cavalcade of ridiculous images as yet another validation of the maxim: “Nonsense In – Nonsense Out.” However, the Gemini situation also inspired thought and, in some cases, apprehension about the ability of AI system creators and providers to have inappropriate effects on any number of serious issues and outcomes – outcomes that can have adverse “real world” effects on our liberties, security, effectiveness, economy, and wellbeing. Some of these concerns are:
(1) How do we identify and flag AI outputs that are simply cloaked attempts to spread false news and narratives that sabotage informed and rational decision-making in political, social, industrial, and other contexts?
(2) What are the ways in which AI systems can be manipulated and “weaponized” to cause misperception and injury?
(3) How vulnerable are our critical systems – healthcare, military, security, education, etc. – to manipulation by intentionally or unintentionally flawed AI?
(4) How do Freedom of Speech concerns affect the analysis (if at all), and do these concerns vary depending on the provider of the system and the intended use of the AI output?
(5) Who is to be held responsible (and potentially liable) for creating and providing flawed AI – and are our defamation, privacy, product liability, and other tort laws sufficient to limit and address these issues?
(6) Are current federal and state laws that apply to digital communications and content sufficient to address the emerging concerns about false and inappropriately manipulative AI-generated content?
(7) What are the security and military uses of intentionally flawed AI as part of the arsenal in asymmetrical warfare and in other conflicts?
An ancillary concern might be whether for-profit public corporations can legitimately focus on social engineering efforts that divert corporate resources into areas that do not generate profits or actually cause damage to profitability.[7]
After a storm of media ridicule erupted over the absurd Gemini output of historical figures that did not exist, Google published the following acknowledgement:
We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.[8]
Google’s responsible approach after the problem became manifest is laudable – but we also owe Google a further debt of gratitude for its harmless (and amusing) demonstration of what can go wrong when AI hallucinations are: (1) generated without a full appreciation of unintended consequences; (2) provided as, or mistaken for, truth; (3) used to promote a false point or narrative; or (4) used to avoid conveyance of objective truth. An additional and fundamental question is: Who is responsible for the decision-making that leads to the intentional or unintentional promulgation of damaging falsehoods created by AI? From one perspective, including notions about allocations of responsibility, the lyric to a classic John Foxx electronic/synthesizer composition comes to mind: “There’s no-one driving.”[9] Really, who is in control of this technological morass? How do we find out who is engineering the mechanisms that so fundamentally affect our lives? On the other hand, “pay no attention to that man behind the curtain” also seems to come into play.[10] A lot of us have already accepted that there is no identifiable author behind AI-driven information, misinformation, and disinformation – unless we have sufficient resources or representatives who are willing to look for “that man behind the curtain” (and maybe hold him accountable). Nonetheless, as the creators, mechanisms, and effects of AI misrepresentations and errors become more generally understood through up-to-date education, popular culture, and responsible journalism – or at least made more readily identifiable via these avenues – the prospects for effective preventative and curative actions increase.[11]
America’s AI Action Plan
On July 23, 2025, the Trump Administration published its AI Action Plan.[12] As stated in the release documents, “[t]he Plan identifies over 90 Federal policy actions across three pillars – Accelerating Innovation, Building American AI Infrastructure, and Leading in International Diplomacy and Security – that the Trump Administration will take in the coming weeks and months.”[13] One of the actions promoted in the AI Action Plan is “[u]pdating Federal procurement guidelines to ensure that the government only contracts with frontier large language model developers who ensure that their systems are objective and free from top-down ideological bias.”[14] As posited in the AI Action Plan, the elimination of U.S. government policy distortions of AI data pools (and outputs) protects free speech rather than confines or eliminates it.[15] In short, the drive is to ensure that AI procured by the Federal government objectively reflects truth rather than social engineering agendas.[16] The resulting AI outputs would therefore be empirically accurate rather than captive to (potentially transitory and/or biased) policy and lobbying influences.
One avenue of analysis leads to questions about whether a provider of AI services can, under Freedom of Speech principles, stamp those services with, for example, the provider’s social or political point of view. The answer is probably “yes,” at least where the bias is clearly indicated to the user – but the user also has the freedom to reject such systems and to base the user’s decision-making on more objectively accurate systems and services. In general, corporate speech enjoys (sometimes qualified) First Amendment protection.[17] Whether a company’s generative AI offerings are “corporate speech” or the personal speech of the system’s creators is an interesting, albeit in this context inconsequential, inquiry. So too, there is a school of thought that generative AI output is not speech at all – it is a non-human construction that usually conforms to grammatical and syntactic rules and “fills in” content without any communicative intention, real “humanity,” or thought processes.[18] These issues are often framed as cutting-edge inquiries into the heart and modern adaptability of our notions of Freedom of Speech. Nonetheless, irrespective of these intriguing issues, if the AI-generated content is defamatory or is otherwise inaccurate or misleading, it has no “right” to be selected as the basis for “real world” decisions. Put another way, AI has no right to lie to you.[19]
Among the Policy Actions recommended in the AI Action Plan to ensure that “Frontier AI Protects Free Speech and American Values” in government AI are the following:
(1) Led by the Department of Commerce (DOC) through the National Institute of Standards and Technology (NIST), revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.
(2) Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.
(3) Led by the Department of Commerce (DOC) through NIST’s Center for AI Standards and Innovation (CAISI), conduct research and, as appropriate, publish evaluations of frontier models from the People’s Republic of China for alignment with Chinese Communist Party talking points and censorship.[20]
So, is there a legitimate fear that the government’s scrubbing of AI bias from its systems can be subject to abuse – such as the substitution of one bias for another? The answer is “yes.” Even the selection of areas and issues that will be allocated AI resources can evidence bias. Given these inherent threats and difficulties, should we simply accept that there will be bias and misdirection in the AI systems upon which the government relies – and in the government’s employment of those systems? The answer is “no.” Moreover, does AI scrubbing to eliminate bias threaten Freedom of Speech if the scrubbing drifts into censorship and substituted bias? The answer is “yes, if we let it.”
The AI Action Plan sums up its perspective on AI and Free Speech as follows:
AI systems will play a profound role in how we educate our children, do our jobs, and consume media. It is essential that these systems be built from the ground up with freedom of speech and expression in mind, and that U.S. government policy does not interfere with that objective. We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas.[21]
In other words, Freedom of Speech is best optimized in the course of government decision-making and other consequential activities when the speaker (and listener) is not misled by concealed biases and false information. This statement is simple and true, but it can prove dangerous if used as a cover for undue censorship and for substitution of one bias for another. Put simply, the old adage applies to this new technological issue: “Eternal vigilance is the price of liberty.”[22] This view is also stated in the Introduction to the AI Plan:
[W]e must prevent our advanced technologies from being misused or stolen by malicious actors as well as monitor for emerging and unforeseen risks from AI. Doing so will require constant vigilance.[23]
As discussed above, ensuring that truth rather than any social engineering agenda drives the generation and practical utility of government systems’ AI output is a key goal in the AI Action Plan. Therefore, to ensure the continual maintenance of this goal, the AI Action Plan further seeks to “Build an AI Evaluations Ecosystem” wherein there will be ongoing evaluations of the reliability and performance of government AI systems.[24] Among the recommended Policy Actions to be undertaken in furtherance of this agenda are:
(1) Publish guidelines and resources through the National Institute of Standards and Technology (NIST) at the Department of Commerce (DOC), including NIST’s Center for AI Standards and Innovation (CAISI), for Federal agencies to conduct their own evaluations of AI systems for their distinct missions and operations and for compliance with existing law.
(2) Support the development of the science of measuring and evaluating AI models, led by the National Institute of Standards and Technology (NIST) at the Department of Commerce (DOC), the Department of Energy (DOE), the National Science Foundation (NSF), and other Federal science agencies.
(3) Convene meetings at least twice per year under the auspices of the National Institute of Standards and Technology’s Center for AI Standards and Innovation (CAISI) at the Department of Commerce (DOC) for Federal agencies and the research community to share learnings and best practices on building AI evaluations.
(4) Invest, via the Department of Energy (DOE) and the National Science Foundation (NSF), in the development of AI testbeds for piloting AI systems in secure, real-world settings, allowing researchers to prototype new AI systems and translate them to the market. Such testbeds would encourage participation by broad multistakeholder teams and span a wide variety of economic verticals touched by AI, including agriculture, transportation, and healthcare delivery.
(5) Led by DOC, convene the NIST AI Consortium to empower the collaborative establishment of new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote the development of AI.[25]
Regarding the appropriate allocation and direction of the government’s AI resources, the AI Action Plan identifies a number of critical areas, including the aggressive development and adoption of AI technologies by the U.S. military, development and employment in next generation manufacturing, increases in AI education and training programs, a focus on open-source and open-weight[26] AI models, and actions in a number of other areas that are beyond the focus of this discussion – but will be discussed in further installments of this series. Nonetheless, the intention of the AI Action Plan with regard to the elimination of inappropriate bias and false information reflects a sentiment expressed by Thomas Jefferson well over two hundred years ago: “Whenever the people are well informed, they can be trusted with their own government; . . . whenever things get so far wrong as to attract their notice, they may be relied on to set them to rights.”[27] These sentiments, although not expressly stated, are in the foundational mix for the AI Action Plan. The emphasis in the AI Action Plan is on empiricism and practical technological utility, not biased social or political agendas. As stated in the Executive Order that preceded the AI Action Plan:
The United States has long been at the forefront of artificial intelligence (AI) innovation, driven by the strength of our free markets, world-class research institutions, and entrepreneurial spirit. To maintain this leadership, we must develop AI systems that are free from ideological bias or engineered social agendas. With the right Government policies, we can solidify our position as the global leader in AI and secure a brighter future for all Americans.[28]
In the final analysis, does the AI Action Plan solve the problems of inappropriately biased and otherwise flawed AI systems? The answer is “no.” But it is a good first step in a process that must necessarily be ongoing and monitored by agents of the public interest. So, in conclusion: This is just the beginning.
*This article is the first in a series of discussions of Winning the Race: AMERICA’S AI ACTION PLAN, issued by the Trump Administration on July 23, 2025.
**Gary Rinkerman is a Founding Partner at the law firm of Pierson Ferdinand, LLP, an Honorary Professor of Intellectual Property Law at Queen Mary University School of Law in London, a member of George Mason University’s Center For Assurance Research and Engineering, and a Senior Fellow at George Mason University’s Center for Excellence in Government Cybersecurity Risk Management and Resilience. The views and information provided in this article are solely the work of the author and do not comprise legal advice. They are not for attribution to any entity represented by the author or with which he is affiliated or a member. All Internet citations and links in this article were visited and validated on July 27, 2025.
[1] A precursor statement to the more developed approach in the AI Action Plan can be found in President Trump’s Executive Order 14179 of January 23, 2025 (Removing Barriers to American Leadership in Artificial Intelligence), which includes provisions revoking the Biden Administration’s Executive Order 14110 of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence). The revocation is based, in part, on the perception that Executive Order 14110 encouraged bias and inappropriate social engineering aspirations in the development and employment of government AI. Trump’s Executive Order 14179 states that U.S. leadership in AI innovation requires that “we must develop AI systems that are free from ideological bias or engineered social agendas.” See Fed. Reg. Vol. 90, No. 20, Friday, January 31, 2025, pp. 8741-8742, https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence.
[2] Depending on the goals of a particular system’s application, there may be instances of constructive biasing. For example, the author of this article conducted a number of AI biasing experiments in conjunction with colleagues at New York University. One experiment conducted by the author included progressively biasing several training sets that mixed Shakespeare’s Sonnet Sequence with John Donne’s series of Holy Sonnets. The increase in biasing toward Donne’s works in several of the latter sets was an intentional attempt to create outputs in which Donne’s “influence” on Shakespeare increased.
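To make the mechanics of that kind of experiment concrete, the following is a minimal, hypothetical Python sketch of “progressive biasing” – building a series of training mixtures in which one author’s lines are weighted ever more heavily. The sample lines, proportions, and sampling scheme are illustrative assumptions only and do not describe the author’s actual experiments.

```python
# Hypothetical sketch of "progressive biasing": a series of training sets in
# which the proportion of Donne lines grows relative to Shakespeare's.
# The corpora, ratios, and sampling scheme are illustrative assumptions only.
import random

shakespeare_lines = [
    "Shall I compare thee to a summer's day?",
    "When, in disgrace with fortune and men's eyes,",
    "Let me not to the marriage of true minds",
]
donne_lines = [
    "Batter my heart, three-person'd God, for you",
    "Death, be not proud, though some have called thee",
    "At the round earth's imagined corners, blow",
]

def build_training_set(donne_fraction: float, size: int = 1000, seed: int = 0) -> list[str]:
    """Sample a training set in which roughly `donne_fraction` of lines come from Donne."""
    rng = random.Random(seed)
    lines = []
    for _ in range(size):
        pool = donne_lines if rng.random() < donne_fraction else shakespeare_lines
        lines.append(rng.choice(pool))
    return lines

# Later sets weight Donne more heavily, so a model fine-tuned on them in
# sequence would show an increasing "Donne influence" in its outputs.
fractions = (0.1, 0.3, 0.5, 0.7, 0.9)
for fraction in fractions:
    training_set = build_training_set(fraction)
    donne_share = sum(line in donne_lines for line in training_set) / len(training_set)
    print(f"target Donne fraction {fraction:.1f} -> sampled share {donne_share:.2f}")
```

The point of the sketch is simply that the bias is introduced upstream, in the composition of the training data, before any model ever produces an output.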
[3] A basic discussion of AI biasing and its various forms can be found in Bias in AI, Chapman University Artificial Intelligence (AI) Hub, https://www.chapman.edu/ai/bias-in-ai.aspx.
[4] See What are AI hallucinations?, IBM Think Newsletter (Sept. 1, 2023), https://www.ibm.com/think/topics/ai-hallucinations.
[5] See, e.g., Field, From Black Nazis to female Popes and American Indian Vikings: How AI went ‘woke,’ The Telegraph, Feb. 23, 2024, https://www.telegraph.co.uk/news/2024/02/23/google-gemini-ai-images-wrong-woke/; Milmo, Google pauses AI-generated images of people after ethnicity criticism, The Guardian, Feb. 22, 2024, https://www.theguardian.com/technology/2024/feb/22/google-pauses-ai-generated-images-of-people-after-ethnicity-criticism; Barrabi, Google pauses ‘absurdly woke’ Gemini AI chatbot’s image tool after backlash over historically inaccurate pictures, New York Post, Feb. 22, 2024, https://nypost.com/2024/02/22/business/google-pauses-absurdly-woke-gemini-ai-chatbots-image-tool-after-backlash-over-historically-inaccurate-pictures.
[6] Google’s system was not the only AI tool that was biased to show historically inaccurate representations of members of various groups. See, e.g., Hammer & Pyle, Adobe Firefly is latest to suffer woke backfire after AI-generated images show black NAZIS, black Vikings and black male and female Founding Fathers – after Google Gemini furor, Daily Mail, May 4, 2024, https://www.dailymail.co.uk/news/article-13194153/Adobe-firefly-AI-google-gemini-black-nazis-vikings.html.
[7] See, e.g., Rinkerman, AI Proxy Wars: The Struggle For Control Of Corporate Adoption And Use Of Artificial Intelligence Technologies, George Mason University, Center for Excellence in Government Cybersecurity Risk Management and Resilience for a discussion of AI-related shareholder proposals as a means by which shareholders can attempt to secure more transparency and control regarding the target corporation’s AI-related policies and decision-making. https://crc.gmu.edu/ai-proxy-wars-the-struggle-for-control-of-corporate-adoption-and-use-of-artificial-intelligence-technologies-article-by-gary-rinkerman/. See also alternative publication forum at George Mason University, Center for Assurance Research and Engineering, https://care.gmu.edu/ai-proxy-wars-the-struggle-for-control-of-corporate-adoption-and-use-of-artificial-intelligence-technologies-article-by-gary-rinkerman/.
[8] Hess, Google pauses its Gemini AI tool after critics blasted it as ‘too woke’ for generating images of Asian Nazis in 1940 Germany, Black Vikings and female medieval knights, Daily Mail, Feb. 22, 2024, https://www.dailymail.co.uk/sciencetech/article-13114705/google-pauses-gemini-ai-woke-images.html.
[9] “No-One Driving” is a song by John Foxx that appears on his Metamatic album, which was released in 1980. The placement of the lyric within the context of Foxx’s prominent use of electronic instruments (synthesizers, drum machines, electronic percussion) evokes (and was influenced by) the dystopian sci-fi novels of J.G. Ballard – and the sense of being overwhelmed by rigid but out-of-control technology. A good discussion of Foxx and his work can be found in John Foxx: Howling Into The Void, Classic Pop (April 13, 2025), https://www.classicpopmag.com/features/john-foxx-howling-into-the-void/.
[10] “Pay no attention to that man behind the curtain” is a line spoken by The Wizard of Oz, played by Frank Morgan, in the Metro-Goldwyn-Mayer (MGM) film The Wizard of Oz (1939). The line has gained independent traction in popular culture to signify the attempt to conceal or anonymize the individual(s) responsible for the promulgation of technology-driven hokum.
[11] A particularly interesting study on users’ perceptions of AI-presented inaccuracies (even about the users) is described in Wang, Anyi, Das Swain, and Goel, Navigating AI Fallibility: Examining People’s Reactions and Perceptions of AI after Encountering Personality Misrepresentations, Cornell University (Submitted May 25, 2024), https://arxiv.org/pdf/2405.16355. Some of the study’s conclusions are particularly noteworthy: “We emphasized the ever-evolving nature of people’s AI knowledge acquired from viewing AI outputs by pinpointing three rationales that people adopted to interpret AI (mis)representations: AI works like a machine, a human, and/or magic. These rationales are bounded by people’s existing AI knowledge and are highly connected to people’s tendency to over-trust, rationalize, forgive AI misrepresentations.” The core observation gleaned from the study was stated by the authors as follows: “We also found that people’s existing AI knowledge, i.e., AI literacy, could significantly moderate changes in people’s trust in AI after encountering AI misrepresentations. We discussed how people navigate AI fallibility through their evolving AI knowledge and provided implications for designing and developing responsible mitigation strategies that consider people’s evolving AI knowledge to reduce potential harms when AI fails.” So, if popular culture gave many of us our first widespread insights into AI fallibility and helped to dispel the false auras of “magic” and accuracy, perhaps we should thank Gemini’s Lysergic Vikings – but don’t let them assist in designing or running our homes.
[12] A precursor statement to the more developed approach in the AI Action Plan can be found in President Trump’s Executive Order 14179 of January 23, 2025 (Removing Barriers to American Leadership in Artificial Intelligence), which includes provisions revoking the Biden Administration’s Executive Order 14110 of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence). The revocation is based, in part, on the perception that Executive Order 14110 encouraged bias and inappropriate social engineering aspirations in the development and employment of government AI. Trump’s Executive Order 14179 states that U.S. leadership in AI innovation requires that “we must develop AI systems that are free from ideological bias or engineered social agendas.” See Fed. Reg. Vol. 90, No. 20, Friday, January 31, 2025, pp. 8741-8742, https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence.
[13] See https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/.
[14] Id., AI Action Plan, p. 4, https://whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[15] AI Action Plan, p. 4, https://whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[16] Id.
[17] See, e.g., Citizens United v. FEC, 558 U.S. 310, 365 (2010) (government may not, under the First Amendment, suppress political speech on the basis of the speaker’s corporate identity); United States v. Safehouse, 2025 WL 2080096 (3d Cir. July 24, 2025) (the purpose of extending rights to corporate persons is to protect the rights of natural persons acting through the corporate form). Against this backdrop, however, it is notable that corporate speech in furtherance of tortious activities is not sheltered. For example, unless certain exceptions apply, e.g., fair use, trademark infringement and dilution can be subject to injunction although the injurious use of marks is a form of speech. See Jack Daniel’s Properties, Inc. v. VIP Prod. LLC, 599 U.S. 140 (2023); see also, e.g., Meckes, No First Amendment Right to Confuse Consumers, High Court Holds, Global IP & Technology Law Blog, June 8, 2023, https://www.iptechblog.com/2023/06/no-first-amendment-right-to-confuse-consumers-high-court-holds/; Hudson, Trademarks and the First Amendment, Free Speech Center at Middle Tennessee State University (updated July 25, 2025).
[18] An interesting discussion of this school of thought can be found in Burk, Asemic Defamation, Or, The Death Of The AI Speaker, First Amendment Law Review, Vol. 22, pp. 189-232 (2024).
[19] If you are simply using the AI for generating entertaining fiction, the issues begin to dissolve. For example, as long as you are aware the system is generating nonsense and inaccuracies, it is OK to receive an AI-generated affirmation that the Hoover Dam is located on the dark side of the moon.
[20] AI Action Plan, p. 4, https://whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[21] Id. https://whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[22] This quote is frequently attributed to Thomas Jefferson. Many, like myself, will be disheartened to learn that there is no evidence to confirm that Thomas Jefferson ever said “eternal vigilance is the price of liberty.” See Berkes, Eternal Vigilance (Sept. 7, 2010), https://www.monticello.org/exhibits-events/blog/eternal-vigilance/; Deis, Eternal vigilance is the price of liberty, this day in quotes (Jan. 28, 2023) https://www.thisdayinquotes.com/2023/01/eternal-vigilance-is-the-price-of-liberty/. The irony of this apparent misattribution, especially in the context of this article, is worth a bit of amused contemplation.
[23] Michael J. Kratsios (Assistant to the President for Science and Technology), David O. Sacks (Special Advisor for AI and Crypto) and Marco A. Rubio (Assistant to the President for National Security Affairs), AI Action Plan, Introduction, p.2. https://whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[24] AI Action Plan, p. 10, https://whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[25] AI Action Plan, p. 10, https://whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[26] As explained in Ramlochan, Openness in Language Models: Open Source vs Open Weights vs Restricted Weights, Prompt Engineering & AI Institute (Dec. 12, 2023): “Weights are the output of training runs on data and are not human-readable or debuggable. They represent the knowledge an artificial neural network has learned. In the context of AI, open weights refer to the availability of these weights for use or modification.” https://promptengineering.org/llm-open-source-vs-open-weights-vs-restricted-weights/.
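As a concrete (and heavily simplified) illustration of that definition, the following sketch trains a toy model and saves its learned weights to a file. The toy model, training loop, and file name are assumptions for illustration only, but the principle is the same for large models: the “weights” are numeric arrays that can be published and reused independently of the training data or code, and releasing them is what “open weights” refers to.

```python
# Minimal illustration that model "weights" are just learned numeric arrays.
# The toy linear model and file name below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # toy training inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)    # toy training targets

w = np.zeros(3)                                # the model's weights, learned below
for _ in range(500):                           # plain gradient descent on squared error
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.1 * grad

np.save("model_weights.npy", w)                # "publishing" the weights as a file
reloaded = np.load("model_weights.npy")        # anyone with the file can reuse them
print("learned weights:", reloaded)            # not human-readable knowledge, just numbers
```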
[27] This quotation is accurately attributed to Thomas Jefferson. See Thomas Jefferson, Letter to Richard Price, January 8, 1789, Library of Congress, https://www.loc.gov/resource/mtj1.010_0744_0750/?sp=1, transcription, National Archives, Thomas Jefferson, Letter to Richard Price, January 8, 1789, https://founders.archives.gov/documents/Jefferson/01-14-02-0196. See also, note on “the price of liberty,” above.
[28] Executive Order 14179 of January 23, 2025 (Removing Barriers to American Leadership in Artificial Intelligence), See Fed. Reg. Vol. 90, No. 20 Friday, January 31, 2025, pp. 8741-8742. See https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence.