US AI legislation is evolving rapidly as artificial intelligence (AI) transforms industries and raises new ethical and regulatory challenges. In 2025, the landscape reflects a patchwork of state-led initiatives and limited federal action, addressing concerns like bias, transparency, and consumer protection. With over 40% of enterprise-scale businesses adopting AI, lawmakers are racing to balance innovation with accountability.
Federal AI Legislation: A Fragmented Approach So Far
At the federal level, comprehensive AI legislation remains absent, with policymakers relying on targeted bills and executive actions. The National Artificial Intelligence Initiative Act of 2020, enacted as Division E of the William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021 (H.R. 6395, Public Law 116-283), focuses on promoting AI research and development (R&D) rather than imposing strict regulations. It coordinates AI activities across federal agencies to maintain U.S. leadership in AI innovation.
In 2023, the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence set guidelines for federal agencies, emphasizing cybersecurity, critical infrastructure standards, and consumer protection. It directed agencies such as the Departments of Energy and Defense to apply existing laws to AI development and mandated “red-teaming” to stress-test dual-use AI models for security risks.
Bipartisan bills introduced in 2024, such as the AI Advancement and Reliability Act (H.R. 9497) and the Future of Artificial Intelligence Innovation Act (S. 4178), aim to establish the AI Safety Institute within the National Institute of Standards and Technology (NIST) to evaluate AI models and develop guidelines for them. The CREATE AI Act (H.R. 5077; S. 2714) seeks to make the National AI Research Resource permanent, supporting AI R&D. Despite their bipartisan backing, none of these bills has passed, reflecting a lack of consensus on comprehensive federal AI regulation.
The Federal Artificial Intelligence Risk Management Act of 2023 (S. 3205) requires federal agencies to adopt NIST’s AI Risk Management Framework, providing standards for AI development, procurement, and cybersecurity. While voluntary for private entities, it sets a precedent for risk-based AI governance. The AI Disclosure Act of 2023 proposes mandatory disclosures for AI-generated outputs, treating violations as deceptive practices under the Federal Trade Commission (FTC) Act.
A proposed 10-year moratorium on state AI regulation was included in an earlier draft of the One Big Beautiful Bill Act (H.R. 1), introduced by the House Energy and Commerce Committee as part of the 2025 budget reconciliation process. This provision aimed to centralize AI governance at the federal level, preventing states from enacting AI-specific laws. However, on July 1, 2025, the U.S. Senate voted 99-1 to remove the moratorium, with Sen. Marsha Blackburn (R-TN) leading the amendment to strike it, following opposition from state attorneys general, governors, and advocacy groups who argued it would undermine consumer protections and state sovereignty. A compromise to reduce the moratorium to five years with exemptions for child safety and publicity rights failed due to concerns over vague language. The final version of the bill, signed into law on July 4, 2025, allows states to continue regulating AI, preserving existing and future state AI laws.
State AI Legislation: Filling the Federal Void
In the absence of comprehensive federal US AI legislation, states have taken the lead, creating a diverse regulatory patchwork. Since 2019, 17 states have enacted 29 AI-related bills, focusing on data privacy, accountability, and transparency. Below are key state-level developments:
California
California has been a frontrunner in AI regulation. The California AI Transparency Act (SB-942), effective January 1, 2026, requires developers of generative AI systems to provide free AI detection tools and allows users to mark AI-generated content. It imposes penalties of $5,000 per violation for non-compliance. California Assembly Bill 2013, also effective January 1, 2026, mandates that developers disclose summaries of datasets used to train generative AI systems. The Health Care Services: Artificial Intelligence Act (AB 3030) requires healthcare providers using generative AI for patient communications to include disclaimers and provide access to human providers.
However, Governor Gavin Newsom vetoed California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), introduced in 2024. The bill would have mandated safety testing for powerful AI models but faced opposition over concerns that it could stifle innovation.
Colorado
The Colorado AI Act, effective February 1, 2026, adopts a risk-based approach, targeting “high-risk” AI systems that make consequential decisions in areas like employment, healthcare, and housing. It defines algorithmic discrimination as unlawful differential treatment based on protected characteristics and requires developers and deployers to conduct impact assessments and disclose risks. The state’s attorney general has exclusive enforcement authority, with violations treated as deceptive trade practices.
Utah
Utah’s Artificial Intelligence Policy Act (S.B. 149), effective May 1, 2024, and extended to 2027, establishes liability for undisclosed AI use that violates consumer protection laws. It created the Office of Artificial Intelligence Policy and a regulatory AI analysis program, emphasizing transparency and consumer protection.
Tennessee
Tennessee’s Ensuring Likeness Voice and Image Security (ELVIS) Act (HB 2091), effective July 1, 2024, targets AI-generated deepfakes by prohibiting unauthorized use of a person’s voice or likeness. It strengthens protections for intellectual property and privacy, particularly for musicians.
Other States
- Illinois: HB 3773, effective January 1, 2026, amends the Illinois Human Rights Act to prohibit employers from using AI in ways that discriminate based on protected classes. It also bans zip code-based recruitment to prevent algorithmic bias.
- Texas: Legislation introduced in 2024 regulates high-risk AI systems used by state agencies to prevent discrimination.
- Virginia and Washington: Both states are considering AI transparency bills to ensure users know when they interact with AI systems.
- Hawaii: HB 465, introduced January 21, 2025, prohibits retailers from using AI-enabled dynamic pricing for food items in federal assistance programs.
Key Themes in US AI Legislation
State and federal AI legislation in 2025 focuses on several recurring themes:
- Algorithmic Bias and Discrimination: Laws like Colorado’s AI Act and Illinois’ HB 3773 aim to prevent AI systems from perpetuating bias in consequential decisions.
- Transparency and Disclosure: California’s SB-942 and Utah’s AI Policy Act require clear labeling of AI-generated content and disclosures about AI use.
- Consumer Protection: States emphasize protecting consumers from AI-related harms, such as deepfakes (Tennessee’s ELVIS Act) or deceptive practices.
- Data Privacy: States like California and Tennessee incorporate AI into existing data privacy frameworks, requiring impact assessments for automated profiling.
Challenges and Future Outlook
The lack of federal US AI legislation to date has led to a complex regulatory landscape, with states adopting varied approaches. This patchwork creates compliance challenges for businesses operating across state lines, as requirements differ significantly.
At the federal level, bipartisan support for targeted AI bills suggests progress toward standardized guidelines, but comprehensive legislation remains elusive due to differing priorities. The Trump administration’s expected lighter regulatory approach may prioritize innovation over strict oversight, potentially shifting enforcement to existing laws like those enforced by the FTC.
Globally, the European Union’s AI Act, which entered into force on August 1, 2024, provides a comprehensive framework that contrasts with the United States’ fragmented approach and may influence future federal efforts. States like California and Colorado may continue to draw inspiration from the EU’s risk-based model.
US AI legislation in 2025 reflects a dynamic interplay between state innovation and federal caution. States like California, Colorado, and Utah are leading with targeted laws addressing AI’s risks, while federal efforts focus on R&D and voluntary frameworks. As AI adoption grows, balancing innovation with ethical governance will remain a critical challenge for policymakers. Businesses must stay informed about evolving state and federal regulations to ensure compliance in this rapidly changing landscape.
BillCam – DailyClout’s FREE Legislation Tracking Tool
LegiSector – DailyClout’s Industry-Specific Legislation Tool
The post Current US AI Legislation: State and Federal Developments in 2025 appeared first on DailyClout.
Author: DailyClout