State Attorneys General Urge Congress: Do Not Block State-Level AI Legislation

Artificial intelligence has entered nearly every corner of modern life—from healthcare and transportation to finance, education, and national security. But as AI adoption accelerates, so do concerns about its risks, its ethical implications, and its potential for misuse. In one of the strongest political statements yet on AI oversight, attorneys general from 35 U.S. states and the District of Columbia have delivered a direct message to Congress: Do not strip states of their power to regulate AI.

In a formal letter sent to congressional leaders, this bipartisan coalition warned that halting or overriding state-level AI laws could invite “disastrous consequences” for Americans. The message was clear: while Congress continues to debate what national AI standards should look like, states must maintain the authority to enforce their own protections.

Led by attorneys general from New York, North Carolina, Utah, and New Hampshire, the group emphasized that AI is already producing real-world harm—and delaying regulation in favor of federal uniformity could leave millions vulnerable.


A National Warning: States Demand the Right to Regulate AI

New York Attorney General Letitia James, who played a central role in drafting the letter, stressed that states cannot afford to wait for Washington to reach consensus.

“Every state should be able to enact and enforce its own regulations regarding AI to protect its residents.”

The coalition's statement highlights a growing divide between state governments trying to safeguard their residents and a federal gridlock that has left AI largely unregulated at the national level.

Congress continues to weigh legislative proposals on artificial intelligence, but none has yet become law. Meanwhile, AI technologies are developing at lightning speed, outpacing traditional policymaking cycles.


The Growing Clash: Tech Giants vs. State Regulators

The letter from the attorneys general comes as major technology companies push aggressively for federal preemption—a system in which federal AI laws override state laws.

What Tech Companies Want

Companies including OpenAI, Meta, Google, and major investors such as Andreessen Horowitz argue that multiple, differing state laws would:

  • Create confusion
  • Increase compliance costs
  • Slow innovation
  • Fragment the national AI ecosystem

These corporations prefer one uniform federal framework they believe would make it easier to build, deploy, and maintain AI systems across all 50 states.

What States Say in Response

The attorneys general countered that blocking state laws without replacing them with federal standards would leave Americans defenseless. They argue that:

  • AI is already causing real harm
  • Regulations are urgently needed
  • States have traditionally protected consumers when federal leadership has lagged

They also cited documented cases involving injuries, misuse, psychological harm, and even fatalities associated with unregulated AI tools—including chatbots generating dangerous recommendations.


States Are Already Acting While Congress Debates

The letter emphasized that waiting for Congress to craft comprehensive federal legislation is not an option. Many states have already enacted their own AI safeguards.

Notable State-Led AI Regulations:

1. Combating Non-Consensual AI-Generated Sexual Images

Multiple states now make it a crime to create or distribute AI-generated explicit images without consent—addressing a rapidly growing form of digital abuse.

2. AI Restrictions in Political Advertising

Several states have introduced or passed legislation banning the use of deepfakes and AI-generated content in political campaigns to prevent voter manipulation.

3. AI Use Limits by Insurance and Healthcare Providers

A number of states now restrict how AI can be used when:

  • Assessing insurance eligibility
  • Determining coverage
  • Evaluating medical decisions

4. Colorado’s Landmark Anti-Discrimination AI Law

Colorado passed one of the nation’s first comprehensive laws requiring companies to ensure their AI systems do not discriminate in:

  • Housing
  • Employment
  • Education

Tech firms have criticized these rules, saying they impose unrealistic burdens on developers—but states argue they protect fundamental civil rights.

5. California’s Sweeping AI Accountability Measures

California—home to many leading AI companies—has enacted some of the toughest AI transparency rules in the country.

Starting in 2026, companies must:

  • Disclose and document the data used to train AI models
  • Provide tools that can identify AI-generated content
  • Submit detailed risk-mitigation plans for advanced AI systems

California Attorney General Rob Bonta also supported the multi-state letter, signaling that even tech-heavy states believe regulation is necessary.


Federal Pushback: The Trump Administration’s Position

The battle over AI governance escalated when the Trump administration called for restrictions on state autonomy.

President Donald Trump recently urged Congress to:

  • Ban state AI laws
  • Include this ban in the National Defense Authorization Act
  • Assert federal authority to create uniform AI rules

Sources indicate the administration even considered extreme measures such as:

  • Suing states for passing AI regulations
  • Withholding federal funding
  • Using federal supremacy arguments to invalidate state AI policies

As of this writing, these actions have been paused—but the conflict remains unresolved.


Congress Has Already Rejected a Nationwide Ban Once

Earlier this year, the U.S. Senate voted 99–1 to reject a proposal that would have blocked states from regulating AI. Legislators from both parties agreed that:

  • States have a constitutional right to protect citizens
  • Federal legislation is far from ready
  • Eliminating state authority would expose millions to unchecked AI risks

This overwhelming bipartisan consensus suggests that Congress is hesitant to strip states of their power—particularly without offering a national regulatory framework in return.


Why the Fight Over State Authority Matters

The core issue is not whether the federal government should regulate AI—it absolutely should. The question is:

Should states lose their power to act while Congress remains gridlocked?

The attorneys general argue that the stakes could hardly be higher. AI systems now influence:

  • Housing decisions
  • Hiring and firing
  • Criminal sentencing
  • Healthcare diagnostics
  • Educational opportunities
  • Financial lending
  • Online safety and content moderation

Without regulation, these systems can reinforce biases, make incorrect predictions, and expose individuals to deception, manipulation, or harm.

Real-World Consequences Are Already Here

Examples cited include:

  • AI chatbots giving dangerous advice
  • Deepfake crimes multiplying
  • Algorithmic discrimination in housing and insurance
  • AI-generated political misinformation
  • Non-consensual explicit deepfake abuse
  • Automated decisions unfairly affecting jobs and benefits

With these harms increasing daily, states argue they cannot sit idly by.


What Happens Next? The Future of AI Regulation in America

With many state laws set to take effect in 2026, the next two years will likely determine whether:

  • AI regulation becomes a state-by-state patchwork
  • Congress finally passes comprehensive national rules
  • Federal preemption overrides state authority
  • A hybrid system emerges

Possible Outcomes:

1. A Decentralized, State-Led Approach

States continue passing diverse laws tailored to local needs—similar to data privacy laws today.

2. A Unified Federal Framework

Congress eventually passes national AI standards that override conflicting state laws, but this requires bipartisan cooperation that currently seems unlikely.

3. A Dual System

States regulate consumer protection and discrimination, while federal laws govern national security and high-risk AI.

4. A Legal Battle

If the federal government attempts to ban state laws, a major constitutional fight could reach the Supreme Court.


Why States Refuse to Back Down

The attorneys general maintain that their primary duty is to protect residents. Without state-level enforcement:

  • AI abuses may go unchecked
  • Corporations may face fewer consequences
  • Citizens could be subject to new forms of exploitation
  • Innovation may outpace ethical safeguards
  • AI risks could grow exponentially before federal action arrives

Their message to Congress is unmistakable:

“Until federal lawmakers act, do not block states from defending their people.”


Conclusion: A Defining Moment for American AI Governance

The fight over artificial intelligence regulation is shaping up to be one of the defining policy battles of the decade. On one side stand state leaders demanding immediate protections for their residents; on the other stand federal officials and tech giants advocating for a single nationwide standard.

With AI expanding at unprecedented speed—and real-world harms mounting—Americans face profound questions about safety, privacy, civil rights, and the future of technology in society.

What remains clear is that state attorneys general are prepared to defend their authority fiercely. As they emphasize, regulation delayed may be regulation denied—and in the world of AI, delays can have irreversible consequences.

Whether AI governance in the U.S. becomes decentralized, federally standardized, or a hybrid model will depend on decisions made in the months ahead. But for now, the states’ message is unmistakable: they will not stand down while waiting for Congress to act.