Clause in Budget Bill Could End State AI Regulations

by Larry Magid

One of the provisions in the “big, beautiful” budget bill would ban states from regulating AI for the next 10 years.

The bill, which narrowly passed the House by a single vote (215–214), includes a clause stating: “No state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.” Because that 10-year period begins on the date of enactment, the provision would take effect immediately upon being signed.

The entire budget bill is now being considered by the Senate, which could eliminate or modify this section.

I’m a big fan of generative AI, but I also recognize that any powerful technology comes with risks and unintended consequences. Just as we have laws to regulate airlines, vehicles, and food and drugs, we also need thoughtful oversight of AI.

Skepticism about state internet laws

As someone who has closely followed internet regulation since the early 1990s, I’ve often been critical of state-level legislation, not just because of what some of these bills attempt to do, but because they risk creating a patchwork of conflicting laws that are difficult for companies to navigate. It’s one thing to regulate activity that occurs entirely within a state’s borders, but quite another to try to govern a “product” that inherently transcends both state and national boundaries.

Although I prefer thoughtful federal legislation to state-level internet controls, I recognize that the federal government is often very slow in enacting consumer protection laws. I love our system of government, but even under normal circumstances, it’s not easy to get consensus in a country as large and diverse as ours, and it’s especially difficult in today’s highly polarized political climate.

In an ideal world, the federal government would take the lead in regulating AI. But given the current Congress and White House, that’s unlikely to happen anytime soon. In the meantime, it’s often state and local governments that fill the gap in protecting consumers.

Could ban a California medical disclosure law

If the Senate passes and the president signs the bill with this provision, it will not only curtail future legislation but also prevent states from enforcing laws that are already on the books. For example, last year both houses of California’s legislature unanimously passed the “Health care services: artificial intelligence act” (AB 3030), which requires health care providers to “include both a disclaimer that indicates to the patient that a communication was generated by generative artificial intelligence” and “clear instructions describing how a patient may contact a human health care provider, employee, or other appropriate person.”

I love that my health care provider uses audio recording and AI to generate detailed reports after each visit with my primary care physician. But the first time I saw one on my patient portal, I was puzzled by how comprehensive it was — and amazed that my doctor could recall everything we had discussed. Only after doing a bit of research did I learn that the report was generated by AI using Microsoft’s DAX Copilot ambient-listening technology. Patients shouldn’t have to be internet sleuths to get such a basic disclosure, but the budget bill could render the requirement unenforceable.

Tennessee could be “All Shook Up” over the provision

There are plenty of other state AI regulation laws already on the books or under consideration across the country, including the ELVIS Act (Ensuring Likeness Voice and Image Security Act), which was signed into law by Tennessee Gov. Bill Lee last March after unanimous passage by the state’s overwhelmingly Republican legislature. If the U.S. Senate passes the budget bill with this provision, ELVIS will have “left the building.”

Sen. Marsha Blackburn (R-TN) has expressed opposition to the AI clause in the budget bill. “We certainly know that in Tennessee we need those protections,” she said during a hearing. “And until we pass something that is federally preemptive, we can’t call for a moratorium.”

And speaking of AI disclosure, I used ChatGPT to help find sources for this article, but I verified all the facts and did my own writing.

This post is excerpted from one that first appeared in the Mercury News.