AI Oversight: Will the States Save Us?
Hey there, AI-Curious friends!
I can’t believe it’s been almost a year since I launched this newsletter. I was just looking back at that very first issue, where I excitedly unpacked Verity Harding's book "AI Needs You" and her compelling argument for citizen engagement in AI governance. Remember how optimistic we once were about thoughtful regulation and public participation guiding our AI future?
Well, about that...
A year later, the regulatory landscape has evolved in ways both predictable and surprising. I’m struck that we've fallen into that classic American pattern: while the EU moves ahead with comprehensive regulation, we're once again placing our hopes in state-by-state approaches as federal action remains elusive, or worse.
The EU Does Something, The US (Basically) Doesn’t
The EU's AI Act – the world's first comprehensive legal framework for artificial intelligence – is now formally in force. Love it or hate it (and there are valid arguments on both sides), you've got to admire the coordinated approach. Clear risk categories, consistent rules across member states, and substantial penalties for non-compliance. Is it perfect? Nope. Will it need revisions as AI evolves? Absolutely. But it's a coherent starting point.
Meanwhile, in America? We're doing what we always do – relying on our federalist roots to create a regulatory patchwork that will likely become a compliance nightmare for companies while offering inconsistent protections for citizens. Sad but true.
The Copyright Mess We're In
Nowhere is this clearer than in the ongoing battles over AI and copyright. No definitive court ruling has fully resolved whether using copyrighted materials for AI training constitutes fair use, leaving creators in limbo.
The New York Times lawsuit against OpenAI has brought these tensions into the spotlight. And while the Copyright Office has made some initial determinations – basically saying AI-generated works aren't copyrightable without substantial human input – the waters get even murkier when you consider that some AI companies have begun seeking licensing agreements with content owners (like the deals between some news organizations and OpenAI/Microsoft).
What does this mean? AI companies are willing to sign licensing agreements if the entity is big and litigious enough? Um, ok…
Federal copyright law wasn't built for this moment. And comprehensive updates? Don’t hold your breath given the current political climate.
State Houses as AI Battlegrounds
So where's the action happening? In state capitals across the country:
California is leading the charge in AI regulation, creating the most comprehensive state-level approach yet, focused on risk management and transparency requirements.
New York is considering legislation that would require AI disclosure and audits in employment and housing contexts.
And predictably, states like Texas and Florida are charting their own paths, focusing more on preventing what they view as politically biased content moderation while imposing fewer constraints on development.
If you want to have a voice, you can find out how to reach your representatives here:
Why We're Here (Again)
The reasons for federal inaction feel like déjà vu all over again:
Most lawmakers still struggle to understand the technology they're trying to regulate (those congressional hearings, yikes!)
Tech lobbying budgets have exploded as companies fight for favorable rules
The partisan divide on regulation seems to grow wider by the day
And yes, there are legitimate concerns about stifling innovation with hastily crafted rules
President Biden's Executive Order last year was a start, but executive actions can only go so far without legislation to back them up.
Where Do We Go From Here?
If you are going to do something, focus on the state level.
Know your state's approach: Is your state legislature considering AI regulations? Who are the key players? What's their perspective? Put pressure on your state and local representatives.
Cross-pollinate good ideas: When you see thoughtful approaches in one state, advocate for them in yours.
Advocate within your workplace: Let your employer know that AI policy matters to you. Ask about your company's AI usage policies or suggest creating them if they don't exist. Join or initiate an AI ethics committee or working group at your organization. Companies are stakeholders too, and internal advocacy can drive responsible AI adoption from the inside out.
This may not be the coordinated, thoughtful approach to AI governance that Harding envisioned a year ago. But it's the reality we've got, and engagement at the state level might be our best path forward for now. It's a familiar refrain, but it really comes down to this: if we want AI oversight, it's going to be up to us.
Until next time, stay curious,
Emily
Sources:
https://www.nytimes.com/2025/03/21/podcasts/hardfork-ai-action-plans.html