AI Safety Theater: When Regulation Becomes Performance
There’s a hearing happening right now in Congress about AI safety. Senators are asking why ChatGPT can write bomb-making instructions, while the AI systems actually routing healthcare claims and approving mortgages operate in complete regulatory darkness. This is what happens when policy theater replaces policy thinking.
Diogenes would have loved this. Here’s a room full of people performing concern about the wrong problem, while the actual problem happens in systems they don’t understand, run by companies they can’t see, making decisions they’ll never audit.
The Security Theater Playbook
We’ve seen this movie before. After 9/11, we built an elaborate aviation security apparatus focused on the specific threats of September 10th, 2001. Box cutters got banned. Shoes got X-rayed. Water bottles became weapons of mass destruction. Meanwhile, the actual vulnerabilities in our transportation infrastructure remained largely unaddressed, because fixing them would have required admitting how insecure they were in the first place.
The AI safety conversation is following the same script. We’re having public hearings about chatbots writing college essays, while the algorithms determining who gets hired, who gets healthcare, and who gets arrested operate without meaningful oversight. We’re debating whether GPT-4 might develop consciousness, while machine learning models trained on biased historical data systematically perpetuate discrimination at scale.
The pattern is always the same: regulate the visible, ignore the consequential.
What AI Safety Actually Looks Like
Real AI safety isn’t about preventing science fiction scenarios. It’s about preventing the boring dystopia that’s already here. It’s about ensuring that when an AI system makes a decision that affects someone’s life, that person has:
- The right to know an AI system was involved
- The right to understand how the decision was made
- The right to challenge the decision
- The right to human review
That’s it. Not consciousness tests. Not Turing tests. Not theories about artificial general intelligence. Just basic due process for algorithmic decision-making.
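To make that concrete, here is a minimal sketch of what those four rights might look like as an engineering artifact: a decision record that a system making consequential calls could be required to keep and disclose. Everything in it is hypothetical — the `AlgorithmicDecisionRecord` class, the field names, the 30-day appeal window are illustrations under assumption, not references to any existing standard or statute.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AlgorithmicDecisionRecord:
    """Hypothetical audit record an affected person could request.

    Field names and values are illustrative only; no existing
    regulation or library defines this structure.
    """
    subject_id: str                        # whose life the decision affects
    model_version: str                     # which system made the decision
    decision: str                          # e.g. "loan_denied", "claim_approved"
    made_at: datetime                      # when the decision was made
    key_factors: list[str]                 # plain-language reasons (right to understand)
    automated: bool = True                 # discloses that an AI system was involved
    appeal_filed: bool = False             # right to challenge the decision
    human_reviewer: Optional[str] = None   # right to human review

    def disclose(self) -> dict:
        """Return the subset the affected person is entitled to see."""
        return {
            "automated_decision": self.automated,
            "decision": self.decision,
            "reasons": self.key_factors,
            # Appeal window is an invented placeholder for illustration.
            "how_to_appeal": "Request human review within 30 days",
        }


# Example: what disclosure might look like for a denied mortgage application.
record = AlgorithmicDecisionRecord(
    subject_id="applicant-4512",
    model_version="mortgage-risk-v7",
    decision="loan_denied",
    made_at=datetime.now(timezone.utc),
    key_factors=["debt-to-income ratio above threshold", "short credit history"],
)
print(record.disclose())
```

Nothing in that sketch requires solving consciousness. It requires logging, disclosure, and an appeal path — ordinary software engineering and ordinary administrative law.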
But here’s the problem with basic due process: it’s expensive to implement and impossible to market. You can’t run political ads about “incremental improvements to administrative law.” You can’t get clicks with headlines about “modest transparency requirements for predictive analytics.”
Safety is boring. Safety theater is exciting. Guess which one gets funded.
The Expertise Performance
Watch how these hearings work. Senators ask questions designed to demonstrate their understanding of technical concepts they learned from their staff briefings ten minutes earlier. Tech executives provide answers designed to sound responsive without committing to anything specific. Academic experts explain why the question itself reveals a fundamental misunderstanding of the technology.
Nobody’s actually trying to solve anything. They’re trying to look like they’re trying to solve something.
Meanwhile, the people who actually build and deploy these systems — the data scientists, the machine learning engineers, the platform architects — aren’t in the room. Because the people who understand the technology well enough to regulate it effectively are the same people who would have to live with the regulations they create. And that’s considered a conflict of interest.
This is like regulating aviation safety without involving pilots because pilots have a vested interest in planes not crashing.
The Regulatory Capture That Hasn’t Happened Yet
Here’s what’s particularly interesting about the current AI safety conversation: the industry is asking to be regulated. Which should make everyone immediately suspicious, because industries don’t generally beg for regulatory oversight unless they think they can shape it to their advantage.
The big AI companies want regulation that looks substantial but doesn’t fundamentally threaten their business models. They want safety requirements that are expensive enough to keep competitors out of the market, but flexible enough that they can meet them without changing how they actually operate.
This is why you see proposals for “AI safety research institutes” and “algorithmic auditing frameworks” and “AI ethics review boards.” All of which sound serious and responsible, and all of which can be satisfied by hiring the right consultants and producing the right paperwork.
Meanwhile, the actual safety interventions — requiring algorithmic transparency, mandating human review for consequential decisions, establishing liability for algorithmic bias — get dismissed as “stifling innovation.”
The Athens vs. Sparta Problem
In ancient Greece, Athens had elaborate democratic institutions and Sparta had straightforward military discipline. When they went to war, guess who won?
Democratic deliberation is great for making decisions where you can afford to be wrong for a while. It’s terrible for making decisions where delay itself is dangerous.
The AI safety conversation is happening in Athenian time (committee hearings, expert testimony, public comment periods) while AI deployment is happening in Spartan time (ship the product, iterate based on user feedback, ask forgiveness rather than permission).
This temporal mismatch isn’t an accident. It’s a feature of the system from the industry’s perspective. Every day that regulation is delayed is another day to establish market position, accumulate user data, and create dependencies that make retroactive oversight more difficult.
What Diogenes Would Do
Diogenes once defaced currency to make a point about artificial value. If he were here today, I think he’d ask a simple question: what if we stopped talking about AI safety and started talking about software liability?
We don’t need new laws about artificial intelligence. We need to enforce existing laws about truth in advertising, consumer protection, and civil rights. When a hiring algorithm discriminates, that’s discrimination. When a credit scoring model uses illegal data, that’s illegal data use. When a medical AI misdiagnoses patients, that’s medical malpractice.
The technology doesn’t create new legal categories. It just makes it easier to cause harm at scale.
But here’s the thing about that approach: it would require companies to take responsibility for the decisions their software makes. And it would require regulators to understand how the software works. And it would require the public to care about boring procedural questions instead of exciting theoretical ones.
In other words, it would require everyone to stop performing and start working.
The Honesty That’s Missing
The real AI safety conversation isn’t happening in Congress. It’s happening in corporate risk management meetings, where lawyers are trying to figure out how much liability their companies have for algorithmic decisions. It’s happening in insurance companies, where actuaries are trying to price the risk of AI-related lawsuits. It’s happening in engineering teams, where developers are trying to build systems that won’t destroy their users’ lives.
That conversation is boring, technical, and focused on preventing specific kinds of foreseeable harm. It doesn’t get congressional hearings, because it doesn’t fit the narrative structure of a technological thriller.
But it’s the conversation that might actually keep AI systems from causing the kinds of systematic, large-scale, mundane damage that happens when you deploy powerful tools without adequate safeguards.
The emperor isn’t naked. He’s wearing very expensive clothes that don’t actually protect him from the weather. And everyone’s too polite to point out that fashion isn’t the same thing as function.
🏮
The lantern reveals what’s there, not what we wish were there.