We Will Be Driving Autonomously Long Before Any AI Provider Is Liable for Generated Code
Reading time: 5 minutes

At some point, no human will need to be behind the wheel anymore. That’s not a question of if, but when. Yet while autonomous vehicles are on their way to being fully accepted — socially and legally — will an AI provider ever take on liability for generated code? Very unlikely — and for one fundamental reason: the difference between a closed and an open system.
The Steering Wheel Can Be Handed Over — Because Roads Have Rules
Road traffic is complex, but it is bounded. Traffic signs, lane markings, right-of-way rules, braking physics — the domain of autonomous vehicles follows a finite rulebook. There are edge cases, of course: bad weather, unpredictable pedestrians. But the defining characteristic remains: statistics can win. If an autonomous system demonstrably causes accidents ten times less frequently than a human driver, the data will eventually become undeniable. And where risk is calculable, it is insurable. Insurance follows mathematics — not philosophy.
We already know this principle: aircraft autopilots and automated rail systems have long carried responsibility for human lives, because their superiority is statistically proven and embedded in regulation. The automobile will follow.
Code Knows No Roads
With AI-generated code, this logic doesn’t hold. There is no rulebook that covers all cases. Every software project is unique — with its own architecture, its own dependencies, its own business context, and above all: its own human requirements. And that is precisely the problem. Code is only ever as good as the requirements it’s built on. What “correct” or “complete” means is not decided by a model — it’s decided by the person who formulates the task, the developer who evaluates the output, and the company that puts the system into operation.
The chain of responsibility is therefore broken at its very foundation: requirements, prompt, generation, review, deployment — at exactly which point is an AI provider supposed to be liable? The risk of damage is also fundamentally open-ended: data loss, system failures, security vulnerabilities, privacy breaches, business interruptions. A tail risk with no ceiling. No insurance logic in the world can meaningfully calculate this domain.
An Example That Says It All
In March 2025, an AI agent based on Claude Code permanently deleted 2.5 years of content from a learning platform. The incident made clear what has long been written into the terms and conditions of all major AI providers: no liability. Not for data loss, not for system failures, not for the consequences of generated code. This is not an oversight, nor a temporary state of affairs. It is the only rational position for a provider selling tools — not placing contractors.
Software Developers Are Not Being Replaced — They’re More Essential Than Ever
At this point, it’s worth directly addressing a widespread misconception: AI does not make software developers redundant. Quite the opposite.
Precisely because no AI provider will ever be liable for generated code, a human must bear this responsibility — whether they like it or not. And that human is the software developer. They are the ones who question requirements, evaluate generated code, identify security risks, take responsibility for architectures, and must answer for things when damage occurs. AI accelerates the writing of code. It does not replace the understanding of systems, the weighing of tradeoffs, or the judgment about what a piece of software can do in a specific environment.
Anyone who believes a company can simply type in prompts and receive finished, responsibly deployable software has missed the fundamental point: complexity shifts — it doesn’t disappear. Yesterday’s implementation effort becomes tomorrow’s effort in requirements, review, and oversight. The developer who understands what they’re reviewing, what they’re deploying, and what they’re liable for becomes more valuable, not cheaper.
Why This Will Stay That Way
An AI provider that accepted liability for generated code would have to assume responsibility for systems over whose requirements, design, and operation it had absolutely no influence. That is structurally unsolvable — regardless of how good the models become. Unlike with autonomous driving, there is no benchmark for “better code than a human in this specific context.” The superiority is not measurable, therefore not insurable, therefore not a basis for liability. Without legislative compulsion — and that is still a long way off in the software domain — there is no market mechanism that will change this.
What This Means in Practice
AI-generated code is a powerful tool. But it remains a tool — not a contractor, not a service provider with a warranty. Anyone deploying AI agents productively bears full responsibility: for reviews, for tests, for rollback mechanisms, and for clearly limited permissions. The decisive question is no longer “Can the AI do this?” but: “Is the AI allowed to do this — and who bears the consequences?”
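What “clearly limited permissions” can look like is easiest to show in code. The following is a minimal sketch, not any real agent framework’s API — the names `AgentAction` and `PermissionGate` are hypothetical. The point is the design: deny by default, allow only an explicit list of action types on explicit resource prefixes, and keep destructive operations out of the allowlist entirely so they stay with a human.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentAction:
    """One thing the agent wants to do (hypothetical model, for illustration)."""
    kind: str    # e.g. "read", "write", "delete"
    target: str  # path or resource the action touches

class PermissionGate:
    """Deny by default; only explicitly allowed (kind, path-prefix) pairs pass."""

    def __init__(self, allowed: set[tuple[str, str]]):
        self.allowed = allowed

    def check(self, action: AgentAction) -> bool:
        # An action passes only if some allowlist entry matches both
        # its kind and a prefix of its target.
        return any(
            action.kind == kind and action.target.startswith(prefix)
            for kind, prefix in self.allowed
        )

# The agent may read the repo and write only inside a sandbox directory.
# "delete" appears nowhere in the allowlist, so it is always refused.
gate = PermissionGate({("read", "/repo/"), ("write", "/repo/sandbox/")})

print(gate.check(AgentAction("write", "/repo/sandbox/patch.py")))  # True
print(gate.check(AgentAction("delete", "/repo/content/")))         # False
```

The same deny-by-default idea applies whether the gate is a code wrapper, a container with a read-only filesystem, or a database role without `DROP` rights: the agent’s reach is bounded before it acts, rather than audited after.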
Autonomous cars will eventually get by without a human driver, because statistics and regulation make that possible. Software developers, on the other hand, will have to bear responsibility for their systems for a very long time to come — AI or no AI. Not because the tools are too poor. But because nobody else is willing to vouch for the result.