Anyone Can Be Anything (Except Accountable)
Your brilliant, amnesiac coworker doesn't hold the bag — you do.
There are two claims about AI that keep showing up in the same conversation, often from the same person, often in the same breath.
Claim one — anyone can be anything now. The idea of democratized expertise. Anyone can build a product. Anyone can launch a company. Anyone can write a novel, an app, a marketing campaign, a research report. The tools have collapsed the distance between idea and artifact to nearly zero. The thing that took a small team a quarter takes one person a weekend. The democratization is real, and it is everywhere.
Claim two — you're always going to need a software engineer. Or a lawyer. Or a doctor. Or whoever the real practitioner is for whatever domain just got democratized. The case for irreducible expertise. The model gets you 80% of the way there. The last 20% is where everything that matters lives — the part where the code has to run in production, the contract has to hold up, the diagnosis has to be right. You need someone accountable for that part. The protectionism is also real, and also everywhere.
These two claims sound like a paradox. They aren't. They're describing the same situation from two different roles. The interesting thing isn't which one is right (or, for the dark read, which one becomes irrelevant first). It's what falls out when you take them both seriously at the same time.
The first claim is talking about the work.
The second claim is talking about the ownership of the work.
Those are not the same thing, and the entire confusion of this moment lives in the gap between them.
Anyone can be anything if "be anything" means produce work in any domain. That's genuinely true now in a way it wasn't three years ago. A non-engineer can produce code. A non-lawyer can produce a contract. A non-designer can produce a brand. The output of a domain — the artifact itself — is no longer locked behind years of training. It is, at minimum, gettable. Often shippable. Sometimes shippable to paying customers.
But owning that work is a different act. It involves a different set of muscles. Delegation, in its full form, has two pieces — the work itself, and the ownership that goes with it. And the thing nobody quite says out loud — what both camps in the debate quietly know — is that the second piece doesn't travel. You cannot delegate ownership.
Not to a contractor. Not to an agency. Not to AI. Not to anyone.
You can delegate the work. You can delegate decisions inside the work. You can delegate the artifact production almost entirely. But the moment something breaks in production at 2 a.m., the moment the contract gets challenged, the moment the diagnosis turns out to be wrong — the person who is accountable is the person whose name is on the thing. The work moved. The ownership did not.
You can delegate the work. You cannot delegate the ownership.
This is so obvious it sounds like a banality, until you watch what happens when someone forgets it.
I've argued elsewhere that the most useful posture toward AI is to treat it like a direct report, not like a tool. To brief it the way you'd brief a person, hand off the work, check what came back, redirect when it drifts. The frame is right. The shift from typing at a search bar to delegating to a team is the single largest unlock in personal AI fluency, and it isn't close.
The CEO frame works because delegation is real. AI is genuinely doing the work. It's compressing your time the way an organization compresses a CEO's time — what would take a quarter alone takes a week. AI runs the same play on a faster clock: what would take a week takes minutes.
But the CEO frame contains a trap that's easy to walk into and difficult to see while you're inside it.
The trap is that most people, when they first start operating in CEO mode with AI, also delegate the ownership.
It happens implicitly. You delegate the code, and somewhere along the way you start treating the code as the AI's code. You delegate the email draft, and somewhere along the way the words coming out of your mouth in the meeting are words you didn't read carefully. You delegate the analysis, and somewhere along the way the recommendation you're making to your client is a recommendation you can't defend if they ask you the obvious follow-up question.
The CEO frame doesn't fail at the level of did the work get done. It fails at the level of whose work is it, really. The output exists. The accountability has quietly vanished, because nobody picked it up on the other end.
A real CEO doesn't have this problem, because the people they delegate to also own their piece. The engineer who writes the code owns the code. The lawyer who drafts the contract owns the contract. The reviewer who signs off on the diagnosis owns the sign-off. Accountability gets distributed, but it never disappears from the system. Every node in a healthy org chart is held by a person who would, if pressed, defend the decisions inside their node.
AI isn't a node. AI compresses your time, beautifully — but that's only the first piece of the delegation act. The ownership doesn't pick up at the other end, because there's no other end to pick it up. The Dory frame from the short-term memory piece is the structural reason — your brilliant coworker resets between sessions, and the only thing in the system that doesn't reset is you. The persistence of accountability is, almost by definition, the part of the work the amnesiac can't take from you. Whatever survives between sessions is yours. Ownership is the thing that has to survive every session.
The CEO frame is right. It's also incomplete. The complete sentence is: treat it like a direct report — and keep the ownership in your hands, because the report doesn't have hands to hold it.
It's worth saying what ownership actually consists of, because once you can see it, the trap becomes much harder to fall into.
Ownership is the part you can't outsource even when you outsource everything else. Concretely, in the AI-delegation case, it's at least these four things:
Taste. Knowing, when the draft comes back, what's good and what's not. Not just whether it solved the surface ask, but whether it solved the right ask, in the right register, in a way you'd be willing to put your name on. AI can produce a coherent draft of almost anything. It cannot tell you whether that coherent draft is the right one for your audience and your moment. That call is yours.
Judgment about scope. Knowing what's in the work and what isn't. Knowing when the brief was wrong — when what you asked for is no longer what you should be doing. AI will execute the brief beautifully. It will not, usually, refuse the brief on grounds that the brief itself is the problem. That refusal is yours.
Standing behind it when it breaks. Being the person who answers the email when the customer is upset, the deposition when the contract is challenged, the postmortem when the system goes down. The person whose name is on the thing is the person who is on the hook when the thing fails. AI's name is never on the thing. Yours is.
Caring enough to verify. The willingness to do the unglamorous reading-back: trace what was actually shipped, check the load-bearing claims, read the diff. Ownership requires the small daily act of closing the loop — confirming that what you said you'd produce is what got produced. The CEO who never reads the work their team produces is not, in any meaningful sense, the CEO of that work. They're a routing layer for it.
These four together are what makes you the owner. Strip any one of them out — especially the last two — and ownership has, quietly, leaked out of the system. The artifact still exists. Nobody owns it.
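To make the fourth piece concrete, here's what closing the loop can look like on an AI-written code change. This is a minimal sketch, assuming a git repo and a pytest suite; the file names are hypothetical stand-ins for whatever your brief actually promised.

```python
import subprocess

# Hypothetical: the files the brief said would change.
INTENDED_FILES = {"app/webhooks.py", "tests/test_webhooks.py"}

def changed_files() -> set[str]:
    """What actually shipped: files touched relative to HEAD."""
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return {line for line in out.stdout.splitlines() if line}

def close_the_loop() -> None:
    unexpected = changed_files() - INTENDED_FILES
    if unexpected:
        # The diff wandered outside the brief. Read those files yourself.
        raise SystemExit(f"Unbriefed changes: {sorted(unexpected)}")
    # "It works" is a load-bearing claim. Check it; don't take it on faith.
    subprocess.run(["pytest", "-q"], check=True)

if __name__ == "__main__":
    close_the_loop()
```

None of this is sophisticated. That's the point: the verification half of ownership is mostly the discipline of running checks like these every single time, not the depth of the checks themselves.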
If you've been around any AI-built software in the last year, you've seen this shape. A founder who shipped a working prototype that they can no longer fix. A consultant who handed over a deliverable they can no longer defend in a follow-up call. A team that pushed a feature whose details no one on the team understands. The work moved. The ownership leaked. The bill comes due later, sometimes much later, but it comes due.
Here's where the protectionist sentence — you'll always need a software engineer — earns its keep, but also where it earns its expiration date.
Today, in 2026, there is a real floor on what an AI-delegating non-specialist can fully own in production. That floor is mostly about the last two ownership components. The first two — taste and scope judgment — travel with the human. A smart product person has product taste regardless of whether they can write code. A sharp legal mind has legal judgment regardless of whether they personally typed the contract.
The last two — standing behind it when it breaks and caring enough to verify — are where domain depth still does load-bearing work. Standing behind production software at 2 a.m. requires being able to read the logs and form a real hypothesis about what failed. Caring enough to verify requires being able to do the verification — to read the code, to catch the bug the model didn't realize was a bug, to distinguish between a contract that looks correct and one that holds up under adversarial reading. That part still requires the practitioner. The CEO-mode non-engineer can ship the feature. They can't, today, fully own it through its life.
This is what the protectionist sentence is pointing at, even when the person saying it can't articulate it. They're not wrong. They're noticing that the ownership floor hasn't moved as fast as the work floor.
And here's the part that makes the moment interesting: the ownership floor is also moving. Not as fast as the work floor, but in the same direction.
The agentic tooling that lets you produce the work is also, gradually, getting good enough to support the verification of the work. Better tests, better self-checks, better fault detection, better forensic tooling, better-instrumented systems that surface what broke and why. Each generation chips away at the depth required to credibly hold the standing behind it and caring enough to verify sides of ownership. A non-engineer can't credibly own production software end-to-end today. The honest read is that the line where they can is moving toward them faster than most engineers want to admit and slower than most non-engineers want to believe.
This is the same dynamic I named in the fluency essay — every level rising; the ceiling moving down. But measured in ownership rather than capability. The ceiling that's moving down isn't what you can produce. It's what you can be the accountable person for. The Conversationalist citizen-builder will soon own, end-to-end, things that only the Native Speaker engineer can responsibly deploy today. That's the trajectory. The current floor is real. It is also temporary.
There's a particular kind of pushback I keep hearing from software engineers I respect — people whose craft I admire, some of whom I've worked beside for years. The pushback goes like this: I tried AI on a real task last week. It got it 80% right, but the last 20% was wrong in subtle ways. I had to undo it all and rewrite it by hand. It would've been faster to just do it myself. The tool doesn't work for serious engineering work.
I want to be direct here, because I think the version of this story that gets repeated in engineering culture is wrong in a way that's actively costing the people telling it.
Underneath the story is usually a real feeling — fear of obsolescence, fear of identity loss, fear of the craft atrophying if the keys aren't in your hands anymore. Who am I if I'm not the one writing the code? I don't think those fears are stupid. I think they're the honest reason a smart, experienced person looks at a tool that would compress their work by ten or a hundred times and finds a way to dismiss it after one try. The dismissal is the easier path. The fear underneath it is the harder thing to look at.
But the story itself — I tried it, it didn't work, I had to redo it by hand — is almost always a story about using the tool wrong.
If you used AI as a fancier autocomplete and got mediocre results, you discovered something true: AI is a mediocre autocomplete. Autocomplete and thought-finishing aren't where the leverage is. They aren't even where the interesting part is. Using AI that way is like buying a forklift and using it to lift one box at a time by hand. The forklift didn't fail. The frame did.
The 10x-to-100x shift — and it really is that big for some kinds of work — comes from one specific move: getting your hands out of the codebase and learning to delegate. Brief AI on the change the way you'd brief a sharp junior engineer. Hand off the work. Read the output critically. Redirect when it drifts. Verify the load-bearing parts yourself. That's the loop. It feels strange the first time. It feels strange the tenth time. By the hundredth time, you're doing things in an afternoon that used to take a week, and your hands aren't getting calluses from typing every character.
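If it helps to see the loop as structure rather than vibes, here's a sketch. Everything named in it is a hypothetical stand-in: ask_agent for your actual agent interface, passes_critical_read for your taste, verify_load_bearing_parts for the checks you run yourself. The shape is the point, not the API.

```python
def ask_agent(brief: str) -> str:
    """Stand-in for handing the brief to an AI agent."""
    return f"[draft for: {brief}]"

def passes_critical_read(draft: str) -> bool:
    """Stand-in for reading the output critically; taste lives here."""
    return draft.startswith("[draft")  # placeholder judgment

def verify_load_bearing_parts(draft: str) -> bool:
    """Stand-in for the checks you run yourself: tests, diffs, claims."""
    return True  # placeholder; in practice, the unglamorous part

def delegate(brief: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        draft = ask_agent(brief)              # hand off the work
        if not passes_critical_read(draft):
            brief += "\nRedirect: the draft drifted from the actual ask."
            continue                          # redirect when it drifts
        if verify_load_bearing_parts(draft):
            return draft                      # ship it, and own it
    raise RuntimeError("No draft worth putting your name on.")

print(delegate("Add retry logic to the payment webhook handler."))
```

The strangeness the paragraph above describes lives mostly in that continue branch: redirecting with words instead of grabbing the keyboard.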
The objection that lives underneath this — but if I'm not typing the code, am I still an engineer? — is the real question. And I'll say what I actually think: the engineer who deeply understands the work and knows how to delegate is a powerhouse. The engineer who only understands is becoming a smaller and smaller version of the role. Not because their craft got worse. Because the ceiling moved.
There's a case where "AI doesn't work for serious engineering" is honestly true in 2026, for specific work — heavily distributed systems with weird coupling, codebases where verification cycles are days long, domains where the conventions are so particular that a model can't carry them across sessions yet. The engineer who works on those things and reports back honestly isn't who I'm pushing on. The engineer I'm pushing on is the one who tried autocomplete-with-extra-steps once, hit the wall every tool hits when used that way, and concluded that the wall was the tool.
It wasn't.
Actually — wait. Let me walk back the carve-out I just gave you, because I don't really mean it. The right workflows and harness can absolutely help with large distributed architecture problems. The engineering work doesn't disappear at that scale; it shifts. It becomes context management, memory management, and shaping the problem so AI can comprehend the surface area and iterate on it. You move from writing the code to engineering the conditions under which the code gets written. That's especially true on brownfield systems — the ones built on top of something that already exists — where the institutional knowledge AI needs is sitting in your head, and the move is to externalize it deliberately, not to give up.
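At its most minimal, externalizing it can look like a conventions file you maintain by hand and prepend to every hand-off. A sketch, with a hypothetical file name and brief format; the move is that the knowledge leaves your head and survives the coworker's resets.

```python
from pathlib import Path

# Hypothetical file: house knowledge written down on purpose.
CONVENTIONS = Path("CONVENTIONS.md")

def build_brief(task: str) -> str:
    """Prepend externalized institutional knowledge to every hand-off,
    so the coworker who resets between sessions gets it fresh each time."""
    conventions = (
        CONVENTIONS.read_text(encoding="utf-8")
        if CONVENTIONS.exists()
        else "(nothing written down yet -- that is the gap to close)"
    )
    return (
        "House conventions. Read before touching anything:\n"
        f"{conventions}\n\n"
        f"Task:\n{task}\n"
    )

print(build_brief("Migrate the billing cron to the new queue."))
```

Every session starts from the file, not from memory. That's the whole trick.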
I'm saying this with care, not contempt. I know the people who hold this position. I love some of them. I'm pushing on it because the cost of staying there compounds quickly, and because the version of these engineers who learns to delegate is more powerful than the version that doesn't — not in some abstract competitive sense, but in the specific, observable sense of what they can build in a Saturday. I want that for them. I don't want them to find out in three years that they spent the most important window of this shift defending a strawman.
The delegator who also deeply understands is an enigma right now, because the role doesn't have a clean name yet and the pattern is still settling. That's a feature. The window is open. The engineers who walk through it now are the ones the next decade will be built on.
Up to here I've been making the pragmatic version of the argument: AI can't credibly hold ownership today, and while the trajectory lets it do more, the human stays on the hook for the parts that matter most.
I want to make a stronger version now, because I think it's the one that's true.
It's not just that AI can't hold ownership of the work that affects the real world. It's that we should never want it to.
I know how this sounds. The moralizing register isn't usually mine. But I think the stakes here are larger than the productivity argument, and I think they get clearer the more you sit with them.
Owning outcomes in the real world is the thing that keeps us us. It is the muscle that produces taste, judgment, integrity, courage, craft. Every one of those qualities is forged in the act of standing behind something that could fail and would be yours if it did. You don't become a good engineer by writing code an agent could have written. You become a good engineer by being the person who has to fix it when the pager goes off — and growing, painfully and permanently, into someone who can. You don't become a good leader by having decisions made for you. You become one by making them and bearing what comes next. The growth lives inside the ownership. You can't have one without the other.
This is the branch point I think we're actually standing at, and I think the stakes are civilizational, not just personal. I'm going to sound dramatic for a paragraph. I think it's warranted.
The pessimistic branch is the WALL-E future. We delegate the work, then we delegate the ownership, then we delegate the responsibility for caring at all — and the result is exactly what that film makes literal: humans as soft, drifting consumers of outputs we no longer produce, judge, or stand behind. Comfortable. Diminished. Unrecognizable to ourselves in a generation or two. Nobody chose it on purpose. We let it leak out one small delegation at a time, because each one felt convenient.
The optimistic branch is the one where AI becomes the enormous compressor of time and labor it's already starting to be, and we use that compression to take on more of what only we can do. More ownership, not less. Bigger outcomes we can stand behind. More of the kind of work that grows the people doing it. The compression frees us from the work that was crowding out the work that mattered. We get closer to something like gods of our own attention — not because the tools made us so, but because we used the tools to do more of the thing that makes us most ourselves.
Which branch we end up on isn't going to be decided by the tools. The tools are agnostic. It's going to be decided by what we choose to keep holding. Ownership is the invitation to remain ourselves. The pragmatic argument and the moral one converge at exactly this point: the part of the work AI can't credibly carry is also the part that, if we let it go, we lose something of ourselves we cannot get back.
This is why you can delegate the work but not the ownership isn't just a piece of operational advice. It's the principle for how a person stays a person — and how a species stays a species — while the tools keep getting better.
Here's the practical synthesis — useful, not just descriptive.
If you're tempted by "anyone can be anything": you're right about the work. Be honest about the ownership. Pick the domains where you can credibly hold all four pieces — taste, scope judgment, standing behind it, caring to verify — and lean in. Don't pick domains where the last two require depth you don't have and aren't building. Or, if you do pick those domains, partner with someone who holds the depth, and split the ownership explicitly. Don't pretend the depth isn't required just because the artifact came out clean. The artifact coming out clean is the easy part now.
If you're tempted by "you'll always need a software engineer": you're right about today. Be honest about the trajectory. The floor that protects the role is real and it is also moving. The defensible version of your role isn't the only one who can produce the work. It's the one who can credibly hold ownership when it matters most. Lean into the ownership skills — the verification, the judgment, the standing behind it under pressure. Those are the pieces that travel even as the production work commoditizes. They're also, not coincidentally, the pieces AI is least equipped to take over.
If you're somewhere in the middle, watching this play out: the role you pick determines the leverage you get long before fluency does. CEO mode is the right posture. Treat the work as delegate-able. Treat the ownership as non-negotiable, yours, and the thing that determines what you can credibly ship. The mistake to avoid in either direction is the same mistake: confusing the production of the work with the ownership of it.
The work moved. The ownership did not. The whole game is figuring out what to do about that.
The brilliant amnesiac at the other end of the chat is doing more of the work every month. That's the part everyone is shouting about, in both directions. The part nobody is shouting about — and the part that quietly determines who's building durable things versus who's pushing artifacts they can't defend — is who's holding the bag while the work compresses.
Try to hand the bag off when something breaks and the receiving end apologizes — beautifully, with full understanding of what went wrong — and then moves on. Same apology regardless of stakes, whether the failure cost you a weekend or a livelihood. Next session, no memory of the breakage. The bag was always in your hands.
That's the question worth asking when you sit down to use this stuff seriously. Not can the AI do it? It almost certainly can, now or soon. The real question is am I the kind of person who can own what comes out the other side? If yes, the leverage is enormous. If not, the leverage is borrowed, and the bill is coming.
And the bill, if enough of us let it pile up at scale, isn't paid in dollars. It's paid in what we become — or stop becoming — on the way to whichever future we end up choosing without quite noticing we were choosing it.
Hold the ownership. Hold the humanity. The rest is leverage.