Every Employee Gets an AI Team. Now Do the Maths
Give every employee 5-9 AI agents and the org chart explodes. Here’s what maximum augmentation actually looks like, and why the ceiling is lower than you think.
In the previous post, I argued that the span of control problem applies to AI agents just as much as it applies to people. A single human can effectively oversee somewhere between five and nine direct reports before quality degrades. Swap “direct reports” for “AI agents” and you hit the same wall.
But I stopped short of following that logic to its conclusion. So let’s do that now. Let’s take a typical business hierarchy, give every single employee their own squad of AI agents, and see what the org chart looks like when you’re done.
The numbers are instructive. And a bit terrifying.
The org before agents
Picture a fairly standard mid-size tech company. Nothing exotic. A CEO at the top, a handful of VPs, directors beneath them, engineering managers, and individual contributors doing the actual work. Say 200 people, structured roughly like this:
1 CEO, 5 VPs, 25 Directors, 75 Managers, 94 ICs.
That’s a real org. You could draw it on a whiteboard. Everyone has a reporting line, a rough sense of what their peers are doing, and enough context to make decisions at their level. The whole thing is held together by meetings, Slack threads, and the quiet institutional knowledge that accumulates when humans work alongside each other long enough.
Now add agents.
The org after agents
Give every employee a full squad: 7 AI agents each, the midpoint of the 5-9 range. Not aspirational. Not "some day." Just what span of control research says a single person can meaningfully oversee.
That 200-person company just acquired 1,400 AI agents. The workforce, measured in producing entities, went from 200 to 1,600 overnight: eight times the size. And not a single new human was hired, not a single new manager, not a single new review process, not a single new architectural governance meeting.
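The arithmetic is simple enough to sketch in a few lines of Python. The layer counts and the 7-agents-per-person figure are the illustrative numbers from above, nothing more:

```python
# Hypothetical 200-person org from the example above, layer by layer
layers = {"CEO": 1, "VP": 5, "Director": 25, "Manager": 75, "IC": 94}

AGENTS_PER_HUMAN = 7  # midpoint of the 5-9 span-of-control range

humans = sum(layers.values())        # 200 people
agents = humans * AGENTS_PER_HUMAN   # 1,400 agents
entities = humans + agents           # 1,600 producing entities

print(f"{humans} humans + {agents} agents = {entities} entities")
print(f"Workforce multiplier: {entities / humans:.0f}x")
```

Run it and you get the headline numbers: 1,600 producing entities, an 8x multiplier, for the same 200 salaries.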
Let that land for a second.
The multiplication problem
Here’s where it gets properly uncomfortable. The agent count doesn’t just add. It multiplies as you climb the hierarchy.
Take one of those VPs. They have 5 directors reporting to them. Each director has 3 managers, and the ICs sit beneath that layer. At this company's shape, the VP's subtree holds roughly 40 humans, every one of them running 7 agents. The VP is now indirectly responsible for the output of roughly 270 AI agents, funnelled through 20 directors and managers. Every architectural decision those agents make, every dependency they introduce, every shortcut they take in a test suite because the prompt wasn't specific enough: all of it rolls uphill.
At the CEO level, the maths is simple but striking. One human, sitting at the apex of 1,400 agents, with the same meeting cadence, the same quarterly review cycle, the same 24 hours in a day they had before any of this started.
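You can roll the agent counts up the tree to see how responsibility concentrates at each level. This sketch uses the layer counts from earlier; the assumption that reports are spread evenly across each layer is mine, so the per-level figures are illustrative:

```python
AGENTS_PER_HUMAN = 7
layers = [("CEO", 1), ("VP", 5), ("Director", 25), ("Manager", 75), ("IC", 94)]

downstream = {}
remaining = sum(count for _, count in layers)
for role, count in layers:
    remaining -= count           # humans sitting below this layer
    subtree = remaining / count  # humans beneath one person at this level
    downstream[role] = subtree * AGENTS_PER_HUMAN

for role, agents in downstream.items():
    print(f"One {role} -> ~{agents:.0f} agents downstream")
```

One VP sits above roughly 270 agents; the CEO sits above roughly 1,390, and adding their own 7 gets you back to the 1,400 figure. Notice how fast the number collapses below director level.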
The org chart didn’t just get wider. It got deeper, denser, and dramatically harder to reason about.
Not all branches grow the same
Here’s the thing nobody’s modelling yet: AI augmentation is wildly uneven across roles.
A senior backend engineer running 7 coding agents might genuinely approach that ceiling. They’ve got well-scoped tasks, clear acceptance criteria, established patterns to follow. The agents write boilerplate, generate tests, scaffold services. The engineer reviews, refines, and makes judgment calls. It works. It’s not magic, but it works.
Now walk over to the legal team. Or HR. Or the office manager. Give them 7 AI agents each and ask yourself honestly: what are those agents doing? Drafting policies? Summarising documents? Maybe. But the leverage ratio is completely different. The nature of the work doesn’t decompose into parallelisable units the same way software engineering does.
So in practice, your org chart doesn’t balloon uniformly. Engineering explodes. Product grows moderately. Finance gets a few helpful copilots. Marketing probably gets the most visible output increase but with the highest quality variance. And the support functions barely change.
The result is a lopsided tree. Some branches are groaning under the weight of agent output. Others look basically the same as they did six months ago. And the humans trying to coordinate across these asymmetric branches are dealing with a communication and context problem that didn’t exist before.
The frozen middle
This is where I think the real pain lives. Not at the top, not at the bottom. In the middle.
Individual contributors get augmented. They’re writing less boilerplate, shipping faster, spending more time on design and review. That’s the promise, and for well-scoped work, it delivers.
Executives get dashboards, summaries, and synthesised reports. They’re making decisions with better information, faster. Good for them.
But the engineering managers and directors caught between those two layers? They’re absorbing the blast radius from both directions. More output flowing up from their augmented ICs means more to review, more to coordinate, more architectural decisions to validate. More demand flowing down from leadership means tighter timelines, higher expectations, and the implicit assumption that “your team has agents now, so this should be faster.”
The middle layer didn’t get 7 agents that help them manage. They got 7 agents that help them produce, while their actual job (the coordination and oversight and judgement calls) got significantly harder. An engineering manager’s agents can draft design docs or write code, sure. But the manager’s real job is maintaining coherence across their team’s output. No agent is doing that yet.
The frozen middle isn’t a new concept. Organisations have talked about it for decades. But AI augmentation is about to make it acute in a way that spreadsheet automation and Jira dashboards never did.
The ceiling is real
So here it is. The maximum organisational throughput increase you can reasonably expect from individual AI augmentation, assuming everyone sustains a full squad of 7 agents, is roughly 8x. That's not nothing. For many companies, an 8x output multiplier would be transformative.
But it’s also not infinite. It’s not 100x. It’s not “we can fire 80% of the company.” It’s bounded by the same human bottlenecks that have bounded organisational performance since the first person tried to coordinate the work of more than nine others.
And 8x is the theoretical maximum. The practical number is lower. Not everyone’s work decomposes cleanly. Not every role benefits equally. Not every human is equally effective at managing agent output. Some will run 3 agents well. Some will run 7 badly. The distribution matters.
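You can put rough numbers on that intuition with a toy simulation. Everything in it is assumed for illustration: the spread of agent counts, the effectiveness range, and the simplification that a fully effective agent equals one human-unit of output:

```python
import random

random.seed(42)  # deterministic toy run

EMPLOYEES = 200
total_output = 0.0
for _ in range(EMPLOYEES):
    n_agents = random.randint(2, 7)           # not everyone hits a full squad
    effectiveness = random.uniform(0.3, 1.0)  # review overhead, rework, waste
    total_output += 1 + n_agents * effectiveness  # the human plus their agents

multiplier = total_output / EMPLOYEES
print(f"Effective multiplier: ~{multiplier:.1f}x")  # noticeably below 8x
```

With these made-up distributions the multiplier lands around 4x, roughly half the theoretical ceiling. The exact figure depends entirely on the assumptions; the point is the shape of the result, not the number.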
What this doesn’t cover
There’s a conspicuous gap in this analysis, and I’m leaving it open deliberately.
Everything above assumes agents augment individual humans. One person, their agents, their output. The span of control model applied at the node level. But what happens when you move beyond that model entirely? When agents coordinate with other agents? When the hierarchy itself becomes partially automated, with AI systems managing workflows, allocating tasks, and making routing decisions that currently require a human in the loop?
That’s a different article. And it’s a much more interesting one. Because the ceiling I’ve described here, the 5-9 agents per human constraint, is only a ceiling if humans remain the sole coordination layer. If you can build reliable agent-to-agent orchestration with meaningful quality gates, the maths changes completely.
But that’s not where most organisations are today. Today, we’re in the “give everyone an AI assistant and see what happens” phase. And what happens is bounded, uneven, and a lot more complicated than the pitch deck suggests.
The span of control problem hasn’t gone anywhere. We’ve just given it a lot more to control.