Agentic Operating Model · Part 5

AI-native teams: scaling through systems, not headcount

When a company decides to scale, it hires more people.

More analysts to cover more metrics. More engineers to build more features. More support staff to handle more tickets. The underlying logic has always been the same: capacity lives in qualified people. You need more output, you find more talent with the right skills, and you grow the team around them.

That model made sense for a long time. Expertise was scarce, knowledge lived in individuals, and the only way to do more was to bring more of those individuals in.

AI-native teams are built around a different premise.

Capacity still comes from expertise — but expertise can now be encoded, deployed, and scaled through agents. One person with deep domain knowledge can operate at a reach that would have previously required a team. The unit stays small. The output grows.

In the previous part of this series, I argued that the AI-native company will be built from small autonomous units — each owning a metric, a feature area, or a meaningful business outcome end to end. In this part, I want to go one level deeper: what does that unit actually look like from the inside? Who is in it, what do they do, and what kind of thinking do they bring?

Because the real shift is something the org chart cannot show: the profile of the people it is built around.


The person who holds direction

Every autonomous unit needs someone who owns the answer to a simple question: where is this going, and why does it matter?

That is the product vision role. It might sit with a formal product manager, or with a technical lead who has grown into that function. The title matters less than the responsibility: understanding the customer, holding the direction, making trade-offs, and keeping the domain connected to the broader portfolio it belongs to.

On the surface, this role looks unchanged. And in some ways it is — the core judgment required here has always been hard to systematize and will stay that way. Knowing what the customer actually needs, deciding what to build and what to leave out, navigating organizational context — none of that gets easier with AI.

What changes is the medium through which that judgment gets expressed.

In a team where part of the work is carried out by agents, the product vision role has to translate intent into something a system can act on: precise specifications of what the desired outcome looks like, what constraints apply, what counts as good enough, and where the system should stop and ask a human.

A product manager defining a triage agent does not write a user story. They write rules: which signals indicate urgency, which cases the agent can resolve autonomously, and which require a human before anything moves. That is system design, and it requires a different kind of thinking.
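A rule set like that can be written down almost directly as code. The sketch below is purely illustrative: the signal names, the `TriageRule` shape, and the action labels are invented for this example, not taken from any real ticketing system or agent framework.

```python
# Hypothetical triage policy for a support agent. Signal names,
# rule names, and actions are invented for illustration.
from dataclasses import dataclass

URGENT_SIGNALS = {"outage", "data_loss", "security", "payment_failure"}

@dataclass
class TriageRule:
    """One rule: which signals it matches, and what the agent may do."""
    name: str
    signals: set
    auto_resolve: bool        # agent may act without a human
    escalate_to_human: bool   # stop and hand off before anything moves

RULES = [
    TriageRule("urgent-incident", URGENT_SIGNALS,
               auto_resolve=False, escalate_to_human=True),
    TriageRule("password-reset", {"password_reset"},
               auto_resolve=True, escalate_to_human=False),
]

def triage(ticket_signals: set) -> str:
    """Return the action the agent is allowed to take for a ticket."""
    for rule in RULES:
        if ticket_signals & rule.signals:
            if rule.escalate_to_human:
                return "escalate"
            if rule.auto_resolve:
                return "resolve"
    # Default: anything the rules do not cover waits for a human.
    return "queue_for_review"

print(triage({"outage"}))          # escalate
print(triage({"password_reset"}))  # resolve
```

The interesting part is not the code itself but what writing it forces: the default branch makes the PM decide, explicitly, what happens to every case the rules do not cover.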

Specification has always mattered. In an AI-native team, it becomes the primary way product thinking gets operationalized. The quality of what the team builds — including what its agents do — traces back directly to how clearly intent was defined at the start.

That is a different kind of craft than writing PRDs or running sprint reviews. It is closer to system design than traditional product management. And it is one of the clearest ways the product vision role evolves in this new structure.


The expert who manages agents

This is where the structure of an AI-native team diverges most clearly from what we are used to.

Take an analytics function as an example. In a traditional setup, a senior analytics engineer holds deep knowledge of how a metric is calculated, what influences it, where the data breaks down, and how to interpret edge cases. That knowledge is valuable — and scarce. So the default move is to build a team around it: junior analysts, supporting roles, people who help carry the load.

In an AI-native team, that same senior analytics engineer scales through agents.

[Figure: Senior analytics engineer scaling through agents in an AI-native team]

The knowledge stays with one person. The reach expands through systems that person builds, trains, and manages. A single lead can cover the analytical surface area that previously required several people — because the repetitive, well-defined parts of the work are handled by agents operating under their supervision.

This changes what expertise looks like in practice.

The analytics lead in this structure is responsible for the quality of the domain — same as before. But the way that responsibility gets exercised is different. They define the logic. They encode the rules. They set the evaluation criteria. They monitor for degradation. They decide when an agent’s output is reliable enough to act on, and when a human needs to step back in.

The skills required reflect that shift. Deep domain knowledge remains the foundation. On top of it, this profile needs a working understanding of how to actually build and operate agents: how to design prompts that produce consistent outputs, how to structure tools and skills an agent can call, how to define the boundaries of what an agent is allowed to do on its own.
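Defining the boundaries of what an agent may do on its own often comes down to an explicit allow-list on its tools. A minimal sketch, assuming a hypothetical schema (this is not any specific framework's API):

```python
# Hypothetical tool registry with an explicit permission boundary.
# Tool names and the schema shape are invented for illustration.
TOOLS = [
    {
        "name": "run_sql_query",
        "description": "Run a read-only query against the warehouse.",
        "allowed": True,    # agent may call this autonomously
    },
    {
        "name": "update_dashboard",
        "description": "Publish a change to a shared dashboard.",
        "allowed": False,   # requires human sign-off first
    },
]

def can_call(tool_name: str) -> bool:
    """Boundary check the runtime applies before every tool call."""
    return any(t["name"] == tool_name and t["allowed"] for t in TOOLS)

print(can_call("run_sql_query"))    # True
print(can_call("update_dashboard")) # False
```

The point of the sketch is the asymmetry: reading is delegated, writing is not, and that split is a domain decision only the expert can make.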

But two areas matter more than anything else at this level of ownership.

The first is evaluation. An agent managing analytical work needs to be right reliably — and “it looked good in testing” is not sufficient. The lead needs to define what good output actually looks like, build eval sets that cover real domain edge cases, and run those evals continuously as the agent evolves. Evaluation is how the lead maintains quality without reviewing every output manually.
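In its simplest form, that eval loop is a labeled set of domain cases and a pass rate computed against them. The sketch below uses a stubbed `run_agent` and invented cases; in practice the agent call and the eval set would come from the lead's own domain.

```python
# Minimal eval-harness sketch. `run_agent` is a stand-in for whatever
# produces the agent's output; the cases below are invented examples.
def run_agent(question: str) -> str:
    # Placeholder: a real agent call would go here.
    canned = {
        "What drove the drop in weekly signups?": "tracking_outage",
        "Is revenue double-counting refunds?": "yes_refunds_counted_twice",
    }
    return canned.get(question, "unknown")

EVAL_SET = [
    # (input, expected label) — built from real domain edge cases
    ("What drove the drop in weekly signups?", "tracking_outage"),
    ("Is revenue double-counting refunds?", "yes_refunds_counted_twice"),
    ("Why did churn spike in March?", "pricing_change"),
]

def run_evals(cases) -> float:
    """Fraction of eval cases the agent answers correctly."""
    passed = sum(run_agent(q) == expected for q, expected in cases)
    return passed / len(cases)

print(f"pass rate: {run_evals(EVAL_SET):.0%}")  # fails the third case: 67%
```

Run on every change to the agent, a falling pass rate is the early warning that replaces reviewing every output by hand.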

The second is telemetry. Agents degrade. Quietly, gradually, and often in ways that are invisible until something breaks. The lead needs to instrument their agents so that drift is visible early — output quality metrics, failure rates, escalation patterns, latency. Telemetry is what turns an agent from a black box into a managed system.
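The instrumentation itself can start very small: counters and latencies per outcome, with derived signals like escalation rate watched over time. A sketch under those assumptions (a real setup would export these to a metrics backend rather than keep them in memory):

```python
# Lightweight agent telemetry sketch: counters and latency samples
# that make drift visible. Outcome names are illustrative.
from collections import Counter

class AgentTelemetry:
    def __init__(self):
        self.outcomes = Counter()   # e.g. resolve / escalate / error
        self.latencies = []         # seconds per agent run

    def record(self, outcome: str, latency_s: float) -> None:
        self.outcomes[outcome] += 1
        self.latencies.append(latency_s)

    def escalation_rate(self) -> float:
        """Share of runs handed back to a human — a key drift signal."""
        total = sum(self.outcomes.values())
        return self.outcomes["escalate"] / total if total else 0.0

    def p95_latency(self) -> float:
        xs = sorted(self.latencies)
        return xs[int(0.95 * (len(xs) - 1))] if xs else 0.0

telemetry = AgentTelemetry()
for outcome, latency in [("resolve", 0.8), ("resolve", 1.1), ("escalate", 2.4)]:
    telemetry.record(outcome, latency)

print(f"escalation rate: {telemetry.escalation_rate():.0%}")  # 33%
```

A rising escalation rate or a drifting p95 is often visible weeks before any user-facing failure, which is exactly the window the lead needs.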

Together, evals and telemetry are what allow one person to run what is effectively a team. The agents do the work. The lead sets the standards, watches the signals, and intervenes when the system needs correction.

[Figure: Evals and telemetry enabling one expert to manage a team of agents]


The platform that makes it possible

Speed inside a product unit depends on one thing that lives outside it.

A team that owns a domain — its metric, its agents, its quality standards — can only move fast if it is not also responsible for building the environment it operates in. Deployment infrastructure, versioning, rollback, observability tooling, eval frameworks — this work is real, and it is expensive. A product team that carries it on top of domain ownership ends up doing neither well.

That is what a platform team is for. It builds the environment once, maintains it properly, and gives product teams a foundation they can rely on without thinking about it. The product unit focuses entirely on the domain. The platform makes that focus possible.

The boundary between them is one of the more important design decisions an AI-native organization has to make — and one of the most commonly avoided ones. It requires the platform team to resist getting pulled into domain problems, and product teams to resist building infrastructure workarounds when the platform moves too slowly. Both failure modes are common. Both are expensive.

Platform investment compounds across every team that uses it. A weak platform does the opposite — it taxes every team equally, and the tax compounds too. The cost of every workaround, every reinvented deployment script, every agent with no proper rollback mechanism adds up quietly — until the organization realizes it has built the same infrastructure ten times, and none of the ten versions is reliable.


What disappears

This is the part that tends to get left out of conversations about AI-native teams. The structure described above has a consequence that is worth naming directly.

The middle layer of expertise shrinks.

People who carried well-defined analytical or operational work — repeatable tasks, standard reports, first-pass investigations, routine triage — occupy exactly the space that agents are most capable of filling. That work does not go away. It gets encoded into systems managed by someone with deeper expertise sitting above it.

This is uncomfortable to say. But designing around it honestly is more useful than pretending the transition is purely additive.

What remains — and what becomes more valuable — is the expertise that cannot be encoded. The judgment that comes from years inside a domain. The ability to recognize when a situation falls outside what the system knows how to handle. The instinct for when output looks plausible but is wrong. These are the things an agent manager draws on constantly, and they are hard to build quickly.

The profile that scales in an AI-native team is the deep expert who can also build and manage systems. The profile that becomes harder to place is the generalist executor who sits between that expert and the work itself.

[Figure: Deep expert system manager vs generalist executor in AI-native teams]

That is a real shift. And most organizations are not yet hiring, developing, or retaining with it in mind.


A different kind of contributor

There is a thread running through every role described in this piece.

The person who holds product vision is thinking about how to translate intent into something a system can act on. The analytics lead is building and managing a team of agents rather than a team of people. Both are operating one level above the work itself — defining how it gets done, setting the standards, watching the outputs, intervening when something breaks.

That is the profile an AI-native team is built around. Someone who designs and manages a system that executes. The work is still getting done. The question is who is responsible for the system doing it.

This is a meaningful shift in what companies need to look for. Domain expertise remains essential — perhaps more than ever, because it is what gets encoded into the system and what catches the system when it fails. But alongside it, the ability to think in systems, write precise specifications, build evals, read telemetry, and manage agents as working assets becomes a core expectation.


Companies have always been shaped by the profiles they built around. The industrial company was built around the skilled operator. The knowledge company was built around the specialist. The AI-native company is being built around something harder to name but fairly specific in practice: the expert who can make their expertise run at scale.

That is the unit the org chart will eventually organize itself around. The teams that figure that out earlier will move differently from the ones that are still trying to hire their way to capacity.