Disclaimer: Everything below is my personal opinion. It does not represent the views of any company I have worked for or currently work for. This is my own perspective on how companies should think about their design in a world where access to AI is becoming widespread.
Right now, almost every company has some kind of AI story.
Some are building internal chatbots. Some are automating support. Some are running agent pilots. Some hope it will speed up engineering. All of that makes sense. The barrier to entry has dropped. Tools are more accessible. Market pressure is real. Nobody wants to look like the company that slept through the next technology cycle.
But I keep feeling like the attention is going in the wrong direction.
The question being asked is where to plug AI in. The more important question is different: what should the company itself look like if we take seriously the idea that AI will be part of everyday work?
At that point, this is no longer a tooling question. It is a company design question.
And that, to me, is where things get interesting.
The main mistake I keep seeing
The default move is always roughly the same. Take the existing structure, the existing teams, the existing processes — and lay AI on top as a new acceleration layer. Sometimes it works. But almost always only locally.
Document writing gets faster. The first draft of code gets faster. Ticket triage gets faster. Search across internal knowledge bases gets faster.
Fine. Useful. But that is not the kind of change that actually moves the speed of the company.
Because underneath, the company stays the same. The same long approval chains. The same hard walls between functions. The same fragmented data. The same expensive change cycles. The same outdated ownership models. Teams end up owning a piece of work, but not a real outcome.
In that setup, AI becomes another useful layer sitting inside the old one.
That is the core mistake. The old organizational design gets modernized instead of questioned.
AI does not fix a weak organization
There is a dangerous illusion that AI, by itself, will make companies smarter, faster, and more adaptive.
I don’t buy that.
If a company has poor data structure, if teams interpret core entities differently, if metrics conflict, if ownership is vague, if the change cycle is heavy and expensive, if support is disconnected from product — AI will simply accelerate all of that.
This is worth saying plainly: AI is equally good at accelerating order and chaos.
So I would frame the competitive question this way. The next layer of competition will not be defined only by who has access to the best models. Access to comparable capabilities will be widespread. The real difference will come from how well a company redesigns itself around those capabilities.
And from whether those capabilities become part of how the company actually works: embedded in the operating model, owned by the teams doing the work.
What is actually changing
I think we are entering a period where the nature of intellectual work inside companies will start shifting pretty fast.
More and more written work will be produced by AI. And not only for humans — for other AI systems too. Specifications, reports, summaries, explanations, requirements, analyses — a lot of this will be generated, reworked, and consumed in machine-readable form.
Software development will change too. I don’t think code disappears, and I don’t think engineers become irrelevant. But the center of gravity will shift. Less value in manually writing another layer of code. More value in defining intent, constraints, rules, architectural boundaries, and quality criteria. Less “build it by hand,” more “clearly state what should be built, under what rules, and how we know the result is good.”
Analytical work will shift as well. First-pass analysis, anomaly detection, explanation of deviations, context gathering, scenario comparison — all of this gets cheaper. Not free, but cheap enough that the traditional division of labor between analysts, engineers, scientists, and product managers starts to loosen.
And that, I think, is one of the most underestimated parts of this whole transition.
The boundaries between roles will become much less rigid
Roles stay. But they become far more permeable.
Engineers will move deeper into product framing because the cost of entering that space drops. Analysts will do work that used to require a separate research function. Product managers will go further into technical and analytical territory than before. Scientists will influence product decisions more directly, not just models and experiments.
This is a story about the cost of entering adjacent intellectual territory dropping significantly — and “everyone doing everything” is too simplistic a way to put it.
And when that happens, organizations have to be designed differently.
Around domains, outcomes, and the real ability to move a system forward — rather than titles or neat functional boxes.
Teams are still being designed as if the main scarcity is hands-on execution inside narrow roles. In an AI-shaped world, the scarcity shifts. What becomes hard to find is not just execution — it is the ability to hold a domain, understand context, write strong specifications, make trade-offs, and manage a combined system of people and agents.
The compact product unit as the real building block
From an organizational design perspective, the direction of travel is toward smaller, more autonomous units that own not just a function, but a real, end-to-end slice of the business.
Not just an analytics team. Not just an engineering team. Not just a product team.
A unit that owns a metric, a feature area, a domain, or a meaningful business outcome end to end.
This matters for one simple reason: AI creates the most value where responsibility is clear and outcomes are measurable — in well-defined areas of ownership, rather than abstract centralized pilots.
In that kind of setup, product leadership likely becomes less of a title and more of a function. Leadership belongs to whoever can hold direction, define intent, understand the customer, navigate trade-offs, and move the domain forward. That person may have different titles in different organizations. The title matters less and less.
What matters is whether someone can truly own a domain where part of the work is done by people and part of it gradually shifts to agents.
But autonomy without a shared foundation falls apart fast
There is a subtle trap here. When people talk about small autonomous teams, they often assume you just need to give teams more freedom. In practice, that almost never works without a strong common foundation underneath.
An AI-native organization needs more than autonomy. It needs a shared structural backbone. I see at least a few things without which everything starts drifting fast.
First: a shared language
You can call this a semantic layer, but it is really about more than data. It is about a shared language through which the company can explain itself to itself.
Which entities are canonical. Which metrics are official. How they are calculated. Which rules are binding. Which constraints cannot be violated. How events are described. How systems, processes, and definitions connect to each other.
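To make that concrete, here is a minimal sketch, in Python, of what a single entry in that shared language might look like. All names and fields are hypothetical; the point is simply that each core metric has exactly one canonical, owned definition that both people and agents read from.

```python
from dataclasses import dataclass, field

@dataclass
class MetricDefinition:
    """One canonical, company-wide definition of a metric."""
    name: str          # the official name that people and agents both use
    owner: str         # the team accountable for this definition
    grain: str         # the entity the metric is measured over
    calculation: str   # the single agreed formula or query
    constraints: list[str] = field(default_factory=list)  # rules that must not be violated

# Hypothetical entry: there is exactly one of these per metric.
ACTIVE_CUSTOMERS = MetricDefinition(
    name="active_customers",
    owner="growth-analytics",
    grain="customer",
    calculation="count of customers with >= 1 billable event in the trailing 30 days",
    constraints=[
        "test accounts are always excluded",
        "definition changes require owner sign-off",
    ],
)
```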
Without that language, AI doesn’t begin to understand the business. It begins to scale the ambiguity of the business.
This is still widely underestimated. Until a company has a coherent conceptual layer, no model will give it real acceleration. There will be demos. There will be isolated gains. But there will not be systemic leverage.
Because in that situation, AI is operating on the same fragmented context that people inside the company are already struggling with.
Second: infrastructure that makes change cheap
Infrastructure often gets framed in terms of reliability, deployment, access, security. All of that matters. But from the perspective of a CEO, VP, or Director, the more important question is: how expensive is change inside this company?
If updating logic, changing an agent’s behavior, redefining a workflow, revising rules, or safely testing a new version still requires a heavy engineering cycle — the organization has not actually adapted.
Strong future infrastructure is not just infrastructure that runs reliably. It is infrastructure that dramatically lowers the cost of change.
And this is where the role of engineering shifts in a major way. Engineers are not only building product features. They are increasingly building the environment in which product teams can safely change the product themselves. Not chaotically, not through workarounds, but within a controlled operating framework.
That is the real platform layer. Not the one that looks good on a slide. The one that actually reduces the cost of movement.
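As a rough illustration of what "cheap change inside a controlled framework" can mean, here is a hedged sketch. Everything in it is hypothetical; the idea is that a product team proposes a behavior change as data, and the platform checks it against hard constraints before it is allowed to roll out.

```python
# Hypothetical sketch: a product team edits declarative behavior,
# and the platform validates it against hard constraints before rollout.

HARD_CONSTRAINTS = {
    "max_auto_refund": 100,   # a platform-wide ceiling nobody can override
    "requires_owner": True,   # every change needs a named owner
}

def validate_change(proposed: dict) -> list[str]:
    """Return violations; an empty list means the change may roll out."""
    violations = []
    if proposed.get("auto_refund_limit", 0) > HARD_CONSTRAINTS["max_auto_refund"]:
        violations.append("auto refund limit exceeds the platform-wide ceiling")
    if HARD_CONSTRAINTS["requires_owner"] and not proposed.get("owner"):
        violations.append("every change needs a named owner")
    return violations

proposed = {"auto_refund_limit": 50, "owner": "payments-team"}
print(validate_change(proposed))  # [] -> safe to roll out, no heavy engineering cycle
```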
Third: an environment where teams can build agentic workflows without turning the company into a mess
I don’t believe a company can sustain AI transformation without a managed environment for building and running agents.
Otherwise the usual thing happens. Every team starts stitching together its own little agents and automations. Some are useful. Some run on enthusiasm. Some get abandoned. Some rely on data nobody really understands. Some have no clear owner, no observability, no serious quality control.
That is a new version of organizational shadow IT — and it scales fast.
If a company is serious about AI, it needs to give teams the ability to build agentic workflows in a proper environment — where these systems can be launched, measured, updated, constrained, tested, escalated to humans, rolled back, and managed as part of the company’s operating system.
And when I say agentic workflows, I don’t mean demo-friendly AI tricks. I mean real working loops around a domain: support, triage, diagnostics, root-cause analysis, reporting, explaining metric changes, drafting specifications, internal research, operational routines.
That is when AI stops being a helpful assistant and starts becoming a working layer inside the company.
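To show what "managed" might mean in practice, here is a small, purely illustrative sketch of how such a workflow could be registered. None of these fields are prescriptive and the names are made up; the point is that every workflow exists as an explicit, owned object with quality metrics, an escalation path, and a way to roll it back.

```python
from dataclasses import dataclass

@dataclass
class AgentWorkflow:
    """One managed agentic workflow, registered in a shared environment rather than run ad hoc."""
    name: str
    owner: str                    # the team accountable for this workflow
    purpose: str
    allowed_actions: list[str]    # what the agent may do on its own
    recommend_only: list[str]     # what it may only propose to a human
    quality_metrics: list[str]    # how degradation gets noticed
    escalation_to: str            # where control returns when the agent is unsure
    rollback: str                 # how the previous version is restored

SUPPORT_TRIAGE = AgentWorkflow(
    name="support-triage",
    owner="support-platform-team",
    purpose="classify incoming tickets and draft first responses",
    allowed_actions=["classify ticket", "attach diagnostics"],
    recommend_only=["send response to customer", "close ticket"],
    quality_metrics=["misclassification rate", "escalation rate", "reopen rate"],
    escalation_to="on-call support lead",
    rollback="pin the previous workflow version in the registry",
)
```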
Product support is still underestimated
One of the weakest points in current corporate AI thinking is that almost all the energy goes into accelerating development, while support barely gets a serious thought.
And support is often where the truth about the product actually lives.
That is where real bugs appear. That is where real failures surface. That is where edge cases show up that nobody anticipated. That is where the product collides with actual customer expectations.
If support keeps living outside the product loop as a secondary service function, the company loses a lot.
I would go further: in an AI-native company, the support loop should be part of the product loop — not something attached to it.
That means the team owns not only how it builds new things, but also how its domain is diagnosed, explained, supported, and improved in operation. Agents are especially useful here as the first intelligent layer around the domain, alongside people rather than replacing them.
If that is done well, support stops being a place where people just clean up problems. It becomes one of the main sources of product knowledge.
Not everything should be automated
Worth stating directly, because otherwise any conversation about AI slips into naive enthusiasm.
The goal is intelligent distribution of autonomy. Total automation is the wrong frame entirely.
Some processes are excellent candidates for agentic execution. Some are better served by AI as a co-pilot. Some have too high a cost of failure. Some simply don’t have enough data or contextual maturity yet.
Real maturity shows up not in how much gets pushed into AI — but in how clearly the company understands where AI genuinely strengthens it, and where it only creates the illusion of progress.
There is another issue people still don’t talk about enough: decision rights
I think one of the biggest questions ahead has nothing to do with model capability.
It is about authority.
An agent may be capable of drafting a response. But is it allowed to send it to a customer directly? An agent may be capable of proposing a change. But can it ship that change to production on its own? An agent may be capable of interpreting a metric. But who is accountable if that interpretation is wrong? An agent may be capable of handling a ticket. But who owns the consequence of failure?
The real bottleneck is not technological. It is about decision rights, autonomy boundaries, escalation rules, and trust architecture.
That is a management topic.
The boundary has to be defined much more clearly: what agents can do on their own, what they can only recommend, and where a human must remain the final decision point.
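One way to picture that boundary, purely as an illustration with made-up names: a decision-rights table that both the agent runtime and the people around it can read, defaulting to the most conservative option whenever something is not explicitly granted.

```python
from enum import Enum

class Authority(Enum):
    ACT = "act on its own"
    RECOMMEND = "recommend only"
    HUMAN = "human decides"

# Hypothetical decision-rights table; what matters is that it is written down and owned.
DECISION_RIGHTS = {
    ("support-triage", "classify_ticket"): Authority.ACT,
    ("support-triage", "reply_to_customer"): Authority.RECOMMEND,
    ("release-agent", "ship_to_production"): Authority.HUMAN,
}

def authority_for(agent: str, action: str) -> Authority:
    """Anything without an explicit entry defaults to the most conservative option."""
    return DECISION_RIGHTS.get((agent, action), Authority.HUMAN)

print(authority_for("support-triage", "reply_to_customer").value)  # recommend only
```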
If an agent has no owner, it is not part of the system
This one should be stated without softening it. If an agentic workflow has no owner, it is not part of the operating model. It is just automation dropped into the company.
Every such workflow needs someone who is genuinely accountable for it. Someone who watches quality. Someone who notices degradation. Someone who updates the rules. Someone who understands where it works and where it should not be used. Someone who owns the consequence of failure.
Otherwise, the moment something breaks, you get the familiar pattern: everybody used it, nobody owned it.
Agents are managed company assets. They need a purpose, constraints, metrics, lifecycle, control logic, and clear accountability.
That is the difference between a model that scales and one that collapses into noise.
And yes — there has to be a real fallback to humans
The more autonomy an agent gets, the more important it becomes not just to define what it can do, but how quickly the system can return control to a human.
That matters especially in ambiguous, high-stakes, high-conflict situations — anywhere the cost of error is real.
The maturity of an AI-native organization will show up in one simple thing: not just in the happy path of the agent, but in how cleanly the company handles the moment when the agent needs to step back and a human needs to step in.
If that transition is not designed properly, no elegant agent architecture will ever be truly reliable.
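As a purely illustrative sketch, with hypothetical actions and thresholds: the handoff itself can be written down as logic, so that "when does a human take over, and how fast" is a design decision rather than something improvised in the moment.

```python
# Hypothetical thresholds; the point is that the handoff is explicit, not improvised.

CONFIDENCE_FLOOR = 0.8
HIGH_STAKES_ACTIONS = {"issue_refund", "change_contract_terms"}

def next_step(action: str, confidence: float, conflict_detected: bool) -> str:
    if conflict_detected or action in HIGH_STAKES_ACTIONS:
        return "hand off to a human now, with full context attached"
    if confidence < CONFIDENCE_FLOOR:
        return "pause, summarize what is known, ask a human to decide"
    return "proceed, log the decision, keep it reviewable"

print(next_step("issue_refund", confidence=0.95, conflict_detected=False))
```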
What this means for leadership
I am not going to end this with a polished executive summary. That is exactly the kind of ending that kills the feeling that a real person wrote this.
But the implication for leadership seems pretty direct to me.
Stop treating AI as a collection of pilots, use cases, and technology initiatives. That frame is too narrow. What is actually happening is a redesign of the operating model.
If the goal is more than cosmetic efficiency gains, some fairly basic things will have to be rethought. How ownership works. How teams are shaped. Where the line sits between development and support. What counts as official knowledge inside the company. How expensive change is. Where agents have decision rights and where they don’t. Who owns them. How they are evaluated. How they are stopped. How humans re-enter the loop.
That is no longer a technology adoption conversation. It is a company redesign conversation.
What I actually believe
The winners over the next few years will be the companies that redesign themselves earlier for a world where intellectual work is no longer exclusively human.
That means engineers building not only products, but the environments in which product teams can move fast. Product teams becoming smaller, but more accountable. Support and development converging into one loop. Specification becoming more important than parts of manual execution. Roles staying, but becoming much less rigid. A shared language of data, rules, and metrics becoming a real strategic asset. Managing agents becoming as normal a leadership responsibility as managing teams, roadmaps, and budgets.
And maybe most importantly: companies will have to learn how to design systems where people and agents don’t just coexist — but actually work together inside a structure of clear authority, shared logic, real accountability, and operational control.
Because the question is no longer whether AI will be used inside the company. Of course it will.
The question is whether the company is willing to change itself deeply enough so that AI doesn’t remain just another layer placed on top of an old design.