Most senior leaders I talk to are tracking two trends inside their organizations, usually through different functions, usually with different vocabulary, and usually without a strong sense of how the two might relate.

The first is the gradual loss of long-tenured senior operators. Tenure in senior roles has been falling for decades. The leading edge of the Baby Boomer generation is now more than a decade past traditional retirement age, and the wave behind it is larger. Most enterprises have a succession plan for this, and most of those plans are some version of: identify the critical roles, name the successor candidates, track their development, report to the board once a year.

The second is the acceleration of AI into operational workflows. What was experimentation two years ago is becoming production this year. Agents are increasingly handling customer interactions, vendor approvals, routine underwriting, first-pass escalation triage. Most enterprises have a governance function for this too, often stood up recently, often reporting somewhere near legal, risk, or a newly created Chief AI Officer.

Both trends are real. Both are being managed. But they are beginning to collide, and most enterprises are not yet positioned to see the collision because the two are being tracked by different people using different lenses.

The tolerance that used to exist

To see why the collision matters, it helps to notice something about the first trend that’s easy to overlook: enterprises have been losing senior operational context for a very long time and have mostly survived it.

When a Senior Vice President of Operations leaves after nine years, what actually walks out the door with her is not just her skill at running operations. Her successor likely has comparable skills. What walks out is everything she knows about this particular operation. Which customers get handled outside the standard process and why. Which vendor relationships rest on personal history. Which policies are formally on the books but operationally flexible for good reasons. Which lessons learned three years ago are woven into current practices and design.

This kind of loss has been happening for as long as enterprises have had senior operators. Research widely cited in the knowledge-management field finds that roughly 42% of institutional knowledge resides solely with individual employees. Tenure in senior roles has fallen from around nine years in the 1980s to roughly three to four years in the 2020s. The outflow has been steady and, over the last decade, accelerating.

And yet enterprises have mostly absorbed it. Replacement costs landed on the HR ledger at somewhere between 1.5 and 2x annual salary. The indirect costs, the ones that showed up as decisions her long-time peers recognized as “not quite how we used to handle this,” produced six to nine months of drift that gradually resolved. The business continued. The quarterly numbers held. Boards noticed the departures but rarely noticed the drift.

Why? Because the environment that context was being lost into had a specific property that made the loss survivable. The system around the departing SVP was other humans. Her peers improvised. Her successor asked questions. The finance partner caught the vendor renewal that looked a little off. The long-serving team lead flagged the customer situation that didn’t feel standard. The decisions that needed context got routed, informally and often invisibly, to whoever in the building still had it. The organization had slack, the slack was human, and the slack was what made context loss tolerable.

We should be careful here. This is not nostalgia. The old environment had real costs: the drift itself, the dependence on institutional memory held by individuals, the way valuable knowledge sat in people’s heads and often left with them. The point is narrower. The old environment was able to absorb the cost of context loss because the surrounding system could compensate in real time. That was the tolerance.

What’s changing about the environment

The second trend, AI deployment, is usually framed as its own category of concern. Model risk. Hallucination. Bias. Governance maturity. All real, all being worked on. But there’s a different effect of AI deployment that is less widely discussed and, I think, more consequential for senior leaders to notice.

As AI systems take on more of the operational surface of the enterprise, the human improvisation layer that used to absorb context loss gets thinner.

Consider what happens when a long-tenured SVP leaves today versus five years ago. Five years ago, her departure meant six to nine months of human drift, largely handled by her peers and her team. Today, a non-trivial share of the work she used to oversee is running through AI-assisted or AI-automated pathways. The customer interactions her deputy used to handle are being handled by an agent. The approvals her finance partner used to eyeball are flowing through a rules engine augmented by a model. The escalation triage her team lead used to run is being pre-sorted by a classifier. Each of those systems was stood up with some version of her operational context baked in, either explicitly as rules or implicitly as training data drawn from past decisions.

When she leaves, two things happen at once. Her tacit context stops flowing into those systems: she’s not there to answer the questions that would have kept them current. And her peers are handling fewer of the decisions those systems are making, which means there are fewer human touches where drift would get noticed and corrected. The systems continue to run. They continue to produce outputs. The outputs look confident. The thing that used to catch them when they were subtly wrong is the thing that’s being automated.

This is not a hypothetical trajectory. It’s the current direction of every enterprise AI program I’ve seen. Deloitte’s State of AI in the Enterprise 2026 reports that nearly three in four organizations are already giving AI agents access to their data and processes, and that the share of enterprises with 40% or more of their AI projects in production is expected to double within six months. The improvisation layer is not being removed deliberately. It’s being quietly reduced as a byproduct of putting AI into production.

A concrete version of what this looks like when it goes wrong: IBM has documented a case where one of its own production customer service agents learned that granting refunds correlated with higher customer feedback scores and began approving refunds outside of company policy in pursuit of a competing metric. The model wasn’t broken. It was doing exactly what it was designed to do, optimizing the signal it could see. What it couldn’t see was the broader policy context that lived in the heads of the people who had originally set the refund thresholds and who, in an earlier version of the organization, would have noticed the pattern almost immediately.

The case is instructive less for what the agent did than for what didn’t catch it. In the older environment, a senior operator would have flagged the unusual refund rate the first time it crossed her desk. In the newer environment, the refund rate is one of thousands of signals being produced by systems the senior operator never directly touches. The signal she would have noticed is now buried in a dashboard nobody reads until something goes audibly wrong.

The collision

Once both trends are in view, what’s actually happening becomes easier to name.

The retirement wave and the AI deployment wave are not two independent risks being managed in parallel. They are coupled. The first one (senior operators leaving with uncaptured context) has always produced drift. The second one (AI taking on operational surface) is thinning the layer that used to absorb that drift. Each trend makes the other more consequential. A retirement that would have cost nine months of human drift in 2020 can produce something different in 2026: drift that flows into automated systems, scales through them, and surfaces as a pattern of decisions that look defensible in isolation but collectively pull away from how the organization actually intends to operate.

The worst version of this compounds quietly. Individual decisions don’t look wrong. The refund rate creeps up. The vendor renewals soften. The customer complaints that used to get personal attention get routed to a model that resolves them politely and closes the ticket. The policies are still in the documents. The frameworks are still in the governance decks. The reality drifts underneath the documentation. The drift is invisible in the same way it has always been invisible, except now it’s running at the speed and scale of the systems it’s flowing through rather than at the speed of a successor learning the job.

None of the senior leaders in this story are doing anything wrong. The CHRO is running a succession process that would have been considered best-in-class five years ago. The Chief AI Officer is building governance capability faster than most peers. The board is asking the questions the board is supposed to ask. The problem isn’t any individual function. It’s that the two functions are looking at different halves of a system that is no longer behaving like two separate halves.

Two questions worth sharpening

We don’t need to create a new term to describe this collision. We need a sharper, coordinated version of two questions most senior leaders are already asking.

The first is: are we thinking about succession the right way? For most enterprises, succession planning still mostly answers the question “who replaces whom?” That question was sufficient in an environment where human successors, supported by human peers, could absorb the cost of context loss over time. In the current environment, a more honest version of the question is: when this person leaves, what context stops flowing into the systems that now depend on it, and how would we know? That is a substantially harder question. It requires knowing what the systems actually depend on, which requires a level of collaboration between the people running succession and the people running AI deployment that most organizations don’t currently have.

The second is: are our expectations for agentic AI deployment realistic? For most enterprises, AI governance still mostly answers the question “how do we make sure the model isn’t dangerous?” That question is necessary but not sufficient. The harder version of the question is: how do we make sure the model is still aligned with how the organization actually operates six months from now, when the people whose judgment shaped it have moved on and the operating reality has shifted? That too requires collaboration across functions that don’t currently talk to each other.

Neither question is new. What’s new is that the honest answer to each one now depends on the honest answer to the other. You cannot assess your succession exposure without understanding which of your senior operators’ context is flowing into systems that will continue running without them. You cannot assess the durability of your AI deployments without understanding which of your senior operators are the source of the context those systems currently depend on. The two registers of risk, which most enterprises track through different functions, are now two faces of the same exposure.

Closing

The shifts that turn out to matter most are usually the ones that look like two separate problems until someone notices the connection. The clearest historical example is Deming’s work on quality and cost in manufacturing: for decades those were treated as separate problems managed by separate functions, until it became clear that defects were a cost driver, that chasing cost while ignoring quality made both worse, and that the two had been two faces of the same operating system all along. In my own career I’ve watched the same pattern play out twice more. Customer acquisition and customer retention were siloed until lifecycle thinking made them one problem. Security and software delivery were siloed until DevSecOps made them one problem. In each case, the enterprises that saw the coupling early rebuilt around it. The ones that saw it late spent a decade catching up.

In my view, the loss of senior operational context and the fragility of enterprise AI deployments are coupling into a single exposure of the same kind, and the enterprises that see the coupling early will make different choices about how to build capability than the enterprises that see it late.

Those choices are not the subject of this article. The subject is getting the frame right, so the choices that follow are made against a clear picture of the situation rather than a partial one. If you close this and find yourself asking the two questions above with more edge than before, the piece has done what I wanted it to do.

Incleon exists because we think the capability to close this exposure is worth building now. The enterprises that invest in it early will be solving a problem. The ones that wait will be explaining one.