The Confidence Gap in Operational Resilience

Most organisations today would say they take operational resilience seriously.

They have business impact analyses (BIAs). They have impact tolerances. They have dependency maps, recovery plans, exercise programmes, and escalation paths.

In regulated sectors, many have spent years responding to regulatory requirements and internal governance. From the outside (and often from the inside), this creates a reasonable sense of confidence.

And yet, disruptive incidents continue to surprise organisations that believed they were prepared.

Services assessed as resilient fail in unexpected ways. Recoveries take longer than anticipated. Workarounds that looked robust on paper collapse under real pressure. Senior leaders ask how this could have happened when the organisation had “done what was asked of it”.

The gap between confidence and outcome is becoming harder to ignore.

What is the confidence gap?

The confidence gap is the distance between an organisation’s declared resilience and its demonstrated capability under real-world stress.

It is not about whether frameworks exist or documentation is complete. It is about whether the organisation can actually function when assumptions break down.

Confidence grows through evidence: completed BIAs, approved plans, and successful tabletop exercises conducted in controlled settings.

Capability, by contrast, is revealed through stress: degraded technology, absent people, stretched suppliers, and ambiguous decision-making.

Confidence can be evidenced. Capability is always conditional.

The problem is that the two are often measured very differently – and rarely reconciled.

Why the gap persists

The confidence gap does not exist because organisations are careless or complacent. In many cases, it exists because they are doing the right things – but within systems that quietly reward reassurance over exposure.

Several recurring patterns can be identified.

Assurance rewards completeness, not realism

Most assurance processes are designed to confirm that required elements exist: plans are in place, roles are defined, and risks are logged. They are far less effective at testing whether those elements still work once conditions start to degrade.

Crisis escalation and notification arrangements are a common example. On paper – and when tested – they are usually complete and well-structured. Under degraded conditions, however, they are often fragile, delayed, or inconsistently applied.

Documented completeness is often mistaken for operational readiness.

Testing validates plans, not decisions

Exercises frequently confirm that documented procedures can be followed under controlled conditions, with roles assigned, information staged, and outcomes largely anticipated. They rarely explore how decisions are made when information is limited or incomplete, tools are unavailable, or trade-offs are uncomfortable.

As a result, confidence grows without exposure to the limits of the system.

Success reinforces belief

Long periods without disruption can strengthen confidence rather than weaken it. When incidents do not occur, assumptions remain unchallenged. When small incidents are resolved, they are interpreted as proof that the system works, even if success depended on informal workarounds or individual heroics.

Over time, confidence becomes self-reinforcing, while fragility remains largely unseen.

Where confidence is most often misplaced

The confidence gap is rarely consistent. In practice, it is most often visible in a small number of familiar areas:

  • People dependency: reliance on key individuals whose availability, decision-making capacity, or institutional knowledge has never been stress-tested.

  • Technology recoverability: systems that are technically recoverable in isolation, but fragile when multiple components fail together.

  • Third-party reliance: confidence based on contractual assurances rather than demonstrated resilience.

  • Executive response: belief that leadership experience alone will compensate for loss of information, connectivity, or coordination.

None of these are obvious during normal operations. All of them become decisive under disruption.

Why more assurance doesn’t close the gap

A common response to surprise is to add more:

  • More documentation

  • More controls

  • More reporting

  • More monitoring

  • More testing

But confidence is not restored by reassurance alone.

In some cases, additional assurance activity can actually widen the gap by increasing the appearance of preparedness without increasing usable capability.

How the gap behaves

The confidence gap is not static. It expands and contracts over time, often in ways that are not immediately visible.

In many organisations, confidence stabilises faster than capability improves. After a successful exercise or a period without disruption, confidence increases quickly. Capability, however, tends to improve more slowly, and often unevenly, as underlying constraints, dependencies, and assumptions remain only partially explored.

This creates a form of false convergence, where the organisation feels increasingly prepared while the conditions required for that preparedness have not materially changed.

The pattern is familiar:

  • Repeated successful exercises reinforce belief in readiness

  • Lack of disruption is interpreted as evidence of robustness

  • Informal workarounds are absorbed into “how things work” rather than treated as risk signals

When disruption does occur, confidence can collapse abruptly as those hidden conditions are exposed.

The gap was not created at the point of failure. It had been widening quietly over time.

The issue is not effort. It is alignment between what creates confidence and what actually sustains performance under stress.

How the gap becomes visible

The confidence gap rarely presents itself directly. It is usually sensed through indirect signals.

These signals often appear in exercises, incidents, and near-misses, but are easy to overlook because they do not always prevent recovery.

They include patterns such as:

  • Recovery that succeeds, but only through significant human intervention or undocumented knowledge

  • Increasing coordination effort required to achieve the same outcome

  • Assumptions that are relied upon but not explicitly validated

  • Outcomes that are achieved within tolerance, but with unexpected friction or delay

  • Exercises that confirm expected behaviour but generate little new learning

Individually, these may not appear significant. Collectively, they point to a system where capability is becoming conditional rather than inherent.

In these situations, confidence is often built on outcomes, while capability is dependent on conditions that are not fully understood.

A different question

The confidence gap cannot be closed by asking whether resilience activities have been completed.

A more useful question might be:

Where might our confidence be outpacing our capability – and how would we recognise it before failure forces the answer?

That question is uncomfortable because it challenges reassurance rather than reinforcing it. But it is also where learning begins.

Why this matters now

As operational resilience continues to mature, the challenge is not a lack of frameworks or effort. It is understanding whether confidence still reflects the real condition of a service and its dependencies.

Recognising the confidence gap does not require abandoning assurance. It requires greater attention to how confidence is formed, how it is reinforced, and how it behaves when systems are pushed beyond the conditions in which that confidence was built.

It also requires noticing the signals that appear before failure — the points where recovery becomes conditional, coordination becomes harder, and assumptions begin to surface.

These are not signs that resilience is absent. They are signs that confidence and capability may no longer be aligned.
