The Confidence Gap in Operational Resilience
Most organisations today would say they take operational resilience seriously.
They have business impact assessments (BIAs). They have impact tolerances. They have dependency maps, recovery plans, exercise programmes, and escalation paths.
In regulated sectors, many have spent years responding to regulatory requirements and internal governance expectations. From the outside (and often from the inside), this creates a reasonable sense of confidence.
And yet, disruptive incidents continue to surprise organisations that believed they were prepared.
Services assessed as resilient fail in unexpected ways. Recoveries take longer than anticipated. Workarounds that looked robust on paper collapse under real pressure. Senior leaders ask how this could have happened when the organisation had “done what was asked of it”.
The gap between confidence and outcome is becoming harder to ignore.
What is the confidence gap?
The confidence gap is the distance between an organisation’s declared resilience and its demonstrated capability under real-world stress.
It is not about whether frameworks exist or documentation is complete. It is about whether the organisation can actually function when assumptions break down.
Confidence grows through evidence: completed BIAs, approved plans, and successful tabletop exercises conducted in controlled settings.
Capability, by contrast, is revealed through stress: degraded technology, absent people, stretched suppliers, and ambiguous decision-making.
The problem is that these two are often measured very differently – and rarely reconciled.
Why the gap persists
The confidence gap does not exist because organisations are careless or complacent. In many cases, it exists because they are doing the right things – but within systems that quietly reward reassurance over exposure.
Several recurring patterns stand out.
Assurance rewards completeness, not realism
Most assurance processes are designed to confirm that required elements exist: plans are in place, roles are defined, and risks are logged. They are far less effective at testing whether those elements still work once conditions start to degrade.
Crisis escalation and notification arrangements are a common example. On paper – and when tested – they are usually complete and well-structured. Under degraded conditions, however, they are often fragile, delayed, or inconsistently applied.
Documented completeness is often mistaken for operational readiness.
Testing validates plans, not decisions
Exercises frequently confirm that documented procedures can be followed under controlled conditions, with roles assigned, information staged, and outcomes largely anticipated. They rarely explore how decisions are made when information is limited or incomplete, tools are unavailable, or trade-offs are uncomfortable.
As a result, confidence grows without exposure to the limits of the system.
Success reinforces belief
Long periods without disruption can strengthen confidence rather than weaken it. When incidents do not occur, assumptions remain unchallenged. When small incidents are resolved, they are interpreted as proof that the system works, even if success depended on informal workarounds or individual heroics.
Over time, confidence becomes self-reinforcing, while fragility remains largely unseen.
Where confidence is most often misplaced
The confidence gap is rarely evenly distributed. In practice, it is most often visible in a small number of familiar areas:
People dependency: reliance on key individuals whose availability, decision-making capacity, or institutional knowledge has never been stress-tested.
Technology recoverability: systems that are technically recoverable in isolation, but fragile when multiple components fail together.
Third-party reliance: confidence based on contractual assurances rather than demonstrated resilience.
Executive response: belief that leadership experience alone will compensate for loss of information, connectivity, or coordination.
None of these are obvious during normal operations. All of them become decisive under disruption.
Why more assurance doesn’t close the gap
A common response to surprise is to add more:
More documentation
More controls
More reporting
More monitoring
More testing
But the gap is not closed by reassurance alone.
In some cases, additional assurance activity can actually widen the gap by increasing the appearance of preparedness without increasing usable capability.
The issue is not effort. It is alignment between what creates confidence and what actually sustains performance under stress.
A different question
The confidence gap cannot be closed by asking whether resilience activities have been completed.
Perhaps a more useful question is:
Where might our confidence be outpacing our capability – and how would we recognise it before failure forces the answer?
That question is uncomfortable because it challenges reassurance rather than reinforcing it. But it is also where learning begins.
Why this matters now
As the discipline of operational resilience continues to mature, the challenge is not a lack of frameworks or effort. It is understanding whether confidence still reflects the real condition of a service and its dependencies.
Recognising the confidence gap does not require abandoning assurance. It requires greater attention to how confidence is formed, how it is reinforced, and how it behaves when systems are pushed beyond the conditions in which that confidence was built.