When Resilience Looks Strong on Paper – What the FCA’s Latest Observations Really Reveal

Most organisations subject to the UK Operational Resilience framework have done a significant amount of work to ensure that they are compliant.

Important Business Services have been defined, impact tolerances set, mapping completed, and scenario testing programmes established. Self-assessments have been written, governance structures put in place, and from both an internal and external perspective, there is usually a reasonable sense that resilience has been built.

The FCA’s latest observations, published one year on from the March 2025 implementation deadline, don’t really challenge that effort. They acknowledge progress. They recognise that firms have taken the requirements seriously.

But what they begin to surface, more quietly, is something else.

Not whether resilience has been constructed, but whether it is actually understood.

The direction of travel is clear. What’s less clear is whether it’s keeping pace with the complexity of the systems firms now rely on.

What sits beneath the observations

The paper is structured in a familiar way: Important Business Services, impact tolerances, mapping, scenario testing, response and recovery, governance and embedding. On the surface, the feedback feels measured and expected.

Yet across these areas, a consistent pattern runs through the detail.

Services are clearly defined, but that doesn’t always translate into understanding how they behave under stress. Mapping exists, but often describes components more than relationships. Scenario testing is taking place, but doesn’t always evolve in a way that meaningfully challenges the system. Vulnerabilities are identified, but the path from insight to resolution can be uneven. Plans exist, but readiness still varies when conditions become less stable. Governance is in place, but continues to lean toward assurance rather than understanding.

None of this really points to failure. If anything, it’s what tends to happen when structure is built at scale.

But taken together, it points to something more structural: a growing distance between what has been constructed and what is actually understood.

In some cases, that distance only becomes visible when the system is already under strain.

Where structure meets behaviour

Resilience programmes tend to stabilise around things that can be defined, documented, and governed over time. Services can be named, dependencies mapped, recovery actions planned, and testing cycles scheduled. These create clarity and allow organisations to explain their position with confidence.

The difficulty is that disruption does not interact with structure in that same way. It interacts with behaviour.

And behaviour is shaped less by what has been written down and more by how the system actually responds when conditions start to break down: when dependencies don’t behave as expected, when information becomes incomplete or distorted, or when teams are forced to make decisions without full clarity. Recovery often ends up depending on knowledge that isn’t formally captured anywhere, the kind of thing people “just know” until they don’t.

This is where many of the FCA’s observations begin to converge. The issue is not whether firms have mapped their systems, but whether they can see how those systems behave once the conditions that normally hold them together start to erode.

Why the usual tools only get you so far

A lot of what the FCA is pointing to can be traced back to the limits of the tools that organisations naturally rely on.

Mapping, for example, is often very effective at showing structure. What it struggles with is showing how things interact under pressure, including sequencing issues, contention between components, and how failure actually spreads.

Scenario testing runs into something similar, although in a slightly different way. As programmes mature, they tend to become more structured and controlled. That builds confidence and consistency, but it can also reduce the kind of uncertainty that actually shows how the system behaves. Over time, testing can shift from exploration toward confirmation without it being particularly obvious.

Recovery planning has its own version of this. It assumes certain conditions will hold: access is available, steps happen in the right order, people are reachable, systems respond as expected. Most of the time, those assumptions do hold. When they don’t, recovery becomes conditional very quickly, and that conditionality is rarely visible in the plan itself.

Third parties and the question of substitution

The FCA’s focus on third-party dependency is one of the clearest signals in the paper.

Firms are increasingly reliant on a relatively small number of providers, often shared across the market. This is not new, and in many cases it is well understood at a structural level.

What is less clear is what happens when those dependencies are stressed.

Substitution is frequently assumed to be possible, at least in principle. But in practice, it is shaped by integration complexity, switching timelines, data dependencies, and operational friction. What looks interchangeable on paper can become much harder to replace under pressure.

Most organisations know who their critical third parties are. What’s less often tested is what actually happens when you try to move away from one under pressure, not in theory, but in the middle of a live disruption.

This is where concentration risk becomes less about visibility and more about conditions. The dependency is known, but the circumstances required to move away from it are not always fully understood until they are tested in a less controlled way.

What makes this particularly acute is that substitution is rarely tested under realistic conditions. It is understood conceptually, but not exercised in environments where time pressure, coordination friction, and incomplete information are present. In those conditions, what appears substitutable in design can become far less so in practice. This is where dependency risk stops being something you can see, and becomes something you experience, often for the first time during disruption itself.

In practice, substitution often exists as a design assumption rather than an operational capability.

From assurance to understanding

In some cases, the FCA’s findings are more straightforward, reflecting gaps in ownership, tracking, or board engagement. But even where those are addressed, a deeper question remains.

Most firms now have well-developed self-assessments, clear ownership, and board-level engagement. The question is not whether governance exists, but what it is actually optimised to do.

In many cases, it still operates primarily as a mechanism of assurance, reinforcing confidence more than exposing limits. It provides confidence that the right components are in place and that regulatory expectations have been met.

The FCA’s observations suggest a subtle shift in emphasis. Increasingly, the more important question is not “have we done what is required?” but “do we understand how this service will behave when it is placed under stress?”

Those two questions can lead to very different answers.

Confidence and capability

As resilience programmes mature, confidence tends to stabilise around the structures that have been built. That is both natural and, to an extent, necessary.

Capability evolves differently. It is shaped by changing dependencies, system drift, and the gradual accumulation of ways of working that emerge over time: workarounds, informal sequencing, and knowledge that sits with specific individuals. These are rarely documented, but they are often what recovery actually depends on.

Over time, this creates a quiet divergence. Confidence becomes anchored in artefacts and evidence such as plans, maps, and test results, while capability remains dependent on conditions that are not always visible.

The gap between the two is rarely explicit in advance. It tends to show up only when systems are tested in ways that fall outside what has already been anticipated.

Learning from disruption

Another area the FCA touches on more briefly is the role of real-world disruption in shaping understanding. Firms are expected to use incidents to calibrate impact tolerances and improve response capability, yet the mechanism for capturing and retaining those insights is often less developed.

In practice, much of what is learned during disruption is transient. It sits in conversations, in workarounds, in the adjustments people make to keep things moving. Very little of that is captured in a structured way, if it’s captured at all. Over time, that limits how effectively organisations convert experience into something reusable.

What clearer visibility begins to look like

Seeing more clearly does not require more structure, but a different emphasis within what already exists.

Testing shifts away from confirming expected pathways and toward exploring where behaviour becomes uncertain. Mapping becomes less about completeness and more about understanding how dependencies interact under strain. Governance moves beyond validating artefacts and toward questioning the conditions those artefacts rely on. Learning becomes less episodic, tied to major incidents, and more continuous, picking up weaker signals before they accumulate into something larger.

None of this introduces anything fundamentally new. It changes what the existing pieces are actually used to reveal.

Reading the paper differently

The FCA’s observations can be read as a set of areas to improve, and many firms will approach them that way.

But they can also be read as signals about where understanding is incomplete.

Not where resilience is absent, but where it is not yet fully understood.

That includes places where mapping describes structure without really revealing interaction, where testing confirms expectations rather than expanding them, where recovery plans rely on assumptions that have not been stressed, and where governance reinforces confidence without necessarily exposing limits.

These are not compliance gaps. They are visibility limits.

Closing thought

Operational resilience has never been about preventing disruption entirely. It is about remaining within tolerance when disruption occurs — and ultimately, about learning.

The FCA’s observations suggest that many organisations are now structurally positioned to do that. What is less certain is whether they understand how quickly that position can change once conditions begin to shift.

The challenge now is not to build more, but to understand more clearly — because the point at which that understanding is tested is rarely controlled, and often comes too late.

That sounds straightforward. In practice, it’s much harder.

Because resilience is not defined by what has been constructed, but by how systems respond when the assumptions that support them are no longer true.

Next

Part II - The Real Crisis Plan Is Behaviour Under Stress