[Reading time: 2 minutes]
Earlier this week, there was a major failure at one of Dublin’s main train stations. It caused knock-on effects across most of the country’s rail network. A spokesperson for Irish Rail stated that the issue was caused by a failure in ‘the whole signalling system’.
When a major system fails in any organisation, it can cause mayhem.
This is why we have Business Continuity Plans (BCPs).
BCPs set out how (or if?) we will continue to operate if a particular event occurs.
Going through an exercise to document a BCP can be enlightening.
It gives an objective view of which events would be manageable and which would be catastrophic.
When the impact of a catastrophic event is recognised, we are more likely to work hard to reduce the likelihood of such an event occurring.
For example:
- We may finally upgrade a system to reduce the risk of the old version and its old hardware failing.
- We may move away from a third party that runs a key process for us but appears to be in a precarious financial state.
I’ve helped businesses to consider their BCPs or, at an IT system level, their Disaster Recovery (DR) plans.
What’s interesting is how the scale of a system or complexity of a process influences the complexity of the DR / BCP plan.
The more functions a system or third party performs, the harder it is to work out how the business can continue to operate when that system or third party fails.
There is usually a linear relationship between the scale of a system or process and the complexity of its DR plan.
The fewer things a system or third party does, the easier it is for a business to recover from a failure with that system or third party.
Smaller systems. Smaller risks.