Catching up – Learning From Incidents slides

Published by Tony Quinlan

It’s been an “interesting” few weeks at breakneck pace, so I’m a bit behind in putting things up.  So, catching up bit by bit – here are the slides from the ESRC Learning From Incidents seminar at Southampton University on 15th October, along with some reflections from the morning.

Download 002 Learning from Incidents – Using SenseMaker®

I had to rush away after lunch – missing some sessions that afternoon that looked particularly interesting.  But I was struck by a couple of comments in the morning.

Firstly, that the toughest element of preventing incidents (by which I mean power station problems and significant events, not minor health and safety incidents) boils down to human beings and their biases and distractions.  Again, the piece that strikes me is one I’ve talked about in other environments, like financial services and national security – expecting people to report things that are out of the ordinary is a dangerous approach, doomed to failure.

People don’t notice things – and in organisations, a gradual shift (in attitudes or dispositions in particular) is even more difficult to spot. And the personal investment required of a whistleblower is too high to rely on (a separate post on alternatives to whistleblowing is long overdue).

Measuring compliance with some procedure or set of rules – the usual solution – is also a “lag indicator”, i.e. it identifies problems after they’ve happened and become significant enough to notice.  Measuring underlying dispositions instead is a “lead indicator” – it picks up shifts before they manifest in the dangerous actions we’re looking to avoid.

The second comment, made by Professor Vaughan Pomeroy in passing, was easy to miss amid so much interesting material, but it strikes at the heart of an issue that’s often overlooked.

Mention was made of “exercises” in which people were put through scenarios designed to resemble potential problems (meltdowns, crashes, etc).  Much of the emphasis was on how to make them more lifelike – more likely to generate the adrenaline and cognitive problems of a real situation.  But Prof Pomeroy commented that after such enactments the response was likely to be complacency, or over-confidence in the team’s ability to manage such situations.  (I paraphrase poorly – I’ve waited too long to write this blog.)

Exercises – whether the simulated crises many organisations run (including the hostile media interviews I remember setting up in the past for senior colleagues) or military war-room exercises (and their in-field equivalents) – tend to push participants only to the point of success, never failure.

Far healthier to push people past that point, so they realise that full solutions are not always possible and partial failure may be inevitable. Experiencing failure – something Cognitive Edge have worked on with senior civil servants in Singapore – is a better route to increasing the likelihood of detecting problems early and adapting to recover.

(This whole subject also needs to include a resilience vs robustness debate, but there’s only so much I’m prepared to throw into a single blog post…)

And for those who understand the “red-shirt” reference I made in the presentation slides, we are, of course, talking about the Kobayashi Maru or the Bridge Officer’s Test.


For those at the event in Southampton – the following might also be of interest: a more academic paper on the narrative research approach I was describing.

Download 100816 Narrative-Research_Snowden FINAL