First, do no harm…

Published by Tony Quinlan

In recent months, I’ve been lucky enough to work with a variety of people dealing with problems and situations that are highly complex. In these environments, it’s not possible to boil issues down to a root cause and plan an ideal solution. (Something that jumped to mind this morning as I got annoyed listening to the Today Programme on BBC Radio 4 – an interviewer responding to an interviewee talking about the difficulties of a situation: “that’s hardly the ideal solution”. The very concept of an “ideal solution” is hopelessly misguided in complex situations – overly simplistic, and damaging in its insistence that people produce neat answers to messy real-world problems.)

The real solution is to come up with multiple actions and activities within the complex environment that allow us to learn about what is going on – and to see elements that we can take advantage of. Essentially, it’s an “exploration” of what happens when we interact with the situation.

I always recommend that people use whatever language gives them the greatest leeway in their organisation – for some it’s “pilot”, for others “probe” or “experiment”. The emphasis shifts away from “fail-safe plans” towards “safe-to-fail probes” – to use Dave Snowden’s language.

I have noticed, however, that there’s often a strong reaction to that language: “safe-to-fail.” The responses are usually “we can’t fail” or “failure is never safe”.

I get that – I understand people’s reaction to that language – but it does get in the way of the concept. If you can’t be seen to fail, then you cannot, by definition, do anything new and untested, let alone be truly innovative.

The truth is that you can be safe-to-fail; in fact, you have to be. The key is usually to design actions/probes/pilots that are small enough and tangential enough (cf my post on Obliquity) that if/when they fail, they do no harm.

It’s actually remarkably easy, if you’ve got diverse perspectives working together on the problem and you’re working at a granular enough level.

For example, I sat watching a group looking at intervening in a natural disaster situation – extremely effective, intelligent people coming up with interventions that would be “safe”: funding local villages to find their own water (on the basis that they might know about natural sources of water, but lacked the wherewithal to transport it). All their action plans were at a fairly high level.

I resisted getting involved for as long as I could, letting them work through the problem on their own, but it became apparent that a lack of diversity was trapping them in their standard models. So I made a different suggestion: how about putting three simple wooden noticeboards in the centre of each village, along with a camera and a printer – one noticeboard for pictures of the missing, one for pictures and contact details of people who were still alive, and one for those confirmed dead?

Immediately, they shifted perspective – this was safe-to-fail, but at a far more local level than they’d envisaged. And they recognised that a) it was safe – what harm could it do?; b) if it succeeded it had a number of benefits, acting as a “honeypot” to draw local villagers to a place where they could be engaged; and c) the resources required were tiny. It would also give them a useful monitoring tool for the situation – no great numbers, but a sense of what was happening.

The upshot is that, for all the fear of the “f-word”, safe-to-fail probes are relatively simple to produce:

  1. Do no harm
  2. Make them low-resource
  3. Make them situation-relevant (much easier if you have narrative material coming in)
  4. Monitor them
  5. Increase the diversity of people looking at the problem