Attribution is complicated – targets can't see contribution

Published by Tony Quinlan

I’m spending time this afternoon at the launch of the UK office of the International Initiative for Impact Evaluation (3ie). While it’s a subject I’m keenly interested in, it’s not one in which I’ve got depth of understanding. I struggle sometimes to understand some of the conversations, but many of the issues around things like monitoring and evaluation have distinct similarities with other sectors – like organisational culture, education policy, community coherence/cohesion – that I do understand and work in.

A couple of weeks ago, I was down at the University of Sussex at the Institute of Development Studies for an excellent event called The Big Push Back. The issue at hand was the need and drive for targets/measures/evaluation of projects – and the fact that those same targets tend to be over-simplistic.

How many children inoculated, how many wells dug, etc – metrics that are simple for donors and administrators/managers to understand. The number is either a) bigger, b) smaller or c) the same.

The consensus the other week – and to some degree today – is that these targets may satisfy donors, but don’t meet the agencies’ own needs:

  • What they want to measure is the impact of programmes, but this has proved too problematic – the targets were often introduced instead as “something we can measure”.
  • Targets often drive behaviours that distract from (or at worst contradict) the real impact that is desired. (If you’ve got a target of, for instance, children to be inoculated, the encouraged behaviour is to keep inoculating, regardless of need or appropriateness.) Building a well is a measurable piece of a programme, but may disrupt the village economy – there may have been a slightly distant water source, with villagers earning small amounts by delivering the water to houses. Result: one well built. Not necessarily used (some villagers will stick with their “local suppliers”) and potentially damaging to the fabric of the village.
  • The targets carry no context with them – potentially leading to techniques perceived as successful in one area being used to design processes for universal use. But the success may have been due to a specific factor in the first programme. An increase in school attendance may be assumed to be a consequence of a new UN education programme, but may coincide with a poor crop that did not require children for harvest.
  • Targets – and donors – want to determine attribution: “Point to the bit you did”. But the truth is that these are complex situations – many things make a contribution, but rarely can a single intervention, factor or programme be attributed with making the difference.

[While these descriptions are all of the development world, exactly the same applies in any organisation applying targets to complex situations.]

Targets are often singled out as the problem. Yet that’s not really true: there are places where targets may be useful and appropriate. The real problem is that they’ve been applied beyond the limits of where they work. And the application of targets is a symptom of complicated thinking, rather than complex thinking.

At these events, there is an increasing groundswell driving and exploring new ways of evaluating programmes that work better for all parties. One of the clear applications we’re working on with SenseMaker is a new impact measurement system that collects – quickly and easily – large volumes of narrative and fragmentary qualitative data from all participants in a programme: from the recipients to the field workers, the country office, the international agency and, ultimately, the donor. (A rough sketch of what such a record might look like follows the list below.)

The indications are that it offers significantly better results:

  • Gives voice to the recipients, without allowing “experts” to re-interpret their meaning
  • Allows soft impacts to be measured – without resorting to direct questionnaires that invite false reporting
  • Carries context throughout the process – interesting results are underpinned by stories that explain what happened, making it easier to determine why a programme succeeded or failed and reducing the risk of poor assumptions
  • Demonstrates the differing perspectives of people at each level of the programme.
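To make the idea concrete, here is a minimal sketch of what one record in such a system might look like, and of how the differing perspectives could be surfaced from it. SenseMaker itself is a proprietary tool, so the field names, the 0–1 signifier scale and the aggregation below are my own illustrative assumptions, not its actual design.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical record for one self-signified story fragment. The field
# names and the 0-1 signifier scale are illustrative assumptions on my
# part, not SenseMaker's actual data model.
@dataclass
class Fragment:
    story: str                 # the respondent's own words, kept verbatim
    role: str                  # e.g. "recipient", "field worker", "donor"
    signifiers: dict[str, float] = field(default_factory=dict)

def perspectives_by_role(fragments: list[Fragment], signifier: str) -> dict:
    """Average one signifier per respondent role, keeping the underlying
    stories attached so the context travels with the number."""
    grouped: dict[str, list[Fragment]] = defaultdict(list)
    for f in fragments:
        if signifier in f.signifiers:
            grouped[f.role].append(f)
    return {
        role: {
            "mean": sum(f.signifiers[signifier] for f in fs) / len(fs),
            "stories": [f.story for f in fs],  # the evidence behind the score
        }
        for role, fs in grouped.items()
    }

# Toy example: the same well programme looks different from each level.
fragments = [
    Fragment("The well saves us an hour's walk a day.", "recipient", {"benefit": 0.9}),
    Fragment("The water carriers have lost their income.", "recipient", {"benefit": 0.2}),
    Fragment("Well completed on time and on budget.", "field worker", {"benefit": 0.8}),
]
print(perspectives_by_role(fragments, "benefit"))
```

Even in this toy form, the key property is visible: the number never travels without the stories that produced it.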

There are other approaches also being trialled at the moment – and one of the exciting things about all of this is the opportunity to see and participate in the evolution of the next generation of M&E tools.