Last week I had the pleasure of attending a conference on M4P approaches to development organised by the M4P Hub in Brighton, UK. It was interesting to contrast the way that donors and development project implementers talk about their work with the arguments that Tim Harford puts forward. Three things in particular struck me from the conference:
- Donors continue to demand that projects calculate very narrowly defined attributable impact to demonstrate the success of M4P projects, yet without explaining how such results can credibly be calculated;
- Judging by the case studies presented during the conference, every M4P project to date has been a success; and
- The majority of project case studies were descriptive; there was very little analysis of why things had worked out as they did.
So, some 10 years after M4P began to emerge, far from being armed with a credible results assessment methodology and willing and able to talk about learning from failure as well as success, it appears that:
- We still have no way of knowing what impact the majority of M4P interventions have had (because often we have been measuring the wrong things using the wrong tools); and
- We don't have an atmosphere that encourages analytical thinking and a systematic approach to putting the success stories we hear into a wider context of experimentation and failure.
I hope that in due course we will have our fair share of success stories and that we will be able to talk about our impact in a meaningful way. However, this will mean looking at some new monitoring and evaluation tools and really trying to understand more about the context of our interventions and the evidence of results that we can gather. It will also require a willingness to report failures so that we and others can learn from our experiences.
It is obvious from the discussions I heard in Brighton that there is still some way to go before there is broad consensus on what M4P projects can measure with any certainty, despite the repeated (and somewhat tired) insistence by donors that projects use rigorous approaches ("start with the results you want and then work on the methodology", as if there is a credible methodology to produce any results that may be required). You would have thought that, given all the attention this subject has received, if it were all so simple somebody would have come up with a workable answer by now! Nevertheless, there clearly is room to improve on the current situation, and we will do our bit to make a positive contribution to the process.