Now that we have pretty much finalised the design phase deliverables, I wanted to share two thoughts on monitoring and evaluation that could be relevant to other projects.
Impact and outcome indicators
Over the last few years, I have watched the development of the DCED Standard for measuring and reporting results for PSD programmes quite closely. A couple of years ago, I had serious misgivings about the way that applying the Standard could distort projects, because of the requirement to measure attributable results against three "Universal Impact Indicators" (outreach, jobs and income). First, it was methodologically impossible (despite some heroic arithmetic and some very grand claims made at the time) to isolate the poverty reduction impact of an individual project in this way; second, there were any number of legitimate interventions that did not fit any of these "universal" indicators.
It is particularly pleasing to see that the 2010 version of the Standard has dropped much of the previously required prescriptive approach to calculating impact, and most importantly, it is good to see that the Universal Impact Indicators are now in effect "Optional Outcome Indicators". While still using the same terminology, the Standard acknowledges that different programmes will want to use different indicators, and it also says that the indicators relate to the enterprise (outcome) rather than the household (impact) level (precisely because of the methodological issues mentioned above).
It is also very encouraging to hear some of the people most closely associated with the Standard referring to it as being "more about systems than numbers". It has always been pretty solid on systems (particularly the use of results chains), but it fell apart when it came to the numbers. If the thinking behind the Standard really is heading in this direction, then this is very good news.
For ABIF, we will be using enterprise outreach and enterprise income as two of our outcome indicators, but we have also added other indicators for qualitative and value-for-money related interventions. Based on the market analysis work that we have done, we anticipate that a fair proportion of our interventions could be non-income-related, so these other indicators will capture the related outcomes.
We are not planning to quantify ABIF's contribution to changes in household incomes at the impact level. Instead, we will use a combination of secondary data, small-scale surveys and case studies to link the outcome-level results with the observed change at the impact level. This seems to me to be the most convincing, methodologically sound and cost-effective way of describing the poverty reduction impact of our project.
Because of the diversity of investment projects, apart from knowing the total number of end-users, we won't have simple headline figures that can be aggregated for the whole project. But we will have a solid story to tell about our achievements, based on an M&E system that is both workable and flexible enough to cope with the range of interventions we are likely to undertake. It also means that, unlike some of the numbers produced in the past (and still repeated to this day), we will have hard evidence to back up the results we report.
Baselines
One of my other serious concerns with past approaches to M&E is the way that baseline and subsequent impact data collection was managed. It always struck me that, for any intervention, trying to isolate the change in a given indicator by comparing the situation before the intervention with the situation after it was fraught with problems (mostly to do with uncontrollable external factors, contaminated samples and so on). It also seemed like a lot of unnecessary work to end up with a result that was, at best, broadly indicative of some change.
For ABIF, we plan to use a different approach that will be cheaper and easier but produce results that are just as good. The idea is simply to use a simultaneous baseline, comparing users of a product or service with non-users. It could be argued that our users will be self-selecting, but that would be true of any market-based intervention. What we will be able to say, though, is that in identical circumstances (same weather, same third-party interventions and so on), the users of the product or service experienced (or did not experience) a benefit that can be quantified by comparing them with non-users. So rather than doing "before and after" surveys, we will do "users and non-users".
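To make the "users and non-users" idea concrete, here is a minimal sketch of the kind of comparison it implies. All figures, group sizes and the revenue indicator are entirely hypothetical, invented for illustration only; they are not ABIF data.

```python
# Hypothetical "users vs non-users" comparison, collected at the same
# time from both groups (the "simultaneous baseline" idea).
from statistics import mean

# Invented seasonal revenue figures (local currency) from a small survey.
users = [520, 610, 480, 700, 655, 590, 630, 545]       # used the service
non_users = [450, 500, 430, 520, 475, 460, 505, 440]   # did not

# The quantity of interest is the gap between the two groups,
# not a before/after change within one group.
diff = mean(users) - mean(non_users)
pct = 100 * diff / mean(non_users)

print(f"Users mean:     {mean(users):.1f}")
print(f"Non-users mean: {mean(non_users):.1f}")
print(f"Difference:     {diff:.1f} ({pct:.0f}% higher for users)")
```

In practice the survey would also record enough context to confirm the two groups really do face "identical circumstances", and a larger sample would allow a significance test on the difference, but the core of the approach is just this side-by-side comparison.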