Is Accountability of Aid the same as Countability of Aid?
Last week the Belgian Ambassador to the UK held a “salon” at his residence, on the subject of aid accountability and measurement, and a big focus was on Results. The three speakers were Paul Collier, Professor of Development Economics, Oxford University; Stefan Dercon, Chief Economist at DFID and Professor of Development Economics, Oxford University; and Marcus Leroy, Senior Advisor to the Belgian Minister of Development Cooperation.
It was particularly interesting to hear the views of Prof Dercon, who plays a leading role at DFID in the process of analyzing the business case made for all major new projects. A few highlights from the presentations and discussion:
- In the past, spurious “bad evidence” was often used to justify big DFID projects. Taxpayers were also wrongly treated as fools who could not be expected to understand how their taxes were being spent overseas. So more rigorous and transparent evaluation of proposed DFID projects responds to a genuine problem. Nevertheless, there is a real risk that oversimplification will lead to the rejection of good projects which make an important difference that is hard to measure or count.
- We are entering the Second Age of the Development Economist, who is recapturing the space s/he lost to political and social scientists over the past fifteen years. We are also seeing the resuscitation of the lost art of cost-benefit analysis (CBA), which had fallen out of favour even among many economists. This is not necessarily a bad thing, as long as we keep in mind that for development projects CBA is a reasonable way to compare alternatives, but not suitable as an absolute evaluation tool. Prof Dercon spoke of the need to provide “five or six alternatives” against which to compare the proposal.
- There is a real danger of bias in the system of cost-benefit analyses and business cases for DFID projects, because staff will increasingly send up proposals for approval which are overly linear, have very obvious and countable outcomes, and are based on what has been done elsewhere: a combination of simplicity-bias and risk-aversion. Prof Dercon acknowledged this, but claimed it would be counteracted to some extent by the use of average predicted outcomes when evaluating proposed projects. (I think he meant that they ignore the worst and best cases and use the mean case to evaluate the benefits of proposals; see the sketch just after this list.)
- … Not necessarily linked to this, but DFID is apparently undertaking a review of its capacity to innovate, which might mitigate the risk-aversion bias to some extent. Meanwhile, the Research and Evidence Department at DFID has apparently grown from 12 to 240 staff – indicative, perhaps, of the shift in thinking from faith-based to evidence-based programming.
- The point was made by all speakers that aid as a sector has a ridiculous approach to risk. E.g. if you believe the official story, everything works, and very little money is lost or wasted. That approach would never work in business: there’d be no entrepreneurs and no creativity. Or is it, as Marcus Leroy said, that there’s a “truth aversion” rather than a risk-aversion…?
- Dercon also accepted that in some cases, such as work on reconciliation in fragile contexts, where you simply can’t quantify your expected outcomes or produce five or six alternatives to compare them against, you have to be very clear about your theory of change and evaluate the business case on that basis. But even in such cases you still need to cite data from places where similar programming has worked. (Which might be seen as a strong discouragement from taking the context as your starting point, as the OECD-DAC rightly suggests aid agencies ought to do.)
- What is the cost-benefit of doing cost-benefit analysis? One DFID-funded implementing agency represented at the meeting estimates it has spent 25% of its staff effort on one project over the past year fiddling around with this issue and revising and re-re-revising its logframe (which has over 200 indicators!).
- The point was made that it’s ironic that DFID is pushing for its projects to be justified with a business case, when it has made an entirely political decision to increase its budget by 30% over the next two years. Paul Collier, in his reply, gave the best justification I have yet heard for the 0.7% of GNI figure. As I understand it, it goes like this: ODA is a global public good, and is therefore almost by definition under-supplied. It therefore makes sense for a progressive country (the UK) to commit to supplying it at a higher rate than others, as a way of counteracting that undersupply. Quite nifty, I thought.
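Here is a minimal sketch, in Python and with entirely hypothetical numbers, of what I took the “average predicted outcomes” point to mean: judge a proposal on the mean of its predicted scenarios rather than on its best or worst case. The scenario labels and figures are my own illustration, not DFID’s actual method.

```python
# Minimal sketch (hypothetical figures): evaluate a proposal on the mean of
# its predicted outcomes rather than on the best or worst case alone.

# Predicted net benefits (GBP millions) under three scenarios for one proposal.
scenarios = {"worst": 2.0, "central": 8.0, "best": 20.0}

# Judging on the best case alone rewards over-optimism;
# judging on the worst case alone rewards excessive caution.
mean_benefit = sum(scenarios.values()) / len(scenarios)  # 10.0

cost = 6.0
print(f"Mean predicted benefit {mean_benefit:.1f} vs cost {cost:.1f}: "
      f"benefit-cost ratio {mean_benefit / cost:.2f}")
```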
Interesting stuff. It’s certainly right that the UK’s aid money should be spent transparently and with a view to making a difference according to a clearly articulated theory of change. But getting the balance right between what’s most easily measured now and what makes the most important difference in people’s lives over the long term will be hard.
As mentioned above, CBA is a decent enough method of evaluating alternative uses of capital. So, for example, it helps determine whether a given sum of money is best used on project A or project B. But what it does not do is provide the information you need to determine whether it is the most appropriate action to take in a given context. For that, you still need rigorous and creative context analysis which asks how people can build a sustainably peaceful and equitably prosperous society, whether there is a role for outside agencies, and if so, what that role should be. That is a very different problem from the much simpler one of whether we should spend our money on this or that.
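To make that distinction concrete, here is a minimal sketch with invented cash flows and a simple net present value calculation; the project profiles and discount rate are my own assumptions, purely for illustration. It captures the A-versus-B comparison that CBA handles well, while the harder question of whether either project is appropriate for this context never enters the calculation.

```python
# Minimal sketch (invented figures): CBA as a comparison tool. It can rank
# project A against project B for the same budget, but it cannot say whether
# either is the right intervention to attempt in a given context.

def npv(cash_flows, discount_rate=0.05):
    """Net present value of a series of annual net benefits (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Year-by-year net benefits (GBP millions) for two hypothetical projects.
project_a = [-10, 3, 4, 5, 6]   # slower, larger returns later on
project_b = [-10, 6, 5, 3, 1]   # quicker, smaller returns

for name, flows in (("A", project_a), ("B", project_b)):
    print(f"Project {name}: NPV = {npv(flows):.2f}")
# Under these assumptions A comes out ahead of B; nothing in the arithmetic
# tells us whether either project is what this context actually needs.
```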
This is not to say that DFID, staffed by highly competent individuals and teams, ignores the need to invest in solid project design. But Andrew Natsios had it right when he said that the most easily measurable projects are the least transformative, and vice versa. We must take care not to let institutional incentives make the staff of organisations like DFID too risk-averse and less able to work in the fragile and conflict-affected contexts they are committed to helping. Accountability does not equal Countability.
Thanks for sharing these thoughts. Over the years as a practitioner in the development industry, I’ve witnessed the sector as a whole demonstrate an increasing desperation to “know” what is often inherently beyond logic and induction.
It is certainly time to examine our belief that there are technocratic, precise ways of measuring progress in order to make consequential judgments based on these measures. The increasing obsession with abstract metrics and experimental design, stemming from a reductive, managerial approach in development, is quite far from the intimate, difficult, and complex factors at play at the grassroots level.
As someone who has worked extensively with grassroots organizations and “implementing partners” in Africa, I can say that imposing expectations to evaluate every single intervention on people who are in the process of organizing at the local level is most certainly a drain on their time and scarce resources. The business sector seems to have a healthier relationship with risk in its for-profit endeavours, something we may need to explore in the development sector.
My hope is that the dominance of quantitative statistical information as the sole, authoritative source of knowledge can be challenged so that we embrace much richer ways of thinking about development and of assessing the realities of what is happening closer to the ground.