Learning why and how reform works will improve UK aid


11 August 2014.
By Leni Wild.
Another week, another review of the UK’s Department for International Development (DFID): yet, amongst the scrutiny, not enough attention is paid to how DFID can learn why certain approaches work better than others.
Last week, another inquiry into DFID’s internal workings was published, this time by the Cabinet Office and DFID’s Evidence into Action team. It is the latest in a series that includes the Independent Commission for Aid Impact (ICAI)’s evaluation of how DFID learns, an internal review of DFID’s programme management incentives, capabilities and processes, and an inquiry, still underway, by the International Development Select Committee into the future of the UK’s approach to development.
There is also a growing body of – often DFID-funded – external research into DFID’s policy and programming. Recent highlights for me have been work by the Effective States and Inclusive Development research consortium on administrative and managerial barriers to the uptake of political economy thinking, and ODI’s work on the politics of security and justice programming.
These reviews have reached mixed conclusions and, in some cases, contradict each other: where ICAI criticises DFID’s use of evidence – giving the agency an ‘amber red’ (a relatively poor marking) – last week’s review found that DFID exemplifies good practice in its use of evidence in policy making.
Common themes do emerge: DFID needs to learn more from its actions, both successes and failures, through greater trial and error, adaptation, experimentation and iteration; leadership matters, as does a working culture that takes feedback and impact seriously; and internal incentives, such as disbursement pressure and high staff turnover, limit DFID’s capacity to learn from what works and what does not.
All of this makes clear the importance of understanding DFID’s internal workings, and not just how well aid recipients spend money; that understanding will, in turn, help us see how UK aid can meet its aims.
Indeed, aid agencies have long been described as facing ‘broken feedback loops’, in that those who benefit from international aid are not the same as those who fund it, as explored by Elinor Ostrom and colleagues in their analysis of Sweden’s SIDA. Ostrom’s argument – that to correct this, agencies need to build greater individual and collective learning on effectiveness and failure – also applies to DFID.
A key problem with DFID’s internal workings – as these reviews highlight, and as Ostrom found in the case of SIDA – is that knowledge acquired from DFID’s own work is not harnessed in making current or future programming decisions. While discussion of this point has focused largely on which types of evidence matter most (particularly experimental approaches and impact evaluations), less attention has been paid to the context in which things work – and to DFID’s capacity to understand and learn why certain approaches work better than others.
Recent research conducted by ODI in the Philippines illustrates this: a reform of what was known as the ‘sin tax’, restructuring excise taxes on alcohol and tobacco, yielded approximately US$1.18 billion in 2013, close to 80% of which was earmarked for health-care subsidies for the poorest.
However, the takeaway shouldn’t be that the ‘sin tax’ could or should be replicated elsewhere – or, indeed, that such a reform will necessarily lead to pro-poor revenue raising. Rather, what matters most is the underlying explanation of why the reform happened. In this case, it involved a network of players inside and outside government, drawn together by effective ‘development entrepreneurship’ from The Asia Foundation, in a coalition that could push through real change. It required long-term and flexible funding of reformers, with space for trial and adaptation.
This example, showing why and how reform is possible and what the implications for policymakers might be, can enable learning across sectors and cadres too – economists, governance advisers, health specialists and others can all share these insights and apply them to their own work.
Turning the spotlight onto ‘what works’ therefore really matters. But if DFID is to improve its ability to learn, more focus is needed on strengthening feedback loops and on learning why something works or not. This will require both better use of evidence and incentives to act on that evidence, including in programme design.
Leni Wild is a Research Fellow in Politics and Governance at ODI.