Maybe it’s a result of living in Brexit Britain, or of watching America, and feeling battered by prolonged exposure to polarisation. But I felt a similar weariness looking at my (admittedly nerdy) Twitter feed and favoured blogs after the announcement last week that Esther Duflo, Abhijit Banerjee and Michael Kremer had won the Nobel Prize for Economics.
Duflo et al. are what are sometimes referred to as ‘randomistas’: proponents of randomised controlled trials, or RCTs – experiments that test an intervention (or several interventions) against a ‘control’ group that receives no intervention. Their work at the Jameel Poverty Action Lab (J-PAL) at MIT brought them to prominence in the 2000s, with RCTs in developing countries bringing to light all sorts of interesting findings about what worked and what didn’t on issues from teacher absenteeism to vaccination.
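For readers unfamiliar with the mechanics, the core logic of an RCT – randomly assign people to treatment or control, then compare average outcomes – can be sketched in a few lines. This is a toy simulation with entirely made-up numbers (a hypothetical tutoring programme with an assumed true effect of five test-score points), not anything drawn from the laureates’ studies:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical example: 1,000 pupils, baseline scores around 50 (sd 10),
# and an assumed true effect of +5 points for those who get tutoring.
n = 1000
true_effect = 5.0

# The key step: random assignment. On average, randomisation balances
# both observed and unobserved differences across the two groups.
assignment = [random.random() < 0.5 for _ in range(n)]
outcomes = [
    random.gauss(50, 10) + (true_effect if treated else 0.0)
    for treated in assignment
]

treated = [y for y, t in zip(outcomes, assignment) if t]
control = [y for y, t in zip(outcomes, assignment) if not t]

# Because assignment was random, the simple difference in mean outcomes
# is an unbiased estimate of the intervention's average effect.
estimate = statistics.mean(treated) - statistics.mean(control)
print(f"estimated effect: {estimate:.2f} points (true effect: {true_effect})")
```

With a sample this size the estimate lands close to the assumed true effect; the point of the sketch is simply that randomisation, not statistical wizardry, is what licenses the causal comparison.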
The award of the Nobel prize has caused the resurrection of what the evaluator Michael Quinn Patton calls ‘the zombie’ of paradigm wars: between those who favour quantitative data, ‘hard’ measurements, statistics and generalisations to explain a linear, mechanical world, and those who favour ‘soft’ qualitative data, stories, and context-specificity to explain a diverse and complex world. There are some historical heavyweights supporting each side. The ‘quants’ have Galileo in their corner: “Measure what is measurable, and make measurable what is not so”. And the ‘quals’ have Einstein: “everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted”.
The argument about who is right is a zombie, Patton says, because although the intellectual debate sometimes runs out of steam, it never seems to die. Most evaluators and researchers have reached the sensible conclusion that each method has its place and often a combination of methods is best.
The thing is, while an RCT is just what you want for a specific question like ‘does a new drug work better than an old one, or than no drug at all?’, some things simply can’t be tested experimentally – like which is the best place for an airport, or which approach to advocating for a single policy reform by a particular government works best. And although sometimes you want to evaluate how much of a change you can ‘attribute’ to an intervention, and an RCT can do that, at other times you really want an understanding of the ‘how’, ‘why’ and ‘for whom’ questions, which are best answered with other methods. And one good RCT result doesn’t mean that what worked here will work over there, in a different context.
Similarly, any type of research can be done – and used – well or badly. The ethics of randomisation needs to be approached with great care, and there are embarrassing examples of RCTs that were not thought through properly to avoid harm. And when your approach can be framed as “experimenting on the poor”, you are obliged to work to particularly high standards of ethics and integrity. But every method has shortcomings and flaws, as well as strengths.
Personally, I really enjoyed reading Banerjee and Duflo’s book “Poor Economics”. I liked how they gave both hard numbers and clear explanations of the stories behind those numbers, suggesting that they also had the sorts of conversations with people that are the hallmark of qualitative research. At Save the Children we use all sorts of methods. Occasionally we do RCTs. They’re tricky beasts to get right in real-world situations, and often expensive, but they can be really enlightening. One of my favourite research experiences was working in Zimbabwe with a statistician, using Save the Children’s Household Economy Analysis data to make more sense of a national household survey data set, in order to model and target emergency food needs. And routinely at Save the Children we report children’s own voices and experiences, which provide some of the most powerful evidence. Researchers need a toolbox of methods, and there are resources available to help guide your choice of the appropriate tools for the job at hand.
Will RCTs and experiments answer all the questions about poverty in the world? Definitely not, and they should never claim to. Is that a reason to criticise the Nobel Prize winners? I don’t think so; their work is important, if bounded. I don’t remember anyone criticising Nelson Mandela for not solving conflict in the Middle East when he got the Peace Prize. Can we put the zombie away again, please?