It seems that impact is everywhere at the moment. Is our research having impact? Is our teaching having impact? Are conservation actions having the intended impact? These are very important questions, but they are also very difficult to answer, particularly when the things we hope to affect are complex processes that are terribly difficult to untangle (the policy making process, the career development of alumni, or the trajectories of socioecological systems).
Last week Professor Paul Ferraro from Johns Hopkins visited Cambridge as the Humanitas Visiting Professor in Sustainability Studies. Paul is a world leader in analysing the impact of conservation interventions, and a strong advocate of quantitative analysis that properly accounts for things like selection bias in treatments (e.g. the location of protected areas is non-random and this should be considered in impact evaluation). He is in the vanguard of a growing industry of people doing similar work, which increasingly fills the pages of the conservation science journals. To me, this research is fascinating and provides important insights that often challenge widely held assumptions about things like the relationship between protected areas and poverty.
Watching Paul’s lectures, I was struck by the fact that although the methods he uses are completely different to those generally favoured by political ecologists, the answers provided by the two approaches can be remarkably similar. For example, Paul’s first lecture (or at least the part I saw before going home to the kids) focused on the weakness of assumptions that are made about why people will adopt and use technical interventions such as cook stoves or water-efficient shower heads. He argued that these interventions are designed in the lab or agricultural field station using an ‘engineering approach’, but rarely achieve the promised environmental benefits when rolled out because under real world conditions people simply don’t behave as expected. For instance, people might take longer showers because they feel less guilty, or respond through any number of other mechanisms. In some cases, he showed that the interventions actually have precisely the opposite of the intended effect, for example where water-efficient irrigation technologies lead to increased water use as farmers increase their acreage and switch to more thirsty crops.
There is a significant body of political ecology research (or anthropology, sociology, or whatever other label you wish to use) that is also interested in why technical interventions fail. To give one example, Paige West’s research with the Gimi people in Papua New Guinea shows how an Integrated Conservation and Development project that intended to incentivise conservation through economic value creation failed because it did not take account of the way in which the Gimi valued and understood the “forest to be part of a series of transactive dialectical relationships that work to produce identity and space”. For the Gimi the forest was the home of their ancestors and central to their worldview, and not something that could be commodified for market exchange.
Paul Ferraro’s methods are quantitative and experimental, whereas Paige West’s are qualitative and ethnographic. These approaches build from very different epistemological starting points and are often seen as incompatible. Yet despite these profound differences, both arrive at similar answers to the big question of why technically designed interventions fail: that is, they fail because they don’t take proper account of how people think and behave. I find this commonality striking and encouraging. Striking because arriving at similar answers with different methods provides some confidence in the strength of the conclusions, and encouraging because it suggests more scope for mutually valuable collaboration and exchange of ideas across disciplines that sometimes feel separated by an unbridgeable chasm.
This is very nice, but it still leaves an important question to think about. Is one methodological approach always ‘better’ than the other (one size fits all), or are there particular questions that are better tackled with particular methods, or combinations of methods (horses for courses)? If one size fits all, we might think that Paige West’s work on Gimi cosmovisions provides a useful starting point just waiting to be confirmed by large N behavioural science research, or that Paul Ferraro’s work on the uptake of shower heads needs validation by ethnographic participant observation of people in the shower (which would make for an interesting ethics application!).
To me this makes no sense – in fact different methods work better for different questions and contexts – horses for courses. Large N studies based on randomised controlled trials or similar methods are fantastic for understanding uptake of a discrete technology by a large population of people who could be exposed to it in an experimental manner. However, there are many other kinds of question where a more qualitative or non-experimental approach is more suitable – for example for understanding the impact of big policy changes where randomisation or large sample sizes are impossible (there is only one global economy…), or the detailed workings of power within a particular institution. In many cases a combination of different approaches – mixed methods – is the way to go, providing that this does not end up as more than one method done badly.
One key question that I see as best tackled with the theory and methods of political ecology is why technically framed ‘interventions’ that are unlikely to work keep getting implemented in conservation in the first place. To answer this question we need to move beyond the assumption that those implementing the policy really do expect it to work and are just ignorant of the realities of human behaviour. Instead, we need to think about the political economy that surrounds decision making about which actions to pursue. What do the key stakeholders (donors, voters, self-interested politicians) want? What other objectives (national pride, votes in the next election, donor satisfaction) might those making decisions have in mind besides conservation outcomes? And so on.
Tania Li, a social anthropologist who has investigated such questions in Indonesia, has suggested that deeply political projects and policies are often ‘rendered technical’ (i.e. presented as nothing more than technical – Ferraro’s ‘engineering approach’) because doing so frames them as a matter for experts rather than a matter for public deliberation. This privileges the role of elite technicians and politicians and closes off the space for the engagement of non-experts. Insofar as this is a deliberate act, it can be seen as an example of what James Ferguson calls antipolitics – that is, ‘the political act of doing away with politics’. From this perspective a conservation project that seems a failure in terms of the publicly stated objectives might be re-interpreted as a success once we have a better idea of what those who designed it really wanted or were responding to. These are important ideas that help to explain why even the very best evidence about the impacts of conservation interventions can fail to influence future decisions. I find it hard to believe that such insights could be arrived at through quantitative experimentation, but I’m open to hearing from anyone who disagrees.
My overall sense following Paul Ferraro’s visit is that the growing impact evaluation industry is getting very good at answering certain important questions in conservation. The answers should be taken seriously, and can be surprisingly similar to those found with the very different methods of political ecology and other cognate social science disciplines. However, the impact evaluation approach is less suited to answering questions about the social and political conditions from which particular ideas about ‘interventions’ and their implementation emerge; questions that are, in my view, as important as the impact work. I see great potential for interdisciplinary collaboration between those addressing these different questions, but it isn’t easy, for all the reasons that I and many others have written about. I hope that those in our different fields will be able to take up the challenge to work together. If not, I worry that as the quantitative impact evaluation movement gathers momentum it may crowd out important, alternative, and potentially complementary ways of thinking about where conservation actions come from and the impacts they have.