It seems that impact is everywhere at the moment. Is our research having impact? Is our teaching having impact? Are conservation actions having the intended impact? These are very important questions, but they are also very difficult to answer, particularly when the things we hope to affect are complex processes that are terribly difficult to untangle (the policy making process, the career development of alumni, or the trajectories of socioecological systems).
Last week Professor Paul Ferraro from Johns Hopkins visited Cambridge as the Humanitas Visiting Professor in Sustainability Studies. Paul is a world leader in analysing the impact of conservation interventions, and a strong advocate of quantitative analysis that properly accounts for things like selection bias in treatments (e.g. the location of protected areas is non-random and this should be considered in impact evaluation). He is in the vanguard of a growing industry of people doing similar work, which increasingly fills the pages of the conservation science journals. To me, this research is fascinating and provides important insights that often challenge widely-held assumptions about things like the relationship between protected areas and poverty.
Watching Paul’s lectures, I was struck by the fact that although the methods he uses are completely different to those generally favoured by political ecologists, the answers provided by the two approaches can be remarkably similar. For example, Paul’s first lecture (or at least the part I saw before going home to the kids) focused on the weakness of assumptions that are made about why people will adopt and use technical interventions such as cook stoves or water-efficient shower heads. He argued that these interventions are designed in the lab or agricultural field station using an ‘engineering approach’, but rarely achieve the promised environmental benefits when rolled out because under real-world conditions people simply don’t behave as expected. This might be because people take longer showers when they feel less guilty about the water they use, or through any number of other mechanisms. In some cases, he showed that the interventions actually have precisely the opposite of the intended effect, for example where water-efficient irrigation technologies lead to increased water use as farmers increase their acreage and switch to thirstier crops.
There is a significant body of political ecology research (or anthropology, sociology, or whatever other label you wish to use) that is also interested in why technical interventions fail. To give one example, Paige West’s research with the Gimi people in Papua New Guinea shows how an Integrated Conservation and Development project that intended to incentivise conservation through economic value creation failed because it did not take account of the way in which the Gimi valued and understood the “forest to be part of a series of transactive dialectical relationships that work to produce identity and space”. For the Gimi the forest was the home of their ancestors and central to their worldview, and not something that could be commodified for market exchange.
Paul Ferraro’s methods are quantitative and experimental, whereas Paige West’s are qualitative and ethnographic. These approaches build from very different epistemological starting points and are often seen as incompatible. Yet despite these profound differences, both arrive at similar answers to the big question of why technically designed interventions fail: that is, they fail because they don’t take proper account of how people think and behave. I find this commonality striking and encouraging. Striking because arriving at similar answers with different methods provides some confidence in the strength of the conclusions, and encouraging because it suggests more scope for mutually valuable collaboration and exchange of ideas across disciplines that sometimes feel separated by an unbridgeable chasm.
This is very nice, but it still leaves an important question to think about. Is one methodological approach always ‘better’ than the other (one size fits all), or are there particular questions that are better tackled with particular methods, or combinations of methods (horses for courses)? If one size fits all, we might think that Paige West’s work on Gimi cosmovisions provides a useful starting point just waiting to be confirmed by large N behavioural science research, or that Paul Ferraro’s work on the uptake of shower-heads needs validation by ethnographic participant observation of people in the shower (which would make for an interesting ethics application!).
To me this makes no sense – in fact different methods work better for different questions and contexts – horses for courses. Large N studies based on randomised controlled trials or similar methods are fantastic for understanding uptake of a discrete technology by a large population of people who could be exposed to it in an experimental manner. However, there are many other kinds of question where a more qualitative or non-experimental approach is more suitable – for example for understanding the impact of big policy changes where randomisation or large sample sizes are impossible (there is only one global economy…), or the detailed workings of power within a particular institution. In many cases a combination of different approaches – mixed methods – is the way to go, providing that this does not end up as more than one method done badly.
One key question that I see as best tackled with the theory and methods of political ecology is why technically framed ‘interventions’ that are unlikely to work keep getting implemented in conservation in the first place. To answer this question we need to move beyond the assumption that those implementing the policy really do expect it to work and are just ignorant of the realities of human behaviour. Instead, we need to think about the political economy that surrounds decision making about which actions to pursue. What do the key stakeholders (donors, voters, self-interested politicians) want? What other objectives (national pride, votes in the next election, donor satisfaction) might those making decisions have in mind besides conservation outcomes? And so on.
Tania Li, a social anthropologist who has investigated such questions in Indonesia, has suggested that deeply political projects and policies are often ‘rendered technical’ (i.e. presented as nothing more than technical – Ferraro’s ‘engineering approach’) because doing so frames them as a matter for experts rather than a matter for public deliberation. This privileges the role of elite technicians and politicians and closes off the space for the engagement of non-experts. Insofar as this is a deliberate act, it can be seen as an example of what James Ferguson calls antipolitics – that is ‘the political act of doing away with politics’. From this perspective a conservation project that seems a failure in terms of the publicly stated objectives might be re-interpreted as a success once we have a better idea of what those who designed it really wanted or were responding to. These are important ideas that help to explain why even the very best evidence about the impacts of conservation interventions can fail to influence future decisions. I find it hard to believe that such insights could be arrived at through quantitative experimentation, but I’m open to hearing from anyone who disagrees.
My overall sense following Paul Ferraro’s visit is that the growing impact evaluation industry is getting very good at answering certain important questions in conservation. The answers should be taken seriously, and can be surprisingly similar to those found with the very different methods of political ecology and other cognate social science disciplines. However, the impact evaluation approach is less suited to answering questions about the social and political conditions from which particular ideas about ‘interventions’ and their implementation emerge; questions that are, in my view, as important as the impact work. I see great potential for interdisciplinary collaboration between those addressing these different questions, but it isn’t easy, for all the reasons that I and many others have written about. I hope that those in our different fields will be able to take up the challenge to work together. If not, I worry that as the quantitative impact evaluation movement gathers momentum it may crowd out important, alternative, and potentially complementary ways of thinking about where conservation actions come from and the impacts they have.
Time to plug the Social Assessment of Protected Areas project and our newly launched manual – http://pubs.iied.org/14659IIED.html?c=biodiv
Thanks for this. It is enjoyable yet frustrating to engage with large-scale quantitative analyses as a conservation social scientist, but doing so can produce some pretty productive results (we will have a PhD student starting shortly who will be combining quasi-experimental spatial analysis and political ecology approaches to understand the impacts of private protected areas, so I may change my opinion on such methodological cocktails in future!). It is also a good way of replying to the turn towards holding up randomised controlled trials as the gold standard in conservation and development.
I had some discussions recently with some sociologists who have developed realist methods for combining large-scale statistical analyses of health interventions with detailed qualitative research that explains the variations identified in the data. These studies have looked not at drug trials but at other interventions, such as whether a particular type of mattress reduces the number of patients suffering bedsores – not just by analysing the statistics of what happens, but by understanding how hospital staff and patients think and behave differently when using one type of mattress rather than another. I think there is real potential for using such methods to understand the impacts of conservation and development interventions.
People reading this post might also be interested in my recent article titled “Using perceptions as evidence to improve conservation and environmental management” in Conservation Biology. See here: http://onlinelibrary.wiley.com/doi/10.1111/cobi.12681/abstract
In a similar vein to the argument in this post, I suggest that “Evidence is any information that can be used to come to a conclusion and support a judgment or, in this case, to make decisions that will improve conservation policies, actions, and outcomes.” and “Better incorporation of evidence from across the social and natural sciences and integration of a plurality of methods into monitoring and evaluation will provide a more complete picture on which to base conservation decisions and environmental management.” In this paper, I specifically focus on how perceptions can be incorporated into evidence-based conservation and environmental decision making.
As both an evaluator and a conservation psychologist I am not at all surprised by the results mentioned in the article. What I believe is missing in many assessments of conservation “actions” or programs is a fundamental understanding of our human nature, what actually drives it (mostly unconsciously), and how heuristics and biases greatly control our decision-making. We are not the rational, logical creatures we think we are. For this reason the notion that “technology will fix things,” IMHO, will never work, because humans interpret data and information in ways that serve self-benefit first. It’s just how we are hardwired. Sure, there are ways around it, but we have to embrace and understand human drivers completely and design programs and interventions with these as the core components.
Thanks, Chris, for a thoughtful and constructive post, acknowledging the very important contributions made by economists like Paul Ferraro, but also recognising the social and political context for impact evaluation in conservation. Although I’m on the quantitative side, and a colleague of Paul’s, I agree fully that we need a more integrated approach. This might begin with a qualitative theory of change for why certain kinds of conservation decisions are made by governments or NGOs. That would acknowledge your point that maximum conservation impact is frequently not the intended result. More often the intention is positioning electorally or within the national or global policy arena. But such an analysis might also shed light on ways in which the actual, and perhaps unstated, intentions could be managed to improve impact, as well as pointing conservation scientists to where they might make a difference. I also agree fully that social sciences and anthropology have a lot to offer, both in designing conservation interventions to maximize social and ecological benefits and in understanding why interventions produced results that were, or were not, anticipated. This seems to be the beginning of a process to map a broader research agenda for evaluation of conservation impact. It might be worth doing that soon to avert the outcome that worries you, i.e. that quantitative techniques will dominate unduly.