Over the last few years the conservation movement has been enthusiastically deploying new surveillance technologies that make it possible to monitor and protect the natural world in ways once unimaginable. There are camera traps that can send live images of warthogs, lions and blurry things with legs direct to your desktop. There are unmanned aerial vehicles (or drones) that buzz overhead, filming orang-utan nests or measuring forest loss. There are tiny tracking devices that can be fitted to wild animals, allowing them to be followed from space as they wander around the Kenyan plains or fly across the ocean. And there are computer programs that can predict the behaviour of poachers and send drones out to intercept them.
All of this is very exciting for conservationists. New gizmos promise better and cheaper data that can be used to monitor populations and understand threats, and new ways to tackle those threats. What’s more, some of the outputs of the technology are visually appealing and easily communicated to the general public through websites and smartphone apps, meaning that technologies can also be used to raise funds and promote public awareness.
Clearly new technology has a lot of potential for conservation, and websites like the newly launched WildLabs.net showcase the range of applications that are being developed. But is there another side to this story? Are there any potential risks or dangers lurking in the shadows as conservation rushes to deploy the latest gadgets?
In my view, the answer to this question is yes. Conservation technologies come with a range of risks that are connected to how they relate to people – what you might call the ‘social life’ of these technologies. These problems can be thought of in two broad categories: harm done to people, and harm done to conservation effectiveness.
An important concern in terms of harm to people is privacy. When might surveillance technologies used by conservation be thought of as invading the privacy of people who might be observed, whether deliberately or otherwise? Should people be given advance warning that they might be filmed, photographed or listened to? Is it OK to monitor everybody, even when there is no reason to suspect they are doing anything illegal? What might be done with the data collected by these technologies, and who should have access to it? Should we be concerned about the right to privacy of non-human species? All of these questions are asked on a regular basis when thinking about the ethics of social research projects, but it seems that very little attention has been given to them so far in the conservation context. Given the negative publicity associated with drones and mass monitoring by states around the world, it seems likely that many people living and working in areas where surveillance technology is deployed will not be happy about it. Indeed, in several cases drones used for conservation have already been shot down, and camera traps deliberately vandalised or stolen.
This brings us on to the second problem, which is how conservation technology might undermine the very objectives it is intended to support. It is a well-established orthodoxy in conservation that projects work better when local stakeholders are supportive of conservation goals, and many conservation organisations invest a great deal of energy and resources into stakeholder consultations and education campaigns to garner such support. If the use of technology is seen as dangerous, threatening or an invasion of privacy by local people, important relationships with local communities may be damaged, with negative consequences for conservation. In areas that have a history of conflict between local people and conservation (for example due to evictions or the loss of livelihood opportunities), unwanted surveillance technologies might rekindle old enmities and return conservation to the unfortunate ‘fortress’ reputation that many have tried hard to shake off. This would be a backwards step and could reduce rather than increase the chances of conservation success.
If I am right and surveillance technologies used by conservation have the potential to do harm to people and in some cases undermine conservation goals, what should be done about it? Certainly I would not advocate getting rid of such technology altogether – it really does have great potential. Instead, I believe there is a need to do three things: recognise the problem, carry out research into how the social impacts of conservation technologies play out in practice, and develop some careful processes for regulating their use.
The first step is to raise awareness that new technologies are not socially and politically neutral. It is quite worrying that even very recent articles reviewing the uses of new technologies for conservation give ample room to discussing technical challenges but say nothing about social issues (other than the risk of data falling into the hands of criminals). For example, Stuart Pimm and colleagues are excited about the potential of drone ‘swarms’ to maximise surveillance. They see challenges in the costs and maintenance of these devices, but say nothing about how they may affect those on the ground. Overcoming this blind spot is an important priority, and I hope my writing can make some small contribution to that process.
Research into the social impacts of surveillance technologies could investigate how local people perceive different kinds of technology (e.g. are rotary-wing drones more frightening than fixed-wing drones?), how perceptions are influenced by awareness-raising campaigns, and whether there are conditions under which the use of surveillance technology is more likely to generate a hostile reaction (e.g. in places with a history of conflict?). In some cases experimental research designs could be implemented, deploying different combinations of technologies and mitigation measures to find out how their social effects differ. In other cases, detailed case studies of sites where surveillance tools have already been deployed could be carried out. Together, these research approaches would provide a wealth of valuable information to inform the future use of technologies in conservation.
In terms of regulation, legal frameworks governing the use of surveillance technologies vary wildly between countries. In some places particular technologies are banned outright (such as drones in US national parks), whereas in others there are no relevant laws at all. It seems unlikely that the legal environment will become harmonised any time soon, so instead I believe the conservation industry should get together to develop its own best-practice guidelines to minimise the negative social impacts of technology. These could build on existing frameworks such as the Conservation Initiative on Human Rights, an agreement signed by many leading international NGOs and designed to ensure that conservation organisations respect human rights.
New surveillance technologies are an exciting breakthrough for conservation. But in the rush to deploy new gadgets in the field, it is important to remember that they will have social and political as well as environmental effects. Failing to take these issues into account could lead to serious unintended consequences for conservation. I hope that this article can play a part in stimulating a much-needed debate on these issues and how they can be tackled.
An extended version of the argument in this blog can be found in a newly published Open Access article in the journal Ambio. A near-identical version of this post appears on the newly launched conservation technology network WildLabs.net.
One thing that hasn’t been discussed is the research ethics implications of camera traps and drones. Whilst universities have measures in place for any research intentionally involving human participants, such as interviews or surveys of people living around protected areas, there are few if any measures or protocols for dealing with the inadvertent capture of human data, such as when camera traps set up to photograph animals end up capturing images of people inside protected areas. Do the biologists who set up these camera traps or fly the drones have an idea of what they will do if they end up with images of people engaged in what are suspected to be illegal activities, with facial and other identifying features clearly visible? Have they fully considered the ethical and legal implications of either destroying these images or passing them on to the authorities? From my own experience both of researching illegal activities in conservation and of sitting on a research ethics board at my university, I would be surprised if researchers are thinking about this in any detail. If I plan to research illegal activities in a protected area using interviews or surveys, I have to go through a rigorous internal ethical review, but not if I am using camera traps to capture animal data and end up capturing human data.
As an aside, I find it telling that biodiversity journals have strict ethical protocols on sampling animals in the research that they publish, but not on human data collection.
Thanks for this George. I try to make a similar point about research ethics in the full Ambio paper. Interestingly, I was recently sent a draft set of ethical guidelines created by Parks Canada for the use of human image data captured by remote cameras, so at least some conservation organisations are starting to think about this. Others seem to have ad hoc policies for blurring or deleting human images (ZSL’s Instant Wild camera trap project does this, for example), but I don’t think anything quite so structured as a full-blown ethical review process exists. Perhaps another thing to add to the wish list for the future!
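For readers curious what that kind of ad hoc blurring might look like in practice, here is a minimal sketch (purely illustrative, and not based on ZSL’s actual pipeline, which I have not seen) using OpenCV’s bundled face detector to blur any faces found in a camera trap image. The file names and detector settings are assumptions for the example only:

```python
# Illustrative sketch only: blur human faces detected in a camera trap image
# before it is stored or shared. A real policy would also need more reliable
# whole-person detection, secure deletion, and a named data controller.
import cv2

def blur_faces(input_path: str, output_path: str) -> int:
    """Blur detected faces in the image at input_path; return the number found."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(input_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Replace each detected face region with a heavily blurred copy.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(image[y:y + h, x:x + w], (51, 51), 0)
    cv2.imwrite(output_path, image)
    return len(faces)

# Hypothetical usage: anonymise a single image pulled from a camera trap.
blur_faces("camera_trap_photo.jpg", "camera_trap_photo_blurred.jpg")
```

Of course, automated blurring is only a technical stopgap; the harder questions about consent, signage and retention still need an ethical review process behind them.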
Hi – noticed this article as you linked to a project I’m involved with where we had camera traps stolen. I’m not sure the context of your reference to the theft we suffered was completely accurate – it was part of a community project approved by the local county council, involving local people in their green spaces and wildlife. The people who stole our cameras were vandals who were also setting benches on fire and committing other petty acts of vandalism. Indeed, the story was in the local papers and created a huge response, which was overwhelmingly one of disappointment. My point is that these weren’t people annoyed with surveillance – just a few juvenile delinquents spoiling it for everyone else.
Also, the Data Protection Act and the Information Commissioner’s Office regulate our camera trapping activities – it is in essence CCTV, so proper signage (i.e. people are warned) and protocols are in place for us to deal with any images of people. If they are committing a crime, then those images are passed on to the police, as you would expect. If not, they are deleted immediately by employees. We have an appointed Data Controller too. If we captured images of a person committing a crime without proper CCTV signage and then passed those images on to the police, we’d be prosecuted for breaking the Data Protection Act. So most of the concerns you outlined in the 4th paragraph are covered by British law as far as I can tell.
Hi James – many thanks for your comment and clarifications. I’m sorry if my citation of the vandalism example was slightly misleading and I’ll remove the link to your project from the article. I don’t think it invalidates the general point, but clearly it isn’t right to cite a supporting example that doesn’t fit. On the regulation point, yes you are right that the UK (and some other countries) have strong rules in place about surveillance technology. But there are many places where the rules are non-existent or not enforced, which leads me to argue that some self-regulation around universal principles would be a good idea for the conservation sector. With thanks again for taking the time to comment – Chris
Hi Chris,
Thank you, that’s appreciated. No, I don’t think it invalidates the point; I just wasn’t sure our case illustrated it properly.
I do agree that some self-regulation is important for the conservation sector and found the article very interesting.
Thanks again,
James