Over the past few years in academia, as well as with technology and journalism organizations, I’ve conducted research examining how at-risk groups manage their privacy and security online. These groups include U.S. journalists, activists in international NGOs, and the civil society groups that support them. And while there’s room to grow, U.S. journalists benefit from strong legal protections compared to much of the world. Wherever tightly controlled state media is the norm, independent journalism is essentially activism.
Media activists effect political change by funneling news to citizens outside of state-controlled media channels. They share their messages through independent news sites, blogs, social media, and word of mouth. If they’re doing a good job, they’ve probably pissed people off — in particular, the government in the region where they operate. Depending on the government, reprisal against media activists can be severe. In some places, people are passively monitored, or harassed. In others, media activists have been detained indefinitely, tortured, or worse. The same threats loom over their friends, family, and colleagues.
If you’re a researcher, speaking with media activists and other groups targeted for surveillance will require a great deal of care.
Researchers need to take care to minimize harm to participants. In some situations, we even have opportunities to actively help them work more safely.
Here are some things you want to know before diving into your research.
Recruit for safety
Because media activists are often the targets of surveillance, in some circumstances, it can be dangerous to contact them directly. It’s smart to know who the activists’ adversaries are, and what those adversaries are capable of, before contacting activists for your research.
At the same time, how do you learn about these communities and their concerns without talking to them?
The first thing to do is talk to civil society groups that understand the privacy and security threats faced by the communities you’re interested in (e.g., the Freedom of the Press Foundation, Access Now, Committee to Protect Journalists, Electronic Frontier Foundation, among others).
These groups conduct security trainings, create networking events, and sometimes provide tailored legal help. They work hard to understand threats among communities they assist. Civil society groups are also typically enthusiastic about calling attention to these issues, and can therefore be invaluable to understanding threats to independent journalists and media activists.
By talking to civil society groups, you’ll also find revealing differences between “best practices” recommended in security trainings, and the reality on the ground. Start by asking what security practices media activists could do better.
After speaking with civil society groups, ask who else they think you should speak with. If you’ve developed enough trust with civil society contacts, these groups are also well-equipped to help you make connections with media activists in a safe and direct way.
Read, read, read
It may sound obvious, but read every report you can about the threats faced by your community of interest. For people in academia, be prepared to venture beyond traditional articles. Look into policy papers, reports from advocacy organizations, and whitepapers on technical vulnerabilities and exploits. You can also get some background by reading existing security resources and guides.
The privacy and security concerns of media activists change with the political climates in which they work, but everyone relies on the same file formats, operating systems, and Web infrastructure. As a consequence, technical attacks that work in one context work in many.
For example, CitizenLab documented how the same targeted malware is used in several countries to monitor activists, journalists, and dissidents. Likewise, the same tools are used by actors with entirely different goals: the same Remote Access Trojans used by teens who want to peek at girls’ webcams are used by online thieves looking for bank credentials and by governments to monitor dissidents. The attacks are similar, but the outcomes are very different.
Get creative with data collection and storage
Some of the groups involved in this space are wary of proprietary software and public telecommunications channels, which service providers can often read or decrypt. This will require you to get creative about speaking securely.
Being a sponge for related literature will help you understand how others have overcome security hurdles. For example, Internews conducted a study of perceptions of online security among journalists and bloggers in Pakistan. Unlike most surveys, however, to help defend participants from surveillance, the researchers gathered most of their responses by asking questions in face-to-face “survey interviews.”
I’ve personally invested a lot of time into understanding self-hosted survey tools (e.g., SandForms, LimeSurvey). Why? If I don’t have to give data to a third party, I control it. If done right (and this is a big *if*), I can better protect my participants’ data.
But this is challenging because the open source tools available aren’t always well documented. They don’t always play nice with anonymity tools that can protect participants, such as Tor Browser. And since they typically don’t have well-paid engineering teams behind them, they aren’t the most powerful tools out there.
So we need to ask: is the company hosting your data part of your threat model? Is it part of your participants’ threat models?
Likewise, if you’re working with sensitive interviews, consider transcribing them and deleting the recordings. Unless you’ve got a transcription service you trust, this probably means transcribing interviews by hand — a painful process that takes 3-4 hours for every hour of interview recordings. It may also be worthwhile to use pseudonyms in your transcriptions. If you do, you will need to create a system to keep track of the relationships between pseudonyms and real names.
Once upon a time, I used to memorize my participants’ codenames. But as I did more and more interviews, this strategy didn’t work. I needed to come up with a system to document these codenames, and that meant finding ways to store my documentation securely.
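To make that concrete, a codename table can be as simple as a small script that assigns and looks up pseudonyms. Here’s a minimal sketch in Python; the word list, file layout, and function name are my own illustration, not a prescribed tool. Remember that the mapping file is exactly as sensitive as the real names, so keep it on an encrypted volume, stored separately from the transcripts that use the codenames.

```python
import json
import secrets
from pathlib import Path

# Illustrative word list for generating memorable codenames; any list works.
WORDS = ["amber", "birch", "cedar", "delta", "ember", "fjord", "garnet", "heron"]

def assign_codename(real_name: str, table_path: Path) -> str:
    """Return the existing codename for a participant, or create a new one.

    The JSON mapping file is sensitive: store it on an encrypted volume,
    separate from the transcripts that reference the codenames.
    """
    table = json.loads(table_path.read_text()) if table_path.exists() else {}
    if real_name in table:
        return table[real_name]
    # Two random words plus a number keeps codenames distinct and memorable.
    codename = f"{secrets.choice(WORDS)}-{secrets.choice(WORDS)}-{secrets.randbelow(100)}"
    table[real_name] = codename
    table_path.write_text(json.dumps(table))
    return codename
```

The point isn’t the specific code; it’s that the lookup lives in one small, encryptable file you control, rather than scattered through your notes or your memory.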
Companies that host your data may be compelled to share user data in response to legal requests in their jurisdiction. That can be a problem if you’re talking to someone who may be the target of direct surveillance now or in the future. Keeping local backups on a secured device (e.g., with an encrypted external hard disk) is a solid option for protecting your data. If you really need to back up your data using someone else’s server, consider a zero-knowledge service that cannot decrypt your data (e.g., SpiderOak). Otherwise, consider encrypting the data before backing it up (e.g., with GnuPG).
Meet where your interviewees are
Media activists, as well as the civil society communities that support them, are often trained to consider risk when speaking over different channels. Be attuned to their concerns.
It’s possible that they’re not very concerned about the sensitivity of your interview, or the fact that you spoke. It’s also possible that they’re speaking from a position of relative safety, and they would be fine with a regular phone call. But if they’re concerned about the sensitivity of your interview, you’ll need to be familiar with many types of secure communications channels.
- Secure messaging, as well as voice and video calls (e.g., Signal)
- Encrypted video (e.g., Jitsi Meet)
- Encrypted email (e.g., with GnuPG)
- If possible, leave cyberspace and speak in meatspace.
Sometimes the content of a conversation can be sensitive, but sometimes it’s the fact that you’re having a conversation at all.
Encryption protects only the content of your communications; metadata (who spoke to whom, when, and for how long) is typically visible to anyone in the pathway of your communications. That might be your ISP (e.g., Comcast), your telecommunications provider (e.g., Sprint), or anyone else with access to the underlying infrastructure.
If metadata is a concern, consider using tools that help muddy your conversations’ metadata with Tor (e.g., Ricochet).
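The content/metadata distinction can be sketched in a few lines of Python. The fields below stand in for what network intermediaries typically observe; the class and field names are illustrative, not any real protocol.

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    """A toy model of an encrypted message in transit."""
    sender: str        # visible to carriers
    recipient: str     # visible to carriers
    timestamp: float   # visible to carriers
    size_bytes: int    # visible to carriers
    ciphertext: bytes  # opaque without the recipient's key

def observable_metadata(msg: Envelope) -> dict:
    """What an intermediary can log without breaking the encryption."""
    return {
        "sender": msg.sender,
        "recipient": msg.recipient,
        "timestamp": msg.timestamp,
        "size_bytes": msg.size_bytes,
    }
```

Even without reading a single message, a log of these envelopes reveals who talks to whom, how often, and when. That record is what Tor-based tools try to obscure.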
Meeting in person works too, but you need to organize the meeting somehow. For example, if you had a phone call to discuss your meeting place, you’ve already produced a metadata trail. This is sometimes called the first contact problem, and there’s no easy fix.
Many of the above tools are open source projects, but proprietary options sometimes make sense for the threat model. For example, I spoke to a technologist who told me how Facebook has been helpful for protecting dissidents in Syria. The Syrian government has poor diplomatic relations with the United States, so the U.S. is unlikely to compel Facebook to turn over related user data, and Facebook is unlikely to do so voluntarily. In other words, Facebook makes sense for these dissidents’ threat model.
The truth is, even in iffy situations, people use secure channels selectively because they have other concerns that can compete with security. For example, in previous research I found that many U.S. journalists still preferred regular old phone calls, text messages, and emails for routine work. Encrypted emails slow down their work, and they’re (always) approaching the next deadline. Unless they are working on highly sensitive stories, or with highly sensitive sources, they often prefer the fastest and easiest communication channels.
Be mindful of your participants’ other needs beyond security.
Don’t ask questions on an unsecured channel if you’re not prepared to hear the answer. In turn, many people are more guarded about what they share through unsecured channels. There are times when it may be ideal to speak over a secured channel, especially if you want more candid answers from a privacy-conscious interviewee.
Intervention versus observation
Sometimes we’ll hear about familiar privacy and security challenges, and we may even be in a position to assist media activists by sharing what we’ve learned in our research. Maybe we’ve encountered solutions to a security problem they’ve described, or we’ve thought of connections with people they should talk to.
With few exceptions (e.g., action research), scholars doing qualitative work are often trained not to intervene in problems we’ve identified in our research. We’re trained to listen, to observe, and to probe when we’re curious about something new.
The logic goes something like this: If you are intervening in the subject of research, you’re not able to observe it.
Yet in qualitative work, the researcher becomes the instrument for collecting and analyzing data. Researchers can’t observe or analyze people’s interview responses without using personal judgment, both when crafting the interview questions and when analyzing the answers. Finally, our histories and physical characteristics influence how participants respond to us. Interview responses are partly shaped by who we are. In other words, researchers are interacting with the world, not just observing it.
If you’re a researcher and you know something that can help media activists work more safely, do the right thing: share what you know, or make connections that can help protect them. However, as with all good interviews, avoid leading (prompts that strongly bias responses). It’s easy to derail interviews with poor timing, so if you have solid security resources to share, offer them at the end of an interview. It’s worth noting that this “post-observational” segment of the interview can also be rich with observation. Keep a notebook ready.
Operational security in interviews
One of the biggest challenges of researching privacy- and security-conscious communities is that they’ve often been trained not to say anything of substance. If they’re being cautious about operational security, they won’t tell researchers a juicy personal story about how they exfiltrated data across borders, or name specific tools they’ve used to stay off their government’s radar. Sharing that information may be risky, and the benefit of the trade-off is not necessarily apparent to them, especially if they don’t know you well as a researcher.
When I first got started conducting interviews with journalists, I made the mistake of quickly jumping into sensitive privacy and security questions. As I kept doing more interviews, it became apparent that many of my interviewees withheld a great deal of information. However, people will tell stories with sufficient distance — stories from long ago, or anecdotes about someone they know. That’s a good starting point.
To move beyond learning from stories “at a distance,” you may need to chat informally with your participants before ever recording an interview. Help participants feel more comfortable. Get to know them.
It’s also important to give the community opportunities to critically interrogate your research. Going to conferences, hanging out on Twitter, and participating in mailing lists can help to create opportunities to chat informally as well. In the end, it’s not about extracting information — it’s about building trust and learning alongside your community.
I’m always interested in meeting more people who are pursuing related research. If this work is exciting to you, let’s chat.
Updated September 13, 2019.