Contact tracing and preserving privacy: Mozilla exec on how to strike a balance
Tech companies are working to build digital contact tracing apps that will help track and prevent the spread of COVID-19. Mozilla Senior Director of Trust and Security Marshall Erwin joins Yahoo Finance’s On The Move to discuss.
Video Transcript
JULIE HYMAN: Contact tracing is one of the important tools that has been talked about to fight coronavirus and to try to limit its spread. It's been used to pretty good effect in places like South Korea.
And we want to talk about the complexity of doing that and also the security and privacy questions that are inherent in it. We're joined by Marshall Erwin. He is Mozilla's Senior Director of Trust and Security. He's joining us from Washington, DC.
So, Marshall, let's tackle the security question first here when it comes to contact tracing because it seems as though, particularly in Western societies like the United States, that is one of the highest-concern items related to this. Is there a way to do contact tracing and still preserve privacy?
MARSHALL ERWIN: Yeah, so one thing that you hear a lot of people talking about right now are contact-tracing apps. These are tools that you would download onto your phone, and they collect data over time. And then if you were to be infected, you can get in touch with someone who's doing contact tracing, and they can take some of the data on your phone and use it to aid in their work.
So there are actually some serious privacy and security implications of this. There are a number of different designs that have been proposed, and some of them pose more serious privacy and security risks than others. Those are sometimes referred to as centralized designs, which take your network of people that you've come in contact with and essentially give that information to the government. And then there are some more privacy-sensitive approaches called decentralized approaches, and what those do is collect data on your phone about the contacts that you have had, but they don't actually share that with the government. These are sometimes referred to as exposure-notification apps. What they will do is tell you if you've been in contact with someone who might have been infected, without this expansive data sharing with the government. That's a much more privacy-preserving approach to this problem.
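To make that distinction concrete, here is a minimal sketch, in Python, of the decentralized exposure-notification idea Erwin describes. The class, the token exchange, and the matching step are illustrative assumptions, not any specific app's actual protocol: phones swap rotating random identifiers, keep what they hear locally, and do the matching on the device.

```python
# Hedged sketch of a decentralized exposure-notification design.
# Assumption: phones broadcast rotating random tokens over short-range radio,
# store what they hear locally, and only an infected user's *own* tokens are
# ever published; the contact graph never leaves the device.

import secrets

class Phone:
    def __init__(self):
        self.my_tokens = []        # random IDs this phone has broadcast
        self.heard_tokens = set()  # IDs heard from nearby phones (stays local)

    def broadcast(self):
        token = secrets.token_hex(16)   # rotating random identifier
        self.my_tokens.append(token)
        return token

    def hear(self, token):
        self.heard_tokens.add(token)

    def check_exposure(self, published_infected_tokens):
        # Matching happens on the device: compare published tokens from
        # infected users against what this phone has heard.
        return bool(self.heard_tokens & set(published_infected_tokens))

# Two phones near each other exchange tokens.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())

# Alice later reports infection: only her own broadcast tokens are shared.
published = alice.my_tokens
print(bob.check_exposure(published))  # True -> Bob is notified locally
```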
DAN HOWLEY: Marshall, I just wanted to ask-- you know, we looked at Apple, Google. That's the kind of decentralized approach that you're speaking of. I know Germany and, I believe, France were looking at the centralized model. Germany ditched it because they just couldn't get Apple and Google on board.
I guess there are still some concerns though with Apple and Google's method. Can you kind of explain those and why people might be afraid to use them?
MARSHALL ERWIN: Yeah, so almost all of these designs share some amount of information with a central authority. It's just a question of what information and how sensitive it is. And so the more centralized approaches share a large amount of your contact information.
The more decentralized approaches, what they will do is they will share your infection status, but they don't share your contact information. And so Apple and Google, their design allows you to share maybe your infection status with the central authority without actually sharing that expansive contact network.
So the nice thing about that is that it doesn't share your expansive network with the government, and it actually doesn't share it with Apple or Google either. Most of your information stays on the device. It isn't actually collected by Apple and Google. So it's a nice property of that system that it attempts to preserve your privacy.
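As a rough illustration of that data-minimization point, the hypothetical payloads below contrast what a central server would receive under each design. The field names and values are assumptions for illustration, not Apple or Google's actual API.

```python
# Centralized design: the server learns who you met.
centralized_upload = {
    "user": "alice",
    "contacts": ["bob", "carol", "dave"],   # the whole contact network
}

# Decentralized design: the server learns only that some anonymous, rotating
# tokens belong to an infected user; phones do the matching locally.
decentralized_upload = {
    "infected_tokens": ["9f2c...", "41ab..."],  # the reporter's own token IDs
}
```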
Right now, we're living at a point in time where, you know, there isn't a lot of trust in big tech companies. So the fact that they've designed this in a way that avoids them having to collect the data is actually a useful thing.
You know, for these tools to be effective, people have to opt into them. A large portion of your population actually has to use these contact-tracing apps. The fact that people are so distrustful of big tech is going to make that harder, and so the approach that Apple and Google have taken, which avoids collecting the data, means that more people can trust that it's been designed properly and more people might opt in. So it's a nice property of their design.
RICK NEWMAN: Hey, Marshall, Rick Newman here. Isn't there something individuals can do to do their own contact tracing? I mean, you know-- I know where I've been better than anybody else, and I could go and tell people-- or at least there sounds like there could be some component of self-reporting to this.
MARSHALL ERWIN: Yeah, so that's fairly consistent with traditional contact tracing: you find out that you're infected, you get in touch with a health-care professional, a medical professional, and you can tell them exactly who you've been in contact with and help them do their job. So that's the traditional version of contact tracing.
What the more decentralized designs do is aid that. They let you know, oh, I've been in contact with someone who might have been infected, and I should take action on my own to self-quarantine and let my neighbors, my friends, my family know that I might have been in touch with someone who's been infected. So those decentralized models are much more along the lines of what you're talking about.
DAN HOWLEY: Marshall, when you look at kind of these kinds of technologies and the self-reporting aspect, I think that's one of the things that Google and Apple haven't really been too clear on. How will these companies make sure that if you do self-report you're not just lying and doing it for fun?
MARSHALL ERWIN: Yeah, that's a major shortcoming of all of these designs, whether it's the Apple and Google approach or some of the other more centralized ones: what we think of in the security community as an attacker, someone who's trying to do something malicious with the design to undermine its intent. There are a lot of different ways to falsely self-report or manipulate these processes that could pollute the data and make it harder for people to take appropriate action when they are alerted that they've been exposed.
One of the benefits of the more centralized approaches-- I think the one that you're alluding to-- is the fact that you can alert a health-care professional, and then the health-care professional can do their own diligence to know, is this person for real? Are they reporting a real exposure, or are they trying to be malicious or do something to undermine the system? So that is one of the benefits of the more centralized approach: it puts power in the hands of those health-care professionals to do their own diligence, their own investigative efforts, to confirm that when you're reporting to them, you're being truthful.
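One way that kind of diligence could be wired into an app flow, sketched below under stated assumptions, is for a health-care provider to confirm a diagnosis and hand the user a one-time code that must accompany the self-report. The function names and the single-use code mechanism here are hypothetical illustrations of the idea Erwin describes, not a description of any deployed system.

```python
# Hedged sketch: gate self-reports behind a provider-issued, single-use code
# so that the "malicious false report" attack discussed above is blocked.

import secrets

ISSUED_CODES = set()   # codes handed out by verified health-care providers

def provider_issue_code():
    # A provider confirms the diagnosis, then issues a one-time code.
    code = secrets.token_urlsafe(8)
    ISSUED_CODES.add(code)
    return code

def publish_infected_tokens(tokens):
    # Placeholder for publishing the reporter's own tokens for on-device matching.
    print(f"publishing {len(tokens)} tokens for on-device matching")

def accept_self_report(tokens, verification_code):
    # Reports without a valid provider-issued code are rejected.
    if verification_code not in ISSUED_CODES:
        return False
    ISSUED_CODES.remove(verification_code)  # single use
    publish_infected_tokens(tokens)
    return True

# Usage: a verified report succeeds; a made-up code does not.
code = provider_issue_code()
print(accept_self_report(["9f2c...", "41ab..."], code))      # True
print(accept_self_report(["deadbeef"], "not-a-real-code"))   # False
```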