
Meta exec on election misinformation: 'We are getting better' at fighting it

The White House brought together representatives from tech giants including Meta (META), Amazon (AMZN), Microsoft (MSFT), and Alphabet (GOOGL) to discuss artificial intelligence risks. As part of the event, the companies agreed to voluntary safeguards to better manage the risks posed by AI.

Meta's President of Global Affairs Nick Clegg attended the meeting. After its conclusion, Clegg joined Yahoo Finance Live's Seana Smith and Allie Garfinkle to shed light on what was discussed.

One of the big concerns about AI is how it could be used to spread misinformation or disinformation in a way that could impact an election. Clegg says that Meta's systems, including the ones related to AI, will "tackle" malicious or nefarious content "regardless of where it's created, whether it's created by a human being or by a machine." Clegg adds that Meta has spent $16 billion over the last few years on defensive and protective systems and has "tens of thousands of people working on this around the clock."

When asked about the 2024 U.S. elections, Clegg points out that Meta has a large user base around the world and, as a result, is constantly having to prepare for elections, saying that "with each election we learn more, because it's a very adversarial space." "I do think we are getting better. We have constant practice in doing this," Clegg adds.



Key video moments:

00:00:05 Meta's playbook for the elections

00:01:46 How will users know that Meta has succeeded in mitigating misinformation?

Video Transcript

SEANA SMITH: Nick, when it comes to misinformation and AI, there's been lots of talk about how this could potentially influence the election and what the run-up could look like now that we have AI and the mass adoption of AI. When it comes to Meta's role in this and how you're approaching the 2024 election, because now you also have Threads, which has over 100 million users, what does your playbook look like this time around?

NICK CLEGG: I think the key thing I'd like to say about this is that the way we build our systems, not least our AI systems, means they will tackle bad and nefarious content, whether it's disinformation, whether it's foreign attempts to interfere in elections, whether it's hate speech and so on, regardless of where it's created, whether it's created by a human being or by a machine.

So whilst the volume may change, the fundamental architecture of our protections is something we've built up over the years, not least because of the experience of what happened back in 2016 and because of the immense advances in AI. We've spent $16 billion over the last few years on those kinds of defenses and protections. We have tens of thousands of people working on this around the clock.

Those AI systems will tackle misinformation and disinformation, where we can identify it, regardless of its source. And that's, I think, important to remember. As I say, the velocity and volume may change, but the fundamental power we have, because we are a platform that can control the distribution of that content, is that as long as we can identify it, we can restrict or even remove it.

ALLIE GARFINKLE: Nick, this all totally makes sense to me. But my question is how do we know it's going to work? Let's fast forward 18 months, let's say. The election is over. How do we know that Meta has succeeded in mitigating misinformation?

NICK CLEGG: Well, look, my crystal ball is no clearer than yours. I can't tell you exactly what's going to happen in the future, particularly not in something as unpredictable as elections. What I can reassure you about is this: remember, we are a global platform. Over 95% of our users are outside the United States. And we have elections every single month in many different parts of the world.

So it's not as if we only ever have to prepare ourselves for the US election. We have teams. We have highly sophisticated automated models, which are working to deal with elections nonstop, all year round, month in, month out. And with each election, we learn more because it's a very adversarial space.

So I do think we're getting better. We have constant practice in doing this. We have made massive investments in what I think is world-beating and certainly industry-leading technology to deal with misinformation, disinformation, and so on. We give users as much information as we possibly can.

And as I said earlier, it doesn't really matter from our point of view whether the material, if it's unwelcome and we want to block it or restrict it, is produced by a machine or by a human being; we will try and tackle it. But look, I can only tell you that we try our level best, and we do so on an ongoing basis. But exactly how that will play out, well, that's something we can obviously speak about after the elections themselves.