AI could cause catastrophic rather than existential risk, scientist warns | The Crypto Mile

On this week's episode of Yahoo Finance's The Crypto Mile, our host Brian McGleenon sits down with cognitive scientist Gary Marcus to discuss the reality and risks of artificial intelligence (AI). While the term "p(doom)"—referring to the probability of a human extinction event due to AI—has been causing a stir online, Marcus brings a more nuanced perspective to the conversation. Debunking apocalyptic forecasts, he explains that AI tools, such as ChatGPT, are unlikely to trigger human extinction but may still pose catastrophic risks. He delves into the term "p(catastrophe)," highlighting the chance of incidents that could significantly impact the human population.

Video Transcript

BRIAN MCGLEENON: On this week's episode of Yahoo Finance's The Crypto Mile, we are joined by Gary Marcus, scientist, bestselling author, and influential voice on artificial intelligence. Today, we'll discuss how useful generative AI really is, and whether the rapid advancement of this technology is groundbreaking or just hype.

Finally, we'll ask Gary about the existential risks that have been attributed to the rapid advancement of AI, developments that some commentators suggest could lead to p(doom), the probability of an extinction-level event caused by this technology.

[AUDIO LOGO]

Gary, welcome to this week's episode of Yahoo Finance's The Crypto Mile.

GARY MARCUS: Thanks for having me.

BRIAN MCGLEENON: Now, is generative AI living up to the hype?

GARY MARCUS: Yes and no. So there's a lot of hype there, and it does some genuinely useful things. Is it living up to all that hype? Not entirely. The best thing I think it does right now is help computer programmers type faster. It kind of looks things up for them, it structures the code a little bit. It's not perfect, it makes mistakes, but coders are used to fixing errors. They call that debugging. You don't become a coder if you can't do it. And so it's a pretty good use case.

Whereas if you type in and you want medical advice from it, it's going to tell you stuff and it might make stuff up. If you ask for travel advice, it might send you to a place that doesn't even exist. It'll be perfectly grammatical, it'll be fun to play with, but it's not really all there yet.

BRIAN MCGLEENON: Well, you know, I was looking at some online data, and since May, visits to the ChatGPT website have declined by 29%. So do you think this signals something about the efficacy of generative AI in general?

GARY MARCUS: Well, some of it is a kind of novelty, kind of fad thing. Like, you know, you're probably not quite old enough to remember Pet Rocks, and some of your audience will and some won't. But you know, Pet Rocks were super popular for a while, and Furbies and Tamagotchis, and millions of things become popular. People play with them, they're exciting, and then they're like, yeah, OK, I get it.

I think that in the initial period, everybody wanted to try ChatGPT and everybody wanted to be funny. Hey, look, I wrote this thing with ChatGPT. But it's not really funny anymore. Hey, I wrote this thing with ChatGPT. Now you want to know, does it really work for you? And so a lot of banks and Apple and whatever have tried it internally, and they're like, there are problems with data leakage where customer data comes out, there are problems with what we call hallucinations. Maybe we should have called them confabulations, where it just makes stuff up.

And so you know, it's really a case-by-case basis. Does this really work for me? If you are writing fiction and you have writer's block and you want some weird idea, maybe it'll give it to you. It can be useful for brainstorming. But not everybody is going to do that. The other thing is, some of the drop is probably college students and high school students going away for the summer, and when they come back, they may switch to one of the open-source alternatives that's free.

And so it's not totally clear what the long-term picture is. OpenAI is projected to make a billion bucks this year, but with that projection we don't have the dates. Did it come before the peak or after the peak? We don't know. And some of it was that a lot of businesses felt like, we have to try this out. Now some of those businesses are saying, well, this looks like a five-year project to get it to actually work so that we can trust it.

BRIAN MCGLEENON: So do you think that initial enthusiasm was really like built on the hype?

GARY MARCUS: Well, I think there was an insane amount of hype in the beginning, and there were a few of us like me saying, hold on, slow down, there are problems. These things make stuff up. They're not reliable. There are data leaks. You can't really trust them. But we got 2% of the voice. 98% of the voice was like, oh my god, I can't believe this is so amazing.

And the reality is somewhere in between. It can be amazing depending on what you do and it can be pretty lousy and not trustworthy depending on what you do. So we've had like multiple incidents where media outlets tried to use it to write their stories and they always wind up writing stories that have lies in them. And if you're the press, that's really not a good thing.

BRIAN MCGLEENON: If generative AI is relatively ineffectual for like a multitude of tasks, why is there the need for an AI moratorium? Why do we have this sort of sense that we could be building a misaligned, potentially deceptive, AI that could harm humanity?

GARY MARCUS: When some people called for a moratorium-- and I signed a letter that called for it-- the only thing they were calling for was a moratorium on one thing, GPT-5, which we know is going to be unreliable and problematic, and so forth. Nobody said let's stop researching AI altogether. So those of us who signed the letter said, let's try to make AI that's more trustworthy and reliable and doesn't cause problems. We didn't say don't make AI altogether. Like, no serious person is saying that.

And certainly that letter that many thousands of people signed was not calling for an outright moratorium. So that's the first thing to understand. We do need more research into making AI that we can trust, that doesn't make stuff up. Then the larger issue that you're asking about, some people joke about it, they call it p(doom). What is the probability that this will kill us all? I don't think these machines are literally going to extinguish the human species. There are actually people who have made that argument, even argued that it's inevitable that they will because they'll be smarter than us, they'll want to kill us.

I think that that's not a good argument. Machines may eventually be smarter than us; really, there are many dimensions to intelligence. That doesn't mean they're going to be motivated to want to kill us and so forth. But there are a lot of risky applications. So bad actors are already using this stuff. They're already making deepfakes. They're going to try to manipulate the market. That might lead to an accidental war, even an accidental nuclear war. So there are lots of things to be worried about. Literally annihilating the species? Not very likely.

It's really hard. We're a very persistent species. We're probably not going anywhere. But we could have what I would call catastrophic risk rather than existential risk. So existential is like we just disappear. I don't think that will happen but there could be catastrophes that come out of AI. And the fundamental problem is that the tools we're using right now are called black boxes, which means we don't understand what goes on inside them. It makes it hard to debug them. And we can't make guarantees around them.

So airplanes now, for certain regimes of travel and so forth, have guarantees, formal guarantees that the software will do a particular thing. With the black boxes that are popular right now, these large language models, there are no formal guarantees of anything. In fact, one week you might ask, is 7 a prime number? And it gives you the right answer. And the next week, it gives you a wrong answer.

Or maybe 7 is OK because there's enough data for it, but maybe 56,317, maybe it'll get it right and maybe it won't. And it might get it right on Tuesday and not Thursday, and so forth. So there's inherent instability, or unreliability, I should say, in these systems that really makes them hard to engineer around. And safety is one of the things you should always engineer around. We don't really know how to do that yet.
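[Editor's note: the following minimal Python sketch is illustrative and not from the interview. It shows the kind of deterministic check a conventional program provides, where the same input always yields the same, provably correct output, which is exactly the formal guarantee Marcus says black-box language models lack.]

    import math

    def is_prime(n: int) -> bool:
        # Deterministic trial division: same input, same answer, every run,
        # unlike a stochastic language model that may answer differently
        # from one day to the next.
        if n < 2:
            return False
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return False
        return True

    print(is_prime(7))      # True, on Tuesday and on Thursday
    print(is_prime(56317))  # False (56,317 = 199 x 283), every time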

BRIAN MCGLEENON: So that kind of randomness, like a word generator, predictive text on a mass scale, that's one approach to creating artificial intelligence. But is there another approach where you don't use this sort of randomized predictive word stuff, and you go way, way back to logical modeling and things like that?

GARY MARCUS: There is, in fact, another long-standing tradition in AI called symbol manipulation. It goes back to the earliest days, in the 1950s. And many of us think that there's still value in that. And actually, one of the greatest proponents of it, Doug Lenat, just died a few days ago. He built a project over 40 years that tried to build common sense into AI in explicit, verbal, interpretable form. And I don't think his project entirely succeeded, but I think it hinted in the right direction.

And ultimately, we're probably going to need a reconciliation between the statistical predictive-text approach that's very popular right now but very flawed, and the symbolic approach, which has problems of its own. It can get unwieldy, and nobody's ever built it at the same scale. So we're missing some, I think, fundamental ideas about how to bring together those two worlds. Eventually, we'll get there. And I think, eventually, AI will pay off its promise. We will have it solve medical problems, like Alzheimer's and depression, that we humans have not been able to figure out. But right now, the AI we have isn't really ready to do that.

BRIAN MCGLEENON: Do you think it's sad when people say things like, AI's written this song, or it's written this script for a comedy and it made me laugh? Is there a little bit in you that feels a little bit sad that that's kind of a little erosion of what it means to be human?

GARY MARCUS: I mean, I could see that. Right now, I don't think you're going to actually get a good comedy out of one of these systems. You can get short little poems that are kind of doggerel that are amusing and stuff like that, but I haven't yet seen a great work of art come out of one of these machines. And I don't think that that's going to happen anytime soon. It might happen eventually.

BRIAN MCGLEENON: Well, Gary Marcus, thank you very much for coming on this week's episode of Yahoo Finance's The Crypto Mile.

GARY MARCUS: Thanks a lot for having me.

[THEME MUSIC]