
OpenAI's new 'Sora' creates realistic videos from text

OpenAI has unveiled a new AI text-to-video generator called "Sora." The software creates original videos in seconds from text prompts entered by the user. Sora represents a major advancement in AI's creative capabilities: by simply typing a description, people can now produce dynamic video content.

Yahoo Finance's Madison Mills and Rachelle Akuffo break down the details.

For more expert insight and the latest market action, click here to watch this full episode of Yahoo Finance Live.

Editor's note: This article was written by Angel Smith

Video Transcript

RACHELLE AKUFFO: AI-generated movies could be right around the corner. OpenAI unveiling its new text-to-video model, dubbed Sora. Now it can generate videos up to a minute long based on what a user prompts. And I mean, what we're looking at right now, this was barely even four sentences of a prompt to create this. You get multi-shot angles, high definition, you can put in if you want it to be drone shots.


If you want it to be slow motion, you basically put in the background, the location, everything. I mean, some of it's visually--

[INTERPOSING VOICES]

MADISON MILLS: It's beautiful.

RACHELLE AKUFFO: It's stunning.

MADISON MILLS: It looks like-- oh, that is cute. That reminds me of my dog. I don't know if it reminds you of your dog. It's so cute and small. But it looks incredible, and for people like us, it obviously leads to a question about what's going to happen to content creators in the industry. I feel like in our morning meeting, we were all a little bit stressed, but also there's the other part of it where maybe it makes our jobs easier.

Maybe we don't have to spend hours and hours shooting B-roll, and we can just type in "traders" or "New York Stock Exchange." Somehow make that beautiful.

RACHELLE AKUFFO: I mean, it's true. I'm looking at some of the feedback here. Tim Brooks was a research scientist on the project. He said it learns about 3D geometry and consistency, but he said they didn't actually bake that in. It just entirely emerged from seeing a lot of data.

So it really goes to show just how much this can evolve, even some of these unexpected outcomes. I mean, if you look at some of-- I mean, some of these visuals here, you also have to wonder, though, when you see images of people, about intellectual property. What does that mean for that? Since this isn't an original generation here, it's based on source data, something that we continue to see here.

And also, if you're a makeup artist, an actor, if this really ties in, especially on the heels of a newly negotiated deal for the Screen Actors Guild and for the actors and artists in the industry. What does-- I don't know if this is a level that they could have anticipated when they were trying to future-proof their jobs here.

MADISON MILLS: It's such a great point, and we should also mention that OpenAI did not release any of the details about what went into this, about all the ingredients that went into the secret sauce here. They're keeping that private. And so not knowing the extent of those capabilities may allow us to have a little bit of solace that maybe there's a lot more time before something like that is super usable for folks in our own newsrooms, around video and content creation more broadly.

Maybe we've got more time before we see super disruption. But I guess if it puts makeup on our faces the way a TikTok filter does, I'm not super opposed to that. I don't know how you feel about this, Rachelle.

RACHELLE AKUFFO: It's one of those things, if you suddenly move to the left, like I know I like to move a lot, and your makeup is still sitting like five inches to the right.

MADISON MILLS: Suddenly off the face.

RACHELLE AKUFFO: Nobody wants to see the before and after on that. It is worth noting, though, they do have a team who is testing it for harms and risks at the moment. Because we keep having this issue of this technology moving faster than regulators can keep up with, and some of the harms that we're still not aware of, as we mentioned here from that research scientist. You can't plan for what you can't plan for.

So really, at least they're being thoughtful about how this is being deployed. But this text-to-video, I mean, it's pretty incredible when you look at just the ultra 4K detail that you're seeing on this.

MADISON MILLS: You bring up a great point, though, which is some of the risks, and the CEO did mention that they have red teaming coming up for Sora. That's what they call the team of specialists that test their AI models and make sure that they're not open to big cybersecurity threats. They basically act like bad actors, trying to make something bad happen with the tools in order to protect against that in the future.

Saying that-- a spokesperson from OpenAI saying that they're going to assess critical areas for harms or risks. So interesting to see, as we continue to hear about cyber threats across all industries, we were talking about that with DraftKings earlier. How is that going to play into this, particularly when, to your point earlier, there are so many questions about the use of people's identities in this material moving forward?

What is that going to look like and how are they going to pressure test that against any cyber hacks moving forward?

RACHELLE AKUFFO: Indeed, still a lot of questions. I mean, at least, I guess, if you're a creator, you've got some options there in terms of a quick start here. But a lot of questions indeed.