Last week, OpenAI launched Sora, its new AI video generation app, which debuted with an opt-out policy: copyrighted characters could appear on the platform unless rightsholders asked for them to be kept off. But after the app filled with the likes of Nazi SpongeBob, criminal Pikachu, and philosophizing Rick and Morty, OpenAI CEO Sam Altman reversed course, moving to an opt-in model that lets copyright holders choose whether their characters appear at all.
Asked why OpenAI changed the policy, Altman said that conversations with stakeholders influenced the decision, and that he hadn't anticipated the backlash.
Altman noted, “The perception of what it was going to be like versus the reality surprised some. People found it more distinct from images than they had expected.”
Sora works like TikTok: an endless feed, plus tools to generate 10-second videos complete with audio. Users can create clips of just about anything, including AI likenesses, or "cameos," of themselves and of anyone who has granted them permission. The app tries to block depictions of people who aren't on the platform, but text prompts proved more than capable of conjuring copyrighted characters.
Discussing the response, Altman said that while many rightsholders are enthusiastic, they want "more controls." He added that Sora "became very popular very rapidly. We expected to slow its growth, but that didn't occur."
“We deeply care about rightsholders’ and individuals’ needs,” Altman emphasized. “Our plan is to implement these extra controls, and you can expect to see numerous major content pieces accessible, albeit with restrictions on their usage.”
Among the app's early adopters, Altman said, he was surprised that people had "in-between" feelings about letting others make AI-generated videos of their likenesses on Sora. He expected users would either make their cameo public or keep it private, not that there would be so much nuance, which is why the company recently introduced more granular restrictions. Many people have changed their minds about making their cameos public, Altman said, but "they don't want their cameo to say offensive things or things that they find deeply problematic."
Bill Peebles, OpenAI’s head of Sora, posted on X Sunday that the team had “heard from lots of folks who want to make their cameos available to everyone but retain control over how they’re used,” adding that now users can specify how their cameo is used via text instructions to Sora, such as “don’t put me in videos that involve political commentary” or “don’t let me say this word.”
Rightsholders want ‘a lot more controls’ on Sora
Peebles also said the team is working to make the Sora watermark on downloaded videos "clearer and more visible." Many people have raised concerns about the misinformation crisis that hyperrealistic AI-generated videos could fuel, especially since the watermark denoting them as AI-generated isn't very large and, according to video tutorials proliferating online, can be removed easily.
“I also know people are already finding ways to remove it,” Altman said of the watermark during the Q&A Monday.
During his DevDay keynote, Altman said the company was immediately releasing a preview of Sora 2 in OpenAI's API, giving developers access to the same model that powers the app so they can create ultra-realistic AI-generated videos for their own purposes, ostensibly without any sort of watermark. When reporters asked during the Q&A how the company would implement safeguards for Sora 2 in the API, Altman did not directly answer the question.
Altman said he was surprised by the amount of demand for generating videos solely for group chats — i.e., for sharing with just one other person or a handful of people, but not more widely than that. Although that’s been popular, he said, “it’s not a great fit for how the current app works.”
He positioned the launch’s speed bumps as learning opportunities. “Not for much longer will we have the only good video model out there, and there’s going to be a ton of videos with none of our safeguards, and that’s fine, that’s the way the world works,” Altman said, adding, “We can use this window to get society to really understand, ‘Hey, the playing field changed, we can generate almost indistinguishable video in some cases now, and you’ve got to be ready for that.’”
Altman said people tend not to pay attention to OpenAI's technology when the company talks about it, only once it's released. "We've got to have … this sort of technological and societal co-evolution," Altman said. "I believe that works, and I actually don't know anything else that works. There are clearly going to be challenges for society contending with this quality, and what will get much better, with the video generation. But the only way that we know of to help mitigate it is to get the world to experience it and figure out how that's going to go."
“There are clearly going to be challenges for society contending with this quality”
It’s a controversial take, especially for an AI CEO. For as long as AI has been around, it’s been used in ways that disproportionately affect minorities and vulnerable populations — ranging from wrongful arrests to AI-generated revenge porn. OpenAI has some guardrails for Sora in place, but if history — and the last week — is any guide, people will find ways to get around them. Watermark removers are already proliferating online, with some people using “magic eraser”-type tools and others coding their own ways to remove the watermark convincingly. For now, text prompts don’t allow for generating a specific face without permission, but people have allegedly already gotten around that rule to generate close-enough approximations of someone to instill fear or make threats, and to make suggestive videos, including videos of women holding dildo-like objects.
When asked if OpenAI's plans amount to a "move fast and break things" approach, Altman said, "Not at all," adding that user criticism of Sora right now mostly accuses the company of being "way too restrictive" and engaging in "censorship." He said the company is starting the rollout conservatively and "will find ways to allow more over time."
Altman said turning a profit on Sora was “not in my top 10 concerns but … obviously someday we have to be very profitable, and we’re confident and patient that we will get there.” He said right now the company is in a phase of “investing aggressively.”
Whatever the initial difficulties, OpenAI president Greg Brockman said he was struck by Sora's adoption curve, which he said was even more intense than ChatGPT's. The app has stayed consistently at the top of Apple's App Store chart for free apps. "I think this points a little bit to the future — this thing we keep coming back to: We're going to need more compute," he said. "To some extent, that's the number-one lesson of the [Sora] launch so far."
It was essentially a pitch for Stargate, OpenAI’s joint venture with SoftBank and Oracle to bolster AI infrastructure in the US, starting with a $100 billion investment and adding up to $500 billion over four years. President Donald Trump has championed the venture, and OpenAI has announced a handful of new data center sites in Texas, Ohio, and New Mexico. The energy-hungry projects have been controversial, and they’re often able to run with staff of a couple hundred people after initial construction, despite promises of large-scale job creation.
But OpenAI is full speed ahead. On Monday, it struck a deal with chipmaker AMD that could allow OpenAI to take a 10 percent stake. Hours later, in the Q&A with reporters, Altman was asked how interested the company is in building its own chip. “We are interested in the full stack of AI infrastructure,” he replied. At another point, he told reporters they should “expect to hear a lot more” from OpenAI on the infrastructure stack.
During the session, OpenAI executives repeatedly emphasized the scarcity of compute and how much it can keep OpenAI and its competitors from offering services at scale.
“Asking, ‘How much compute do you want?’ is a little bit like asking, ‘How much of the workforce do you want?’” Brockman said. “The answer is you can always get more out of more.” And right now, more capacity for deepfaking your friends is the latest selling point.