An anime depiction of Jesus Christ overturning tables. OpenAI staff clad in Hamilton attire. Reporters narrating a story on TV. A man performing a trendy TikTok dance. Sam Altman — caught on CCTV taking GPUs, evaluating a business proposition, shedding tears.
These were some of the scenes filling my feed on Sora, OpenAI’s new social media app built around AI-generated video. Launched for iOS users on Tuesday, the app lets you create 10-second videos of almost any scene imaginable, including “cameos” featuring an AI-generated version of yourself and of anyone else who permits their likeness to be used. During a recent media briefing, OpenAI insiders described Sora as a potential ChatGPT-style breakthrough moment for video creation. By Friday, Sora had risen to the top of Apple’s App Store chart for free apps.
The reception has been mixed. Viral posts have pointed out the gap between the company’s lofty scientific ambitions and this latest product, prompting Altman to respond to the criticism directly. There are widespread concerns that hyper-realistic videos of real people will fuel misinformation. Critics have labeled it an AI content factory.
OpenAI staff members have also voiced their apprehensions. John Hallman, who works on pre-training at OpenAI, admitted in a post, “I won’t deny that I felt some concern when I first learned we were releasing Sora 2. That said, I think the team did the absolute best job they could in designing a positive experience.” Boaz Barak, a member of OpenAI’s technical staff, said he felt a “mix of worry and excitement,” writing, “Sora 2 is technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.” He was satisfied with some of the safety measures but acknowledged, “But as always, there is a limit to how much we can know before a product is used in the real world.”
Despite this, compared to AI “social” apps like Meta’s Vibes, Sora has one genuinely (if perhaps temporarily) intriguing hook: the ability to turn yourself and your friends into memes. OpenAI appears to have tapped into people’s appetite for transforming themselves into fictional versions of themselves, like a Studio Ghibli character or some other altered self, and has built an entire app around the idea. So far, Sora’s popularity seems to be outpacing Vibes, with some users reportedly scrolling it like TikTok. The open question is whether AI-generated memes can meaningfully stand in for genuine personal expression once the novelty of seeing Altman in a cat costume wears off.
Upon signing up for Sora, I received a content advisory, warning, “You are about to enter a creative world of AI-generated content.” It also notified me, “We may train on your content and use ChatGPT memories for recommendations, all controllable in settings.”
So far, my feed is essentially made up of OpenAI employees parodying themselves and the company, a lot of deepfake instructional videos on how to use Sora, and a handful of animal videos. The volume of OpenAI people isn’t necessarily surprising — they’ve been using the app for a while, and invites to the public are still restricted. But it was still striking how hard it was to find anything else.
No matter how people feel about Sora so far, though, the broad consensus seems to be that our perception of what’s real and what isn’t may never be the same.
I reluctantly completed the signup flow allowing Sora to generate videos using my own likeness, which involved moving my head from side to side and saying a sequence of three numbers. When I first tried to generate a video of myself, the app told me that it was under “heavy load” and to “try again later.” Then when I asked for a video of myself “running through a meadow,” it said that was a “content violation” and couldn’t be made, adding that “this content may include suggestive or racy material.” When I traded the word “running” for “frolicking,” though, the app came through.
(One note of caution: if you do sign up for Sora, it’s currently not possible to delete your account without also deleting your ChatGPT account — and you won’t be able to sign up with the same email address or phone number again. OpenAI said it’s working on a fix.)
My AI-generated self’s appearance was scarily accurate for most of the video, though my voice was off and the face at the beginning looked a little warped. My group chat had mixed reactions. “That’s wild. Why does it look like you?” one friend said. Another said, “At the end it looks like you … I still don’t get why this exists. Who is asking for this?” The final member of the chat weighed in with, “Hate everything about this … deeply triggering.”
The app lets you choose who can create “cameos” with your likeness: just yourself, people you approve, mutuals, or everyone. Many OpenAI employees, and Altman himself, have chosen the setting that allows anyone to create videos with their likeness.
During the briefing with reporters on Monday, and in a release on Tuesday, OpenAI made a lot of promises, including that it was being restrictive about public figures (unless they’ve granted use of their likenesses) for “this rollout.” A little testing at The Verge suggests the restriction is pretty zealous: when we tried to create a “young firebrand congresswoman,” the app refused until we swapped in the generic “politician,” though a prompt for a “successful tech exec wearing glasses and a black turtleneck,” in the mold of Steve Jobs, worked (albeit not with Jobs’ face). But two of OpenAI’s other big claims seemed to fall short within just 24 hours: that the company can stay ahead of copyright violations, and that it can control the flow of potential misinformation created on the app.
People have already flagged a range of potential copyright violations and other issues with Sora. 404 Media reported seeing Nazi SpongeBobs and criminal Pikachus on the app, and one X user posted examples saying they were able to generate characters from Avatar and The Legend of Zelda, as well as Batman and Baby Yoda. During my testing, I saw a video of Rick and Morty, but when I tried to get around the copyright rules to generate a princess that looked like Elsa from Frozen or a superhero dressed like Spider-Man, my prompts got flagged for content violations.
Even as an AI reporter, it was tough for me to tell the difference between AI and reality when it came to some hyper-realistic videos of Altman and OpenAI employees proliferating on Sora. OpenAI wrote in a release earlier this week that “every video made with Sora has multiple signals that show it’s AI-generated,” such as metadata and a moving watermark on downloaded clips. But that watermark may be omitted for ChatGPT Pro users on Sora.com, OpenAI spokesperson Leah Anise told The Verge in a statement on Wednesday.
Screen recording also isn’t supposed to be possible within the app. But in my own testing, I found that both screenshots and sound recordings were possible, meaning it’d be very easy to pass off a deepfake of someone’s voice as real, or a screengrab of them doing something they shouldn’t be. The Sora watermark on downloaded content isn’t very large, either. I found that you can screen-record with both audio and video as long as you’re watching the video link in a web browser on mobile, and at first I didn’t even notice the watermark it added. Staff at The Verge have also seen plenty of videos ostensibly from Sora circulating on platforms like X with no watermark, and a cursory Google search turned up a whole host of methods for removing such a watermark using other AI tools. If history is any guide, workarounds for the guardrails OpenAI has set seem inevitable, especially in the misinformation age. As we wrote on Wednesday, a Microsoft engineer warned last year that the company’s AI image generator ignored copyrights and generated sexual, violent imagery with little to no effort, and xAI’s Grok recently generated deepfake nude videos of Taylor Swift.
So far, Sora’s appeal comes down to one thing: it’s fun to make dumb videos featuring your friends (and, for a lot of people, apparently Sam Altman). But how long can that really propel a TikTok copycat — and is that a good enough foundation for an entire AI-generated social media app?