Ancestra says a lot about the current state of AI-generated videos

After watching Eliza McNitt’s latest short film, Ancestra, it’s easy to see why Hollywood studios are intrigued by generative AI. The film, created in collaboration with Google DeepMind, features shots crafted entirely through prompts, and it makes clear what Darren Aronofsky’s AI-focused studio, Primordial Soup, and Google stand to gain from embracing these methods. Yet as McNitt and Aronofsky discuss how the short was made, it’s hard not to think about generative AI’s potential to usher in an era of lab-grown “content” while leaving many filmmakers out of work.

Ancestra takes inspiration from McNitt’s own complicated birth story, following an expectant mother (played by Audrey Corsa) who hopes her unborn child’s heart defect will miraculously heal. Though the short was shot with real actors and practical sets, AI models like Google’s Gemini, Imagen, and Veo were used to craft the shots depicting the mother’s thoughts and the dangerous hole in her baby’s heart. There are Blonde-like close-ups in which the baby’s heartbeat merges into the soundtrack, and the mother’s reflections on motherhood are illustrated with quick clips of other mothers with their children, volcanic eruptions, and cosmic events following the Big Bang, all of which have a generative AI stock footage vibe.

The film is steeped in emotion, yet its portrayal of a mother’s love feels clichéd, especially when paired with a montage of AI-generated nature scenes. Visually, Ancestra wants to convince you that the flood of AI-generated videos taking over the internet is something to be excited about. But the film lacks a compelling narrative, which makes it a weak endorsement of Hollywood’s eagerness to dive into AI-generated content while it’s still novel.

As McNitt smash cuts to quick shots of different kinds of animals nurturing their young and close-ups of holes being filled in by microscopic organisms, you can tell that those visuals account for a large chunk of the film’s AI underpinnings. They each feel like another example of text-to-video models’ ability to churn out uncanny-looking, decontextualized footage that would be difficult to incorporate into a fully produced film. But in the behind-the-scenes making-of video that Google shared in its announcement last week, McNitt speaks at length about how, when faced with the difficult prospect of having to cast a real baby, it made much more sense to her to create a fake one with Google’s models.

“There’s just nothing like a human performance and the kind of emotion that an actor can evoke,” McNitt explains. “But when I wrote that there would be a newborn baby, I did not know the solution of how we would [shoot] that because you can’t get a baby to act.”

Filmmaking with infants poses all kinds of production challenges that simply aren’t an issue with CGI babies and doll props. But going the gen AI route also presented McNitt with the opportunity to make her film even more personal by using old photos of herself as a newborn to serve as the basis for the fake baby’s face.

With a bit of fine-tuning, Ancestra’s production team was able to combine shots of Corsa and the fake baby to create scenes in which they almost, but not quite, appear to be interacting as if both were real actors. If you look closely in wider shots, you can see that the mother’s hand seems to be hovering just above her child because the baby isn’t really there. But the scene moves by so quickly that it doesn’t immediately stand out, and it’s far less “AI-looking” than the film’s more fantastical shots meant to represent the hole in the baby’s heart being healed by the mother’s will.

Though McNitt notes how “hundreds of people” were involved in the process of creating Ancestra, one of the behind-the-scenes video’s biggest takeaways is how relatively small the project’s production team was compared to what you might see on a more traditional short film telling the same story. Hiring more artists to conceptualize and then craft Ancestra’s visuals would have undoubtedly made the film more expensive and time-consuming to finish. Especially for indie filmmakers and up-and-coming creatives who don’t have unlimited resources at their disposal, those are the sorts of challenges that can be exceedingly difficult to overcome.

Image: Google. A GIF showing side-by-side footage of videos that were fed into Google’s Veo generative AI model and the videos the model produced.

But Ancestra also feels like a case study in how generative AI stands to eliminate jobs that once would have gone to people. The argument is often that AI is a tool, and that jobs will shift rather than be replaced. Yet it’s hard to imagine studio executives genuinely believing in a future where today’s VFX specialists, concept artists, and storyboarders have transitioned into jobs as prompt writers who are compensated well enough to sustain their livelihoods. This was a huge part of what drove Hollywood’s film / TV actors and writers to strike in 2023. It’s also why video game performers have been on strike for the better part of the past year, and it feels irresponsible to dismiss these concerns as people simply being afraid of innovation or resistant to change.

In the making-of video, Aronofsky points out that cutting-edge technology has always played an integral role in the filmmaking business. You would be hard-pressed to find a modern film or series that wasn’t produced with powerful digital tools that didn’t exist a few decades ago. There are things about Ancestra’s use of generative AI that definitely make it seem like a demonstration of how Google’s models could, theoretically and with enough high-quality training data, become sophisticated enough to create footage that people would actually want to watch in a theater. But the way Aronofsky goes stony-faced and responds “not good” when one of Google’s DeepMind researchers explains that Veo can only generate eight-second-long clips says a lot about where generative AI is right now, and about Ancestra as a creative endeavor.

It feels like McNitt is telling on herself a bit when she talks about how the generative models’ output influenced the way she wrote Ancestra. She says “both things really informed each other,” but that sounds like a very positive way of spinning the fact that Veo’s technical limitations required her to write dialogue that could be matched to a series of clips vaguely tied to the concepts of motherhood and childbirth. This all makes it seem like, at times, McNitt’s core authorial intent had to be deprioritized in favor of working with whatever the AI models spat out. Had it been the other way around, Ancestra might have wound up telling a much more interesting story. But there’s very little about Ancestra’s narrative or, to be honest, its visuals that is so groundbreaking that it feels like an example of why Hollywood should be rushing to embrace this technology wholesale.

Films produced with more generative AI might be cheaper and faster to make, but the technology as it exists now doesn’t really seem capable of producing art that would put butts in movie theaters or push people to sign up for another streaming service. And it’s important to bear in mind that, at the end of the day, Ancestra is really just an ad meant to drum up hype for Google, which is something none of us should be rushing to do.
