A jury said Instagram and YouTube are defective — now what?

Is social media not just harmful but legally actionable? Should tech giants be held accountable for it? According to two US juries, and a wealth of outside commentary, the answer appears to be yes.

This week, juries in New Mexico and Los Angeles found Meta liable for harming minors and awarded substantial damages. YouTube was also found liable in Los Angeles, and both companies plan to appeal. The verdicts may seem surprising given that Meta and Google typically enjoy protection under Section 230 and the First Amendment, but they also feel like a foreseeable development. In 2026, the internet is dominated by a handful of commercial platforms that many people dislike, and the harms associated with them are often plain to see. What remains uncertain is what impact these rulings will have, and what unintended consequences might follow.

If these rulings survive the appeals process, which is far from guaranteed, the most immediate consequence would be large monetary damages. Depending on the outcomes of additional “bellwether” cases in Los Angeles, a broader settlement could follow. Even now, it is a victory for the legal theory that social media services should be treated as defective products, a strategy designed to get around Section 230’s protections that has frequently stumbled in court. “The California case marks the first occasion where social media has faced the scrutiny and decision of a jury for specific personal injuries,” attorney Carrie Goldberg, a pioneer of social media liability cases, told The Verge. “This signifies the beginning of a new chapter.”

For many advocates, the ultimate goal is to demonstrate that the lawsuits will keep coming unless companies change their business practices. But which practices? In New Mexico, a jury was convinced that Meta had misled users about the safety of its platforms. In Los Angeles, plaintiffs argued that Instagram and YouTube were designed to foster social media addiction that harmed a teenage user. Companies like Meta and Google might adjust certain features or become more careful in their public statements. But every case is different, and there is no universal answer to what needs fixing.

Eric Goldman, a legal blogger and expert on Section 230, sees clear legal danger ahead for social media services. “These rulings indicate that juries are willing to impose major liability on social media providers based on claims of social media addiction,” Goldman wrote after the ruling. In an email to The Verge, he noted the issue was bigger than just juries. “Judges are certainly aware of the controversies around social media,” Goldman said. In the Los Angeles case and other upcoming bellwether trials, “the judges have not given social media defendants much benefit of the doubt, which is how the plaintiffs’ novel cases were able to reach trials in the first place.” It’s a situation, he says, that “does feel differently compared to a decade ago.”

Goldman pointed out that New York and California have also passed laws banning “addictive” social media feeds for teens — so even if an appeals court reverses the recent decisions, that won’t necessarily turn back the clock.

The best-case outcome of all this has been laid out by people like Julie Angwin, who wrote in The New York Times that companies should be pushed to change “toxic” features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize “shocking and crude” content. The worst-case scenario falls along the lines of a piece from Mike Masnick at Techdirt, who argued the rulings spell disaster for smaller social networks that could be sued for letting users post and see First Amendment-protected speech under a vague standard of harm. He noted that the New Mexico case hinged partly on arguing that Meta had harmed kids by providing end-to-end encryption in private messaging, creating an incentive to discontinue a feature that protects users’ privacy — and indeed, Meta discontinued end-to-end encryption on Instagram earlier this month.

“Judges have not given social media defendants much benefit of the doubt.”

Blake Reid, a professor at Colorado Law, is more circumspect. “It’s hard right now to forecast what’s going to happen,” Reid told The Verge in an interview. On Bluesky, he noted that companies will likely look for “cold, calculated” ways to avoid legal liability with the minimum possible disruption, not fundamentally rethink their business models. “There are obviously harms here and it’s pretty important that the tort system clocked those harms” in the recent cases, he told The Verge. “It’s just that what comes in the wake of them is less clear to me.”

While Reid sees legal risks for smaller platforms with fewer resources in these decisions, he’s not convinced they’re more serious than the challenges new entrants already face in a hyper-consolidated online landscape built on massive amounts of data collection. “There are things that make it hard to do something really new in this space that are driven by the sort of marketplace and the surrounding policy,” he said.

Reid, Goldman, and Masnick all warn there’s a clear chance that the fallout could harm marginalized people who use social media to connect. “There will be even stronger pushes to restrict or ban children from social media,” Goldman told The Verge. “This hurts many subpopulations of minors, ranging from LGBTQ teens who will be isolated from communities that can help them navigate their identities to minors on the autism spectrum who can express themselves better online than they can in face-to-face conversations.”

If platforms like Instagram are inherently damaging and directly comparable to gambling or cigarettes, as critics frequently argue, being kicked off them would be no great loss. But even research suggesting social media can be harmful for adolescents has associated moderate use with better well-being. Conversely, harmful online content like harassment and eating disorder communities flourished well before recommendation-driven, hyper-optimized modern social media; tinkering with specific algorithmic formulas could help, but it may not provide a deep or lasting fix. The appeal of punishing Meta is obvious — what it will mean for everyone else is much less clear.

