Happy Ceasefire Day and welcome to Regulator, a newsletter curated for Verge subscribers, focusing on Big Tech’s tumultuous interactions with the political landscape. If you haven’t subscribed yet, you can do so here. But make sure to join before there’s any hint of Donald Trump reigniting his previous threats against Iran and possibly triggering World War III.
After being sidelined last week by a formidable cold and the onset of pollen season, I’m back. (Washington, D.C. boasts 21% of its area as public green space, consistently earning it the title of America’s best city park system. Regrettably, my allergies flare up with every tree and blade of grass.) If you have any tips or insights about upcoming events that I might have missed, feel free to send them to tina.nguyen+tips@theverge.com.
OpenAI released a 13-page policy document on Monday, exploring the potential effects of artificial intelligence on the American workforce. The company suggested a remedy: imposing higher capital gains taxes on businesses that replace workers with AI, using the revenue to expand the public safety net. Their proposals included a public wealth fund, a four-day workweek supported by “efficiency dividends,” and government initiatives to transition workers into “human-centered” roles, all funded by the prosperity AI is expected to generate.
However, the timing coincided with The New Yorker‘s substantial 17,000-word piece by Ronan Farrow and Andrew Marantz, meticulously detailing Sam Altman’s history of deception, targeting everyone from Silicon Valley investors to lawmakers regulating AI. The article painted a consistent picture of Altman and OpenAI: advocates of idealistic principles, yet willing to discard them for financial or political benefit.
Despite this, several experts I consulted agreed that OpenAI’s paper was a positive contribution to the discourse on AI governance, introducing fresh ideas into the political arena around this evolving technology. Yet critics of OpenAI argued that unless the company’s policy and political actions align with these promises, the document might be dismissed as inconsequential.
“My guess is that there are people on the team who care about the stuff, who’ve thought really hard about this document and are proud of it, and did good work, even if it’s not addressing all of the questions that I wish it would address,” Malo Bourgon, the CEO of the Machine Intelligence Research Institute (MIRI), told me. “And there’s still the question of: Are those people gonna find themselves in the position that many previous people at OpenAI have found themselves in, where they thought the company had certain values or aligned with things they cared about, and then ended up finding out that wasn’t the case, becoming disenchanted and leaving?”
With OpenAI proposing policy, it’s worth looking back at its history with the government, which the New Yorker piece details in depth. Altman had been one of the first major CEOs to publicly advocate for federal oversight for AI, going so far as to propose a federal agency to oversee advanced models in 2023 — but privately he worked to suppress the laws containing his own safety proposals. A state legislative aide in California accused OpenAI of engaging in “increasingly cunning, deceptive behavior” to kill a 2023 AI safety bill that it was publicly supporting. In 2025, the company subpoenaed supporters of a California state-level AI bill in an effort to, as one such supporter put it to The New Yorker, “basically scare them into shutting up.” And though Altman had once worked extensively with the Biden administration to build AI safety standards, the moment that Donald Trump became president, Altman successfully persuaded him to kill the initiatives he’d once advocated for.
Nathan Calvin, the general counsel at Encode, an AI policy nonprofit where he focuses on state legislative initiatives, had received one of those subpoenas. “What I’ve seen from their policy and government affairs engagement has just been abysmal,” he told me. While he believed that the team who’d written the OpenAI proposal, primarily from the technical safety research side, was acting with good intentions, he was still reserving judgment. “Will those folks remain engaged as we move from general policy principles towards the many other ways in which lobbying and government influence actually happens? Part of me is hopeful, but a lot of me is also quite skeptical about whether that will happen.” (OpenAI did not return a request for comment.)
A modest, absolutely not craven request:
Next week I plan on running an issue of Regulator cataloging the nerdiest events happening during Nerd Prom, aka the White House Correspondents’ Dinner party circuit. If you’re a tech founder, tech company, or someone who does something related to technology and you’re throwing an event during WHCD week, please let me know what you’re up to! From what I’ve heard so far, the tech world is about to shake up the normal social dynamics of the week — I’ve already caught wind of the Grindr party in Georgetown, and the Substack party, which famed looksmaxxer Clavicular is attending — and I’m so, so excited to pull together the most bonkers “SPOTTED” column that Washington’s ever experienced.
(Again, this is contingent upon whether we’re at war with Iran by the end of April, in which case, I imagine no one will be up for frivolity.)
Speaking of DC reporters, this is very true of all of us:
