On Friday, over 150 concerned parents penned a letter to New York Governor Kathy Hochul, urging her to approve the Responsible AI Safety and Education (RAISE) Act without amendments. This high-profile bill mandates that developers of major AI models, such as Meta, OpenAI, DeepSeek, and Google, establish safety protocols and adhere to transparency requirements when reporting safety incidents.
Having already cleared both the New York State Senate and the Assembly in June, the bill is now at a crossroads. Reports surfaced this week suggesting that Governor Hochul is considering a significant overhaul of the RAISE Act to make it more accommodating for tech companies, similar to adjustments seen in California’s SB 53 following input from major AI players.
Predictably, AI companies have expressed strong opposition to the legislation. The AI Alliance, which includes heavyweights like Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face, voiced its concerns in a June letter to New York lawmakers, labeling the RAISE Act “unworkable.” Additionally, the pro-AI super PAC Leading the Future, backed by entities like Perplexity AI, Andreessen Horowitz (a16z), OpenAI president Greg Brockman, and Palantir co-founder Joe Lonsdale, has been running ads targeting New York State Assemblymember Alex Bores, a co-sponsor of the RAISE Act.
The letter sent to Hochul was organized by ParentsTogether Action and the Tech Oversight Project. It included testimonials from parents who said they had lost children to the detrimental effects of AI chatbots and social media. The signatories described the current RAISE Act as providing “minimalist guardrails” that it is crucial to enact into law.
They emphasized that the bill, as approved by the New York State Legislature, targets only the largest AI developers, those spending hundreds of millions of dollars annually. These companies would be required to report major safety incidents to the attorney general and to publish public safety plans. The legislation also prohibits the release of frontier models that pose a significant threat, defined as one that causes the death or serious injury of 100 or more people, or $1 billion or more in damages, through the creation of weapons or through an AI model acting without human oversight.
The letter further criticized the substantial opposition from big tech companies, stating, “Big Tech’s deep-pocketed opposition to these basic protections looks familiar because we have seen this pattern of avoidance and evasion before.” It pointed to the documented harm to young people’s mental health, emotional well-being, and academic performance that followed the launch of algorithm-driven social media platforms lacking transparency, oversight, or accountability.