
Just over an hour before the Pentagon’s deadline, President Donald Trump weighed in on Anthropic’s refusal to grant the military unrestricted access to its AI technology, warning of repercussions.
WASHINGTON — In a significant escalation of tension, the Trump administration, on Friday, mandated that all U.S. agencies cease the use of Anthropic’s AI technology. This directive came alongside the imposition of severe penalties, highlighting a rare public dispute between the government and the firm over the issue of AI safety.
President Trump, along with Defense Secretary Pete Hegseth and other officials, took to social media platforms to criticize Anthropic. They accused the company of compromising national security by not meeting the deadline to allow the military open access to its AI technology. This followed CEO Dario Amodei’s refusal to comply, citing concerns about potential misuse of the technology that could breach the company’s safety protocols.
Trump expressed his disapproval online, stating, “We don’t need it, we don’t want it, and will not do business with them again!”
Hegseth further labeled the company as a “supply chain risk,” a term often reserved for foreign threats, potentially jeopardizing Anthropic’s vital business partnerships.
In response, Anthropic released a statement on Friday evening, announcing its intention to contest what it described as an unprecedented and legally questionable action, something it claims has never been publicly enforced against an American company before.
Anthropic had said it sought narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it was not interested in such uses and would only deploy the technology in legal ways, but it also insisted on access without any limitations.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company said. “We will challenge any supply chain risk designation in court.”
The government’s effort to assert dominance over the internal decision-making of the company comes amid a wider clash over AI’s role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.
OpenAI strikes deal with Pentagon hours after Anthropic was punished
Hours after its competitor was punished, OpenAI CEO Sam Altman announced that his company struck a deal with the Pentagon to supply its AI to classified military networks, potentially filling a gap created by Anthropic’s ouster.
But Altman said in the Friday night social media post that the same red lines that were the sticking point in Anthropic’s dispute with the Pentagon are now enshrined in OpenAI’s new partnership.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote, adding that the Defense Department “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Altman also said he hopes the Pentagon will “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements.”
Trump and others lash out at Anthropic
Trump said Anthropic made a mistake trying to strong-arm the Pentagon. He wrote on Truth Social that most agencies must immediately stop using Anthropic’s AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.
“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” he wrote in all caps.
After months of private talks exploded into public debate this week, Anthropic said Thursday that the government’s new contract language would allow “safeguards to be disregarded at will.” Amodei said his company “cannot in good conscience accede” to the demands.
Anthropic can afford to lose the contract. But the government’s actions posed broader risks to the company at the peak of its meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.
The president’s decision followed hours of top Trump appointees from the Pentagon and the State Department criticizing Anthropic on social media, though their complaints were contradictory.
Top Pentagon spokesman Sean Parnell said Thursday that Anthropic’s unwillingness to go along with the military’s demands was “jeopardizing critical military operations and potentially putting our warfighters at risk.” Hegseth said Friday that the Pentagon “must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”
In his social media post, Trump also warned that the company had “better get their act together, and be helpful” during the six-month phase-out period or there would be “major civil and criminal consequences to follow.”
However, Hegseth’s designation of Anthropic as a supply chain risk relies on an administrative tool designed for companies owned by U.S. adversaries, intended to prevent them from selling products harmful to American interests.
Virginia Sen. Mark Warner, the top Democrat on the Senate Intelligence Committee, noted that this dynamic, “combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”
Dispute shakes up Silicon Valley
The dispute stunned AI developers in Silicon Valley, where venture capitalists, prominent AI scientists and a large number of workers from Anthropic’s top rivals — OpenAI and Google — voiced support for Amodei’s stand in open letters and other forums.
The move is likely to benefit Elon Musk’s competing chatbot, Grok, which the Pentagon plans to give access to classified military networks, and could serve as a warning to Google, which has a still-evolving contract to supply its AI tools to the military.
Musk sided with Trump’s administration, saying on his social media platform X that “Anthropic hates Western Civilization.”
Retired Air Force Gen. Jack Shanahan, a former leader of the Pentagon’s AI initiatives, wrote on social media this week that “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.”
Shanahan said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines were “reasonable.” He said the AI large language models that power chatbots like Claude, Grok and ChatGPT are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.
Anthropic is “not trying to play cute here,” he wrote Thursday on LinkedIn. “You won’t find a system with wider & deeper reach across the military.”
O’Brien reported from Providence, Rhode Island.
Copyright 2025 Associated Press. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.