WASHINGTON – On Wednesday, Anthropic argued to an appeals court that its AI tool, Claude, cannot be altered once integrated into classified Pentagon military systems, a claim aimed at countering the Trump administration’s portrayal of the fast-growing tech company as a potential supply chain threat.
In a 96-page submission to the U.S. Court of Appeals in Washington, D.C., Anthropic’s legal team laid out its position in a legal battle stemming from a contractual disagreement over AI’s role in autonomous weaponry and possible surveillance measures within the U.S.
Anthropic, headquartered in San Francisco, argues that the Pentagon is unlawfully penalizing it by applying a security-risk designation, a label intended to thwart foreign threats to national security systems.
Earlier this month, the appeals court turned down Anthropic’s bid for an injunction that would temporarily halt the Pentagon’s actions while the court continues its examination of the case.
The recent filing by Anthropic seeks to address specific questions posed by the court in preparation for oral arguments set for May 19. The Trump administration will have an opportunity to submit its counterarguments before the scheduled hearing.
Anthropic’s recent challenge in Washington follows its success in a similar case in a San Francisco federal court, which led to the removal of the adverse labels by the Trump administration, according to court records.
But the absence of a similar order in the parallel Washington case continues to cast a cloud over Anthropic, whose AI tools have made it a rising tech star alongside rival OpenAI. After the Pentagon canceled a $200 million contract with Anthropic in the wake of the dispute, OpenAI struck a deal to provide its technology to the U.S. military.
Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.