Judge sides with Anthropic to temporarily block the Pentagon’s ban

After a prolonged impasse with the Pentagon, Anthropic has achieved a significant breakthrough. A judge has issued a preliminary injunction in favor of Anthropic, temporarily halting its government-imposed blacklist while the legal proceedings continue.

In her ruling, Judge Rita F. Lin of California’s northern district noted that the Department of War labeled Anthropic a supply chain risk due to its “adverse conduct through media channels.” The judge emphasized, “Penalizing Anthropic for drawing public attention to the government’s contract stance constitutes a classic case of unlawful First Amendment retaliation.” The order takes effect in one week.

The conclusion of this legal battle might still be weeks or even months away.

Danielle Cohen, a spokesperson for Anthropic, expressed gratitude in a statement on Thursday, “We appreciate the court’s swift action and are pleased with their agreement that Anthropic is likely to prevail. This legal action was crucial to safeguard Anthropic, our clients, and our collaborators. However, our primary focus remains on constructive engagement with the government to ensure that all Americans benefit from secure and dependable AI technology.”

During the Tuesday hearing, Judge Lin remarked on the broader implications of the case, saying, “This touches on a significant discussion. Anthropic argues that its AI solution, Claude, should not be used for autonomous lethal weapons or domestic surveillance. They contend that the government must agree to these terms if they wish to use the technology. Meanwhile, the Department of War maintains that military leaders should determine the safety of AI applications.”

Judge Lin further clarified, “It’s not my responsibility to determine the correctness of this debate. The Department of War has the authority to choose and purchase AI products. Anthropic and others concur that the Department is at liberty to discontinue using Claude and seek a more flexible AI provider.” She continued, “The central issue here is whether the government overstepped legal boundaries in its actions.”

It all started with a memo sent by Defense Secretary Pete Hegseth on Jan. 9, calling for “any lawful use” language to be written into any AI services procurement contract within 180 days, which would include existing contracts with companies like Anthropic, OpenAI, xAI, and Google. Anthropic’s negotiations with the Pentagon stretched on for weeks, hinging on two “red lines” that the company did not want the military to use its AI for: domestic mass surveillance and lethal autonomous weapons (or AI systems with the power to kill targets with no human involvement in the decision-making process). The rollercoaster series of events that followed has included a barrage of social media insults, a formal “supply chain risk” designation with the potential to significantly handicap Anthropic’s business, competing AI companies swooping in to make deals, and an ensuing lawsuit.

With its lawsuit, Anthropic argues that it was punished for speech protected under the First Amendment, and it’s seeking to reverse the supply chain risk designation.

It’s rare, and potentially unprecedented, for a US company to be named a supply chain risk, a designation typically reserved for non-US companies with potential ties to foreign adversaries. Anthropic’s designation raised eyebrows nationwide and stirred bipartisan controversy over the concern that disagreeing with a presidential administration could expose a business in any sector to outsized retribution.

Anthropic’s own business has been significantly affected by the designation, according to its court filings. The company says it has “received outreach from numerous outside partners … expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic,” and that “dozens of companies have contacted Anthropic” for guidance or information about their rights to terminate usage. Depending on how broadly the government prohibits its contractors’ work with Anthropic, the company alleged that revenue ranging from hundreds of millions to multiple billions of dollars could be at risk.

During Tuesday’s hearing, both parties had a chance to respond to Judge Lin’s questions, which were released in a document the day prior and hinged on matters like whether Hegseth lacked authority to issue certain directives and why Anthropic was named a supply chain risk. The judge also asked, in her pre-released questions, about the circumstances under which a government contractor could face termination for using Anthropic’s technology in their work — for instance, “if a contractor for the Department uses Claude Code as a tool to write software for the Department’s national security systems, would that contractor face termination as a result?”

On Tuesday, the judge also seemed to admonish the Department of War over Hegseth’s X post, which, per Anthropic’s earlier court filings, caused widespread confusion by stating that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

“You’re standing here saying, ‘We said it but we didn’t really mean it,’” Judge Lin said during the hearing, later pressing on the question of why Hegseth wrote the above post barring contractors from working with Anthropic instead of simply designating Anthropic as a supply chain risk.

In a series of questions on Tuesday, Judge Lin asked whether the Department of War plans to terminate contractors on the basis of their work with Anthropic if it’s separate from their work with the department, and a representative for the Department of War responded, “That is my understanding.”

Judge Lin asked, “Let’s say I’m a military contractor. I don’t provide IT to the military. I provide toilet paper to the military. I’m not going to be terminated for using Anthropic — is that accurate?” The representative for the Department of War responded, “For non-DoW work, that is my understanding.” But when the judge asked whether a military contractor providing IT services to the Department of War, but not for national security systems, could be terminated for using Anthropic, the representative for the Department of War did not give a concrete answer.

During the hearing, Judge Lin cited one of the amicus briefs, which she said used the term “attempted corporate murder.” She said, “I don’t know if it’s ‘murder,’ but it looks like an attempt to cripple Anthropic.”

“We are continuing to be irreparably injured by this directive,” a lawyer for Anthropic said during the hearing, citing Hegseth’s nine-paragraph X post.

In a recent court filing, the Department of War alleged that Anthropic could ostensibly “attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations” in the event it felt the military was crossing its red lines — a theoretical situation that the Pentagon said it deemed an “unacceptable risk to national security.” The judge’s pre-released questions seem to challenge that statement, or at least request more information on it, asking, “What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in such acts of sabotage or subversion?”
