Pentagon-Anthropic feud has sales and AI warfare at stake as Friday deadline looms
By David Jeans, Jeffrey Dastin and Deepa Seetharaman
NEW YORK, Feb 27 (Reuters) – An explosive feud between the Pentagon and top artificial intelligence lab Anthropic is set to come to a head by 5:01 p.m. (2201 GMT) on Friday over concerns about how the military could use AI at war.
The dispute, barreling toward a deadline set by the Pentagon for resolution, is widely seen as a referendum on how powerful AI could be deployed by the military and how its risks are managed.
The Pentagon wants any lawful use to be allowed and has threatened Anthropic’s business if the startup does not scrap additional guardrails.
“It’s a shot across the bow about the future of artificial intelligence and its use on the battlefield,” Chris Miller, the former acting secretary of defense, told Reuters. He added that the outcome will “be an acid test for those companies that claim to want to use AI humanely.”
The months-long spat has divided some industry leaders, military officials and lawmakers over whether AI should be wielded without constraints, even as Anthropic, the technology's creator, has said it is not yet reliable enough for fully autonomous weapons.
Democratic Senator Elissa Slotkin weighed in on Thursday: “The average person does not think we should allow weapons systems to get into war and kill people without a human being overseeing that in some way.”
Speaking at a confirmation hearing for two assistant defense secretary nominees, Slotkin added: “I certainly don’t think any American, Democrat or Republican, wants mass surveillance on the American people.”
The Pentagon, which the Trump administration renamed the Department of War, has pushed back on the dilemma as a false choice “peddled by leftists in the media.”
“The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement,” Pentagon chief spokesperson Sean Parnell posted on X Thursday.
NEGOTIATIONS FALTER
The Pentagon has signed $200-million ceiling agreements with major AI labs in the past year, including Anthropic, OpenAI and Google. It is pushing companies to agree to scrap their usage policies in favor of abiding by an all-lawful use clause.
Anthropic, continuing these talks, has maintained red lines over the military's use of its Claude AI models for autonomous weapons and domestic surveillance. Anthropic was the first of these AI companies to work with classified information, through a supply deal via cloud provider Amazon.
Anthropic CEO Dario Amodei, famous for quitting OpenAI in 2020 over concerns about AI technology’s stewardship, has warned that AI has advanced faster than the law.
Powerful technology could hoover up disparate material to gather intelligence on unwitting civilians, he said in a Thursday blog post, a prospect that critics view as a legal loophole.
“Anthropic understands that the Department of War, not private companies, makes military decisions,” but AI in narrow cases “can undermine, rather than defend, democratic values,” Amodei said.
Amodei met with Defense Secretary Pete Hegseth this week. Afterward, the Pentagon gestured toward compromise and sent the startup revised contract language.
But the two parties remained at an apparent impasse.
An Anthropic spokesperson said on Thursday, “The contract language we received overnight from the Department of War made virtually no progress” and would allow “safeguards to be disregarded at will.”
BUSINESS THREATS
Key business for Anthropic is at stake.
The Pentagon warned it would terminate its work with the startup and declare it a supply-chain risk if Anthropic did not accede to the department’s demand for all-lawful use of AI.
The designation, typically reserved for suppliers in adversary nations, means that defense contractors could be barred from deploying Anthropic's AI in work for the Pentagon.
The setback comes as Anthropic races to win sales to businesses and government, with national security an area of focus.
The Pentagon has asked contractors including Lockheed Martin to give an appraisal of their reliance on Anthropic ahead of the risk designation, Reuters reported on Wednesday. The defense industrial base totaled around 60,000 contractors including major public companies as of 2021.
The Pentagon made a second threat, the legality of which some experts have questioned.
“If they don’t get on board, SecWar will ensure the Defense Production Act is invoked on Anthropic,” a senior Pentagon official told Reuters, “compelling them to be used by the Pentagon regardless of if they want to or not.”
(Reporting by David Jeans in New York and Jeffrey Dastin and Deepa Seetharaman in San Francisco; Editing by Kenneth Li)