Salem Radio Network News Saturday, February 28, 2026

Business

What to know about the clash between the Pentagon and Anthropic over military’s AI use


WASHINGTON (AP) — A high-stakes dispute over military use of artificial intelligence erupted into public view this week as Defense Secretary Pete Hegseth brusquely terminated Anthropic’s work with the Pentagon and other government agencies, using a law designed to counter foreign supply chain threats to slap a scarlet letter on a U.S. company.

President Donald Trump and Hegseth accused rising AI star Anthropic of endangering national security after its CEO Dario Amodei refused to back down over concerns the company’s products could be used for mass surveillance or autonomous armed drones.

The San Francisco-based company has vowed to sue over Hegseth’s call to designate Anthropic a supply chain risk, an unprecedented move to apply a law intended to counter foreign threats to a U.S. company.

Anthropic said it would challenge what it called a legally unsound action “never before publicly applied to an American company.”

The looming legal battle could have huge implications for the balance of power in Big Tech at a critical juncture, as well as for the rules governing military use of AI and other guardrails set up to prevent the technology from posing threats to human life.

The dustup already has resulted in a coup for ChatGPT maker OpenAI, which seized upon an opportunity to step into the void to make its technology available to the Pentagon after Anthropic objected to some of the Trump administration’s terms. It’s a turn of events likely to deepen the animosity between OpenAI CEO Sam Altman, who was temporarily ousted by his own board in late 2023 over questions about his trustworthiness, and Amodei, who left OpenAI in 2021 to launch Anthropic partly because of concerns about AI safety.

The Department of Defense’s move to label Anthropic a risk to the nation’s defense supply chain will end its contract with the AI company, worth up to $200 million. It will also, according to the Pentagon, prohibit other defense contractors from doing business with Anthropic.

Trump wrote on Truth Social that most government agencies must immediately stop using Anthropic’s AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.

Anthropic argues that Hegseth doesn’t have the legal authority to stop business relationships with other defense contractors. Any company that still holds a commercial contract with Anthropic can continue to use its products for non-defense projects, the company wrote in a statement.

The supply chain risk designation was created to give American military leaders a way to limit the Pentagon’s exposure to companies posing a potential security risk. The list has typically included firms with ties to adversaries, such as telecom giant Huawei, which has links to China, or cybersecurity specialist Kaspersky, which has links to Russia.

In the case of Anthropic, the designation serves as a warning to other AI and defense companies: Fail to meet our demands and you will be blacklisted.

“We don’t need it, we don’t want it, and will not do business with them again!” Trump said on social media.

Trump’s six-month grace period for the Pentagon essentially opens a window for other companies to get the classified security clearances that are needed to work with the agency.

Anthropic says it has yet to be formally notified of Hegseth’s designation.

“When we receive some kind of formal action, we will look at it, we will understand it and we will challenge it in court,” Amodei vowed during an interview with CBS News that will be aired Sunday morning.

For now, Anthropic is trying to convince businesses and government agencies that the Trump administration’s supply chain risk designation only affects the use of Claude, its AI chatbot and computer coding agent, by military contractors doing work for the Department of Defense.

“Your use for any other purpose is unaffected,” Anthropic wrote in its statement.

Making that distinction clear is crucial for Anthropic because most of its projected $14 billion in revenue this year comes from businesses and government agencies that are using Claude for computer coding and other tasks. More than 500 customers are paying Anthropic at least $1 million annually for Claude, according to an announcement disclosing an investment that had valued the company at $380 billion.

Anthropic’s Claude technology has been gaining so much traction that it has emerged as a viable replacement for a wide range of business software tools currently sold by major tech companies such as Salesforce and Workday. That potential has caused the stocks of companies that sell business software as a service to plunge this year.

But now that Anthropic has been labeled a supply chain risk, there is some uncertainty about whether its customers will still feel comfortable using Claude for non-military work and risk drawing Trump’s ire. Any widespread reluctance to use Claude, despite all the inroads it has made during the past year, might slow the advance of AI in the U.S. at a time the country is racing to stay ahead of China in a technology that is expected to reshape the economy and society.

At the same time, Anthropic and Amodei may now have a bully pulpit to push their agenda for erecting sturdier guardrails around how AI operates.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company said. “We will challenge any supply chain risk designation in court.”

In his interview with CBS, Amodei portrayed Anthropic’s dispute with the Trump administration as a stand for democracy.

“Disagreeing with the government is the most American thing in the world,” Amodei said. “And we are patriots. In everything we have done here, we have stood up for the values of this country.”

Hours after its competitor was punished, OpenAI’s Altman announced on Friday night that his company struck a deal with the Pentagon to supply its AI to classified military networks. But Altman said that the same AI restrictions that were the sticking point in Anthropic’s dispute with the Pentagon are now enshrined in OpenAI’s new partnership.

In a memo obtained by The Associated Press, Altman told OpenAI employees: “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”

It is unclear why the Pentagon agreed to OpenAI’s red lines but not Anthropic’s. But in his memo, Altman wrote that the company believes it can “de-escalate things” by working with the Pentagon while still adhering to sound safety protections.

OpenAI’s deal with the Trump administration came on the same day it announced raising another $110 billion as part of an infusion that values the San Francisco-based company at $730 billion.

But OpenAI also may face a potential backlash if its work with the Pentagon is widely viewed by U.S. consumers who use ChatGPT as an instance of putting the pursuit of profit ahead of AI safety.

The Anthropic rift could also open new opportunities for Elon Musk, who co-founded OpenAI with Altman in 2015 before the two had a bitter falling out over safety concerns and financial issues. Musk has accused Altman of fraud and other deceitful behavior in a case scheduled to go to trial in late April.

Musk now oversees the AI chatbot Grok, which the Pentagon also plans to give access to classified military networks despite questions about its safety and reliability, on top of government investigations into its creation of sexualized deepfake images. Musk has already been cheering on the Trump administration in its spat with Amodei, saying on his social media platform X that “Anthropic hates Western Civilization.”

Google, which has developed a suite of widely used AI tools built on its Gemini technology, also could be in the running for more business from the U.S. military, although an outspoken flank of its workforce has been imploring executives to avoid doing deals that would violate the company’s former motto, “Don’t be evil.” Google’s executives so far haven’t publicly discussed Anthropic’s falling out with the Trump administration.

____

Liedtke reported from San Ramon, California.

