President Donald Trump said he is directing all federal agencies to stop using artificial intelligence technology from the company Anthropic, escalating tensions between the administration and one of the country’s leading AI developers over how the military and government should use emerging technology.
In a post Friday on his Truth Social platform, Trump accused Anthropic of trying to “strong-arm” the U.S. government by imposing its terms of service on military operations, calling the company “radical left” and warning of potential civil and criminal consequences if it does not cooperate with a six-month phaseout period.
“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars,” Trump wrote, adding that he had ordered agencies to “immediately cease all use” of the company’s technology.
Anthropic did not immediately respond to a request for comment.
Growing tensions over AI and defense use
The dispute reflects broader friction between AI companies and governments over military applications of artificial intelligence. Anthropic, founded by former OpenAI researchers, has positioned itself as a safety-focused AI developer and has published policies restricting certain uses of its models, including applications involving weapons development, autonomous combat systems and activities that could cause harm.
Those restrictions have occasionally created uncertainty for defense contractors and government agencies exploring AI adoption, according to analysts who study the sector. While companies including Microsoft, Amazon and Palantir have pursued defense partnerships more openly, firms like Anthropic and OpenAI have faced internal and external scrutiny over whether advanced AI should be used in warfare or surveillance contexts.
In recent years, U.S. officials have pushed to accelerate AI integration across federal agencies, including the Pentagon, intelligence community and homeland security operations, citing competition with China and other geopolitical rivals. At the same time, policymakers have warned about risks ranging from autonomous weapons to misinformation and cyberattacks.
Legal and practical questions
Trump's announcement raises questions about procurement authority and about whether the federal government can penalize a private technology company for enforcing its own usage policies. Federal agencies typically acquire software through contracts governed by procurement law, and experts say any broad ban or enforcement action could face legal challenges.
It was also unclear how extensively Anthropic's technology is currently used across federal agencies. Some government programs rely on commercial AI systems through cloud providers or subcontractors, which could complicate efforts to identify and remove a specific vendor's products.
The Pentagon has increasingly experimented with generative AI tools for logistics, intelligence analysis and administrative tasks, though officials have emphasized the need for human oversight and compliance with ethical guidelines.
Political rhetoric around AI companies
Trump’s criticism of Anthropic echoes broader political debates over whether major technology firms are biased or ideologically driven — claims companies generally deny. AI developers have said safety policies are intended to prevent misuse and comply with legal obligations, not to impose political viewpoints.
The White House did not release a formal executive order Friday, and it was not immediately clear whether Trump’s directive would be implemented through procurement guidance, regulatory action or other mechanisms.
The dispute comes as the administration has emphasized national security competition in artificial intelligence and sought to expand domestic AI development, including partnerships with private industry.
