NSA reportedly uses Anthropic’s restricted AI model Mythos amid Pentagon tensions
The National Security Agency (NSA) is reportedly using Mythos Preview, a highly capable and tightly restricted artificial intelligence model developed by Anthropic, despite ongoing tensions between the AI company and the U.S. Department of Defense. According to a report by Axios, the NSA has gained access to Mythos — a frontier model unveiled earlier this month by Anthropic and explicitly designed for cybersecurity applications — even though the company withheld it from public release over concerns about its potential for offensive cyber operations. Anthropic stated that Mythos is powerful enough that broad release could enable malicious actors to conduct sophisticated cyberattacks, prompting the firm to limit access to approximately 40 organizations, only a dozen of which have been publicly disclosed.

The NSA appears to be among the undisclosed recipients, with sources indicating that the agency is primarily employing Mythos to scan digital environments for exploitable vulnerabilities — a defensive cybersecurity use case that aligns with the model’s stated purpose. This development adds complexity to an already strained relationship between Anthropic and the U.S. military establishment. Just weeks prior, the Department of Defense, the NSA’s parent agency, labeled Anthropic a "supply chain risk" after the company refused to grant Pentagon officials unrestricted access to the full capabilities of its AI models, particularly Claude.

Anthropic’s refusal stemmed from ethical and safety concerns: the company has consistently opposed the use of its technology for mass domestic surveillance or the development of autonomous weapons systems, arguing that such applications pose significant risks to civil liberties and global stability. This stance led to a public dispute with the Pentagon, which has been pushing for broader integration of AI tools into defense and intelligence operations.

Despite this conflict, recent signals suggest a potential thaw in Anthropic’s relationship with the Trump administration. Last Friday, Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles and Secretary of the Treasury Scott Bessent — a meeting interpreted by observers as an effort to reestablish dialogue and possibly negotiate frameworks for responsible AI use in government contexts.

Adding further context, the UK’s AI Security Institute has also confirmed it has access to Mythos, underscoring growing international interest in the model’s defensive cybersecurity capabilities among trusted allied institutions.

The situation highlights a critical tension in the AI governance landscape: how to balance national security imperatives with the ethical responsibility to prevent the misuse of powerful dual-use technologies. While the NSA’s use of Mythos for vulnerability scanning may be defensible as a protective measure, the broader implications — including precedent-setting access controls, lack of transparency, and the potential for mission creep — remain unresolved.

As of now, neither the NSA nor Anthropic has commented publicly on the reports; TechCrunch reached out to both parties for confirmation but received no response.

This development coincides with the upcoming StrictlyVC event in San Francisco on April 30, 2026, where industry leaders and investors will gather to discuss the evolving intersection of technology, policy, and innovation — a conversation that now includes the urgent debate over who controls, and how to govern, the most powerful AI systems emerging today.
