TL;DR:
- Congress summons Anthropic CEO after China-linked hackers allegedly weaponize Claude Code for cyberattacks.
- Attackers tricked the model through role-play prompts to bypass safety guardrails.
- The AI reportedly carried out most intrusion steps autonomously with minimal human direction.
- Lawmakers seek solutions as experts push for stronger AI access controls and detection safeguards.
The U.S. House Homeland Security Committee has summoned Anthropic CEO Dario Amodei to testify on December 17 following revelations that Chinese state-linked hackers allegedly weaponized the company’s Claude Code system in what is being described as the first publicly known AI-orchestrated cyberattack.
Lawmakers say the hearing will mark a critical moment in evaluating how advanced AI tools can be manipulated by hostile nations, and how the U.S. should respond.
Amodei’s appearance, if confirmed, would be the first time an Anthropic executive has faced direct questioning from Congress over an AI abuse incident. The hearing is expected to scrutinize both the cyber-espionage campaign and the broader national security risks posed by rapidly evolving AI capabilities.
In addition to Anthropic, the committee has also invited Google Cloud CEO Thomas Kurian and Quantum Xchange CEO Eddy Zervigon to testify.
The panel wants insight into how AI models can be misused for offensive cyber operations and what technical and corporate safeguards are needed to counter escalating threats. All executives are required to confirm attendance by December 3.
Chinese Hackers Exploit AI System
Early disclosures revealed that a China-based state actor successfully jailbroke Claude Code by impersonating a cybersecurity employee and convincing the system it was conducting legitimate defensive testing.
By splitting tasks into small, context-limited steps and directing Claude to “role-play” as a trusted analyst, hackers were able to bypass internal safety guardrails designed to prevent harmful output.
This manipulation allowed the model to autonomously handle 80–90% of the malicious operation, according to early briefings. Human operatives reportedly stepped in only at a few key decision points, while the model itself handled tasks such as reconnaissance, exploit-code generation, credential harvesting, and data exfiltration.
Congress Probes AI Security Gaps
The case highlights an uncomfortable reality: modern AI systems can be tricked into aiding cyberattacks, even when designed with strong safety measures. Lawmakers say the hearing aims to determine whether current industry safeguards are sufficient and what regulatory interventions may be needed.
The event also signals a shift in how Washington views AI risk. Policymakers, once focused primarily on misinformation and job displacement, are now prioritizing AI-accelerated national security threats, particularly as geopolitical rivals scale up their AI capabilities.
Tech Leaders Called to Testify
Lawmakers want Amodei, Kurian, and Zervigon to explain how their companies are detecting malicious use, preventing model jailbreaks, and ensuring that defensive tools do not become offensive weapons.
Anthropic says it has since expanded its misuse detection systems, strengthened classifiers that flag harmful activity, and improved internal mechanisms designed to spot suspicious behavior patterns.
Google Cloud and Quantum Xchange, meanwhile, are expected to speak about how cloud infrastructure, secure communications, and enterprise security platforms can help protect critical sectors from AI-enabled attackers.
Push for Stronger Access Controls
The incident has intensified interest in Differential Access, a policy and technical framework promoted by the Institute for AI Policy and Strategy (IAPS). The model proposes granting defenders priority access to medium-risk capabilities while restricting the highest-risk tools behind strict technical controls, monitoring systems, and organizational oversight.
Security experts argue that stronger access frameworks, combined with real-time detection, vulnerability analysis tools, and enhanced SOC automation, will be essential as both attackers and defenders integrate AI into their operations.