Anthropic Appoints Manager to Mitigate AI Chemical and Explosive Threats
Firstpost•17-03-2026, 11:04
•Anthropic, a US-based AI firm, has hired a Policy Manager to address risks from chemical weapons and high-yield explosives in its AI systems.
•The role involves shaping how AI handles sensitive information and evaluating AI models' capabilities related to chemical weapons and energetic materials.
•Applicants need 5+ years of experience in chemical weapons or explosives defense, along with deep expertise in related areas.
•Other firms, such as OpenAI, have also recruited for similar "Responsible AI" roles aimed at mitigating catastrophic risks from frontier AI models.
•Experts warn this approach could inadvertently provide AI tools with weapon-related information, despite Anthropic co-founder Dario Amodei's stance against AI use in war.
•A BBC report indicates Anthropic's Claude is still deployed by the US in the US-Israel Iran war, despite the company's previous legal action against the US government over military use.