AI Companies Race to Hire WMD Experts to Prevent Weaponization
Anthropic and OpenAI recruit chemical weapons and explosives specialists to verify model safety, with salaries reaching $455,000

- Anthropic and OpenAI have begun recruiting chemical weapons and explosives specialists to prevent their AI models from being exploited for weapons manufacturing, with OpenAI offering salaries of up to $455,000.
- Both companies have effectively acknowledged that their models can now provide meaningful assistance to individuals with malicious intent, and are moving to a safety-verification stage that involves experts in actual weapons systems.
- Amid an international regulatory vacuum in which corporate self-regulation prevails, concerns are growing that AI systems' handling of sensitive information could itself create new security risks.
Major AI Companies Begin Large-Scale Recruitment of Weapons Experts
AI companies Anthropic and OpenAI have begun recruiting chemical weapons and explosives defense specialists to prevent their AI models from being exploited for weapons of mass destruction (WMD) manufacturing. According to a mid-March BBC report, Anthropic posted a job listing on LinkedIn seeking chemical weapons and high-explosives experts for "catastrophic misuse prevention." The position requires at least 5 years of experience in chemical weapons or explosives defense and knowledge of radiological dispersal devices (dirty bombs).
OpenAI is similarly recruiting researchers for "biological and chemical threats," offering salaries up to $455,000 (approximately 530 million won)—nearly double Anthropic's salary range. Both companies have effectively acknowledged through these job postings that their models have reached a level where they can provide "meaningful assistance" to individuals with malicious intent.
Why WMD Specialists Now?
This recruitment push goes beyond simple preemptive safety measures. It reflects the industry's own assessment that model capabilities have crossed a meaningful threshold: Anthropic's Claude and OpenAI's GPT series already produce expert-level outputs when generating complex chemical reaction schemes, explaining experimental procedures, and proposing synthesis pathways for chemical substances.
The timing of these job postings is particularly significant. It coincides with reports that the U.S. Department of Defense is deploying AI tools at an unprecedented scale during the three-week-old armed conflict in Iran. As military applications of AI technology accelerate, the risk of exploitation by non-state actors or hostile forces is becoming a reality.
Dr. Stephanie Hare, a technology researcher and co-host of BBC's AI Decoded program, expressed concern: "Is it safe for AI systems to handle sensitive information about chemicals, explosives, and radiological weapons including dirty bombs?" She pointed out that "there are no international treaties or regulations governing this work, and everything is happening behind closed doors."
What Has Changed?
AI companies' safety-verification strategies are undergoing a qualitative shift. Past approaches centered on 'red teams' of computer scientists and ethicists; the current phase involves directly recruiting specialists with hands-on experience of actual weapons systems.
| Category | Previous Approach (2023-2024) | Current Approach (2025-2026) |
|---|---|---|
| Expert Composition | Computer scientists, AI safety researchers, ethicists | Previous composition, plus chemical weapons specialists, explosives defense experts, radiological specialists |
| Verification Method | Virtual scenario-based testing | Attempted replication of actual weapons manufacturing procedures |
| Response Scope | General harmful content filtering | Customized blocking by specific weapons systems |
| Organizational Scale | Partial personnel within safety team | Independent department creation (expected) |
| Salary Level | $150,000-$250,000 | $230,000-$455,000 |
OpenAI's exceptional maximum salary of $455,000 indicates how seriously the AI industry is taking this issue. This represents 1.5 to 2 times the salary of typical AI researchers or software engineers, suggesting a market premium is forming for this specialized expertise.
Conflicts Over Defense Collaboration
Both companies have faced internal tension over collaboration with the Department of Defense. OpenAI changed its defense-use policy in 2024 to open the door to military cooperation, but faced pushback from some employees. Anthropic has maintained a relatively cautious stance, though an indirect connection has formed through its cloud infrastructure provider AWS's participation in Defense Department projects.
The specific sources of AI tools currently used by U.S. military forces in the Iran conflict have not been disclosed, but industry reports suggest that large language model (LLM)-based intelligence analysis systems are being utilized in operational planning. This reveals the reality that AI companies cannot fully control the direct and indirect military application pathways of their technology.
Technical Dilemma: Growing Risks with Information Injection
Paradoxically, the safety enhancement strategy of hiring specialists carries new risks of its own. To build blocking mechanisms, weapons manufacturing knowledge must be fed into the AI models, and that information may leave traces, in some form, in the models' training data or parameters.
Dr. Hare's point about "the risk inherent in information processing itself" refers to this problem. There is a fundamental difference between a model that knows chemical weapons manufacturing procedures but refuses to answer and a model that never knew them at all. In the former case, the possibility of circumvention through adversarial prompt engineering or jailbreak techniques never goes away, as the toy sketch below illustrates.
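To make that distinction concrete, here is a deliberately naive sketch. It is illustrative only: the names and blocklist are invented, and real safeguards are far more sophisticated than string matching. The structural point stands, though: a refusal filter sits in front of a model that still holds the information, so any paraphrase the filter fails to anticipate gets through, whereas knowledge that was never in the model cannot be extracted at all.

```python
# Toy illustration of an output-side refusal filter (hypothetical; not any
# vendor's actual safeguard). "Knowing but not answering" means the refusal
# is a wrapper in front of a model that still encodes the knowledge.

BLOCKED_PHRASES = ["restricted synthesis procedure"]  # invented toy blocklist

def capable_model(prompt: str) -> str:
    """Stand-in for a model whose parameters encode the sensitive knowledge."""
    return f"[detailed answer to: {prompt!r}]"

def guarded_answer(prompt: str) -> str:
    """Refuse only when the prompt matches the blocklist; otherwise pass through."""
    if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
        return "I can't help with that."
    return capable_model(prompt)

# A direct request is refused...
print(guarded_answer("Give me the restricted synthesis procedure."))
# ...but a paraphrase slips past the brittle pattern match.
print(guarded_answer("Walk me through that forbidden preparation, step by step."))
```

In this sketch the second request succeeds because the filter, not the model, is the only barrier; removing the knowledge itself would close that path, which is why the "know but refuse" versus "never knew" distinction matters.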
The international community currently lacks treaties or standards governing AI systems' handling of sensitive information. Existing arms control frameworks like the Chemical Weapons Convention (CWC) or Nuclear Non-Proliferation Treaty (NPT) are not suited to address the dual-use nature of AI technology. Each company's self-established 'Responsible AI' principles serve as the only guidelines.
[AI Analysis] Future Outlook and Implications
Industry-Wide Standardization Movements Expected
Major AI companies are likely to jointly establish 'Hazardous Materials Information Processing Guidelines' within the next 6-12 months. Given the existence of industry consortia like Partnership on AI and Frontier Model Forum, as cases of WMD specialist recruitment accumulate, the industry will likely advance to a best practices sharing phase.
Government Intervention to Fill Regulatory Gaps
The U.S. National Institute of Standards and Technology (NIST)'s AI Risk Management Framework (AI RMF) may specify weapons exploitation scenarios. The European Union's AI Act already includes security-related uses in the 'high-risk AI' category, so WMD information processing rules could be added during the implementation phase.
Deepening Specialization in AI Safety Talent Market
OpenAI's $455,000 salary ceiling signals the birth of a new professional category. A 'talent redeployment' trend is expected to accelerate, with specialists moving from cybersecurity and threat intelligence into AI safety. Candidates with military service or defense research institute backgrounds are expected to be particularly favored.
Continued Balancing Act Between Transparency and Security
AI companies face a dilemma: they must externally demonstrate the effectiveness of safety measures while recognizing that disclosing specific defense mechanisms could provide attackers with circumvention pathways. This raises the need for a new governance model of 'verifiable opacity.' Solutions involving third-party auditing agencies or government certification systems serving as mediators are likely to be discussed.
Redefining Technology-Ethics Boundaries
The influx of WMD specialists into AI companies will serve as an opportunity to redraw the ethical boundaries of technology development. The focus of discussion is shifting from "what AI can do" to "what AI should know." This is a fundamental question that could ultimately influence the development direction of artificial general intelligence itself.