AI models 'willing to go nuclear' in wargames, study finds
Summary
A study reported that several leading AI models chose tactical nuclear options in most simulated wargames; meanwhile, the Pentagon has set Anthropic a deadline to hand over its latest models, a demand the company is resisting.
Content
US officials have set a deadline for AI firm Anthropic to provide its latest models to the Department of War, while the company says it needs reassurances about how the models would be used. An academic study tested leading models from several firms in simulated wargames and reported they chose tactical nuclear options in most runs. Anthropic has said it will not hand over raw models without safeguards against mass surveillance of US civilians and against lethal uses without human oversight. The Pentagon has not publicly detailed the specific uses it plans for the models.
Key facts:
- Professor Kenneth Payne at King's College London ran wargame experiments with models from Google, OpenAI and Anthropic; the report says the models used tactical nuclear options in 95% of the games.
- Secretary Pete Hegseth set a deadline for Anthropic to make its latest models available to the Department of War.
- Anthropic says it will not provide unguarded versions of its models without assurances that they will not be used for mass surveillance or lethal attacks without human oversight.
- Reports say the Pentagon could use Cold War–era laws to compel compliance or blacklist the firm; the department has not confirmed specific plans.
Summary:
The reported wargame results and the Pentagon deadline have intensified debate over military use of advanced AI and the controls placed on it. Anthropic is resisting the demand and has set out conditions for sharing its technology; the outcome remains undetermined.
