AI safety conference in Paris raises questions about global governance
Summary
More than 800 researchers from 65 countries met at IASEAI'26 in Paris to outline current AI safety risks and governance ideas, and the article reports the United States sent no meaningful delegation to the conference.
Content
More than 800 researchers and experts gathered at UNESCO House in Paris for IASEAI'26 to discuss AI safety and governance. The meeting emphasized risks that arise as systems move from chatbots to autonomous agents and as AI-generated content undermines evidence. Participants presented proposals for independent oversight and verification as practical governance tools. The article reports that the United States did not send a meaningful delegation and that U.S. policy shifts have deprioritized coordinated safety frameworks.
Key points:
- Over 800 researchers from 65 countries attended IASEAI'26 in Paris to assess AI safety challenges.
- The researchers highlighted risks including agentic AI, large-scale automated manipulation, and erosion of evidentiary trust from synthetic content.
- The article reports that the United States sent no meaningful delegation and that recent U.S. policy choices emphasize competitiveness over international safety coordination.
- The article mentions that Anthropic refused government demands to alter its safeguards and was subsequently barred from some government use; independent verification models (an IVO marketplace) were proposed but remain voluntary for now.
Summary:
Researchers at IASEAI'26 urged stronger, institutionally durable governance and proposed independent verification as a practical path forward, while noting the limits of voluntary schemes.
