Artificial intelligence is too important to be policed only by AI
Summary
The article reports that OpenAI uses large teams of human reviewers and logged over 75,000 reports to the National Center for Missing & Exploited Children (NCMEC) in the first half of 2025, while Anthropic relies on a "Constitutional AI" system and recorded about 5,005 NCMEC reports over the same period.
Content
Debate continues over who should police artificial intelligence as the technology becomes more widespread. The article notes that AI creators generally accept the need for human oversight as a way to build public trust. It cites Nvidia CEO Jensen Huang's observation that AI is ultimately software processing data, which people can switch on and off. The piece contrasts OpenAI's growing workforce of human reviewers with Anthropic's "Constitutional AI" approach.
Key facts:
- OpenAI employs a growing workforce of human reviewers, and its systems generated over 75,000 reports to the National Center for Missing & Exploited Children (NCMEC) in the first half of 2025.
- Anthropic uses a "Constitutional AI" method, training machines with an internal "constitution" written by company employees to guide behavior.
- Anthropic recorded about 5,005 NCMEC reports over the same period and was among 17 companies that NCMEC said had submitted too few reports for the organization to act on.
- The article presents the view that AI amplifies human work by processing large amounts of data while humans review and act on important outputs.
Summary:
The article suggests that the differences between human-led moderation and automated "constitutional" policing affect both the effectiveness of abuse reporting and public confidence in AI systems. The longer-term balance between human oversight and machine-based moderation remains an open question.
