AI’s elusive exam: The quest to measure human-level intelligence in machines

Artificial intelligence has dazzled with feats from mastering chess to drafting legal briefs, yet one puzzle remains unsolved: how to tell if it truly thinks like us. As large language models consume more of the world’s knowledge, their ability to ace traditional tests might be less a sign of brilliance than a reflection of perfect recall. The challenge is not just inventing harder exams—it’s redefining what “intelligence” means in an era when machines may soon read everything we’ve ever written.

The race to design the ultimate test

Two of San Francisco’s leading AI players—Scale AI and the Center for AI Safety—have thrown down a high-stakes challenge. Their initiative, Humanity’s Last Exam, invites the public to craft questions capable of probing the limits of large language models like Google Gemini and OpenAI’s o1. The reward is enticing: US$5,000 for each of the 50 best questions selected, with the promise of contributing to what could become the definitive benchmark for “expert-level” AI systems.


The push for new tests arises because current benchmarks may be misleading. Many language models now ace exams in mathematics, law, and reasoning, yet much of this success could stem from having already “seen” the answers during training. As these models are fed ever-expanding datasets—approaching, by some estimates, the sum total of human-written text by 2028—traditional assessments risk becoming obsolete. In this landscape, crafting questions that a model cannot anticipate becomes a formidable challenge.
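The contamination worry above can be made concrete. One crude way auditors probe whether a benchmark question has been "seen" during training is to measure n-gram overlap between the question and the training corpus. The sketch below is a toy illustration of that idea, not any lab's actual audit pipeline; real checks run over terabyte-scale corpora with fuzzier matching.

```python
# Toy sketch of a training-data contamination check.
# Real benchmark audits use vastly larger corpora and approximate matching.

def ngrams(text: str, n: int = 8) -> set:
    """Return the set of n-word shingles in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def contamination_score(benchmark_item: str, training_corpus: str, n: int = 8) -> float:
    """Fraction of the benchmark item's n-grams that also appear in the corpus."""
    item_grams = ngrams(benchmark_item, n)
    if not item_grams:
        return 0.0
    corpus_grams = ngrams(training_corpus, n)
    return len(item_grams & corpus_grams) / len(item_grams)

corpus = ("the quick brown fox jumps over the lazy dog "
          "near the quiet river bank at dawn")
question = "the quick brown fox jumps over the lazy dog near the river"
score = contamination_score(question, corpus, n=4)
print(round(score, 2))  # 8 of 9 four-word shingles also occur in the corpus
```

A high score suggests the question (or something very like it) was in the training data, which is exactly why a model's perfect answer would prove little about its reasoning.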

The problem of pre-learned intelligence

In AI, the leap from conventional programming to machine learning was driven by data—not just vast in scale, but carefully structured for both training and testing. Developers rely on “test datasets” that, in theory, are untouched by the model during learning. But when a system has effectively consumed the internet, finding genuinely novel material is no simple feat.
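The train/test discipline described above is simple in principle: partition the data so the evaluation items are never shown to the model during learning. A minimal sketch of such a split (illustrative names and fractions, not any particular framework's API):

```python
import random

def train_test_split(data, test_fraction: float = 0.2, seed: int = 0):
    """Shuffle the data and partition it so that held-out test items
    are never seen during training."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    items = list(data)
    rng.shuffle(items)
    cut = int(len(items) * (1 - test_fraction))
    return items[:cut], items[cut:]

examples = list(range(100))
train, test = train_test_split(examples)
assert not set(train) & set(test)  # the test set is untouched by training
```

The article's point is that this guarantee breaks down at internet scale: when the training set is "everything", no partition of the remainder is genuinely unseen.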

Adding complexity is the looming threat of “model collapse,” where AI-generated content begins to dominate the web. If this synthetic material cycles back into training datasets, the quality of future models could deteriorate. To combat this, developers are turning to real-world interactions, harvesting fresh human input from tools like smart glasses, autonomous vehicles, and other sensor-rich devices. The hope is to keep AI’s understanding grounded in genuine, lived experience rather than an echo chamber of its own creations.

Why narrow success isn’t true intelligence

One of AI’s most dazzling feats—the chess dominance of Stockfish over Magnus Carlsen—illustrates a core problem in measuring machine intelligence. Stockfish is unmatched in its domain, yet utterly incapable of holding a conversation or recognising a human face. Its brilliance is narrow, not general. This mirrors a limitation in many AI benchmarks: they measure discrete skills without capturing adaptability or creativity.

To address this, Google engineer François Chollet introduced the Abstraction and Reasoning Corpus (ARC) in 2019. The ARC tests strip away massive datasets and instead present puzzles requiring the model to infer rules from minimal examples. Humans routinely score over 90% on ARC challenges. The best AI systems—such as GPT-4o or Anthropic’s Claude 3.5 Sonnet—still fall short, with scores ranging from 21% to 50%, often relying on brute-force solution generation rather than elegant reasoning.
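To give a flavour of the task format: an ARC-style puzzle shows a handful of input/output grid pairs and asks the solver to infer the transformation. The toy sketch below searches a tiny hand-picked rule set for one consistent with all demonstrations; real ARC grids are coloured and the space of possible rules is vastly richer, so this is an illustration of the problem shape, not a solver.

```python
# Toy ARC-style rule inference: given a few (input, output) grid pairs,
# find a candidate transformation consistent with all of them.
# The rule set here is a hypothetical, hand-picked sample.

def flip_lr(grid):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in grid]

def flip_ud(grid):
    """Mirror the grid top-to-bottom."""
    return grid[::-1]

def transpose(grid):
    """Swap rows and columns."""
    return [list(col) for col in zip(*grid)]

CANDIDATE_RULES = {"flip_lr": flip_lr, "flip_ud": flip_ud, "transpose": transpose}

def infer_rule(demos):
    """Return the name of the first rule matching every demonstration, else None."""
    for name, fn in CANDIDATE_RULES.items():
        if all(fn(x) == y for x, y in demos):
            return name
    return None

demos = [
    ([[1, 0], [0, 0]], [[0, 1], [0, 0]]),
    ([[1, 2], [3, 4]], [[2, 1], [4, 3]]),
]
print(infer_rule(demos))  # flip_lr is consistent with both demonstrations
```

The "brute-force solution generation" the article mentions is essentially this search scaled up enormously; what humans seem to do instead is abstract the rule directly from one or two examples.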

The road toward measuring superintelligence

ARC may be the most credible benchmark yet for testing general reasoning, but it is not the endgame. Initiatives like Humanity’s Last Exam suggest that the community is still searching for diverse, robust ways to assess intelligence without giving away the answers. In some cases, the most incisive questions may never be made public, ensuring that future AIs cannot “study” for them in advance.

This secrecy hints at the deeper stakes: the moment we can reliably detect human-level reasoning, we also edge closer to the harder question—how to recognise, and perhaps contain, superintelligence. Testing for such an entity will require more than clever puzzles. It will demand an understanding of intelligence that transcends human experience, forcing us to define capabilities we have never encountered before. Until then, the exam papers remain unwritten, and the race to craft them continues.
