AI’s elusive exam: The quest to measure human-level intelligence in machines

Artificial intelligence has dazzled with feats from mastering chess to drafting legal briefs, yet one puzzle remains unsolved: how to tell if it truly thinks like us. As large language models consume more of the world’s knowledge, their ability to ace traditional tests might be less a sign of brilliance than a reflection of perfect recall. The challenge is not just inventing harder exams—it’s redefining what “intelligence” means in an era when machines may soon read everything we’ve ever written.

The race to design the ultimate test

Two of San Francisco’s leading AI players—Scale AI and the Center for AI Safety—have thrown down a high-stakes challenge. Their initiative, Humanity’s Last Exam, invites the public to craft questions capable of probing the limits of large language models like Google Gemini and OpenAI’s o1. The reward is enticing: US$5,000 for each of the 50 best questions selected, with the promise of contributing to what could become the definitive benchmark for “expert-level” AI systems.


The push for new tests arises because current benchmarks may be misleading. Many language models now ace exams in mathematics, law, and reasoning, yet much of this success could stem from having already “seen” the answers during training. As these models are fed ever-expanding datasets—approaching, by some estimates, the sum total of human-written text by 2028—traditional assessments risk becoming obsolete. In this landscape, crafting questions that a model cannot anticipate becomes a formidable challenge.

The problem of pre-learned intelligence

In AI, the leap from conventional programming to machine learning was driven by data—not just vast in scale, but carefully structured for both training and testing. Developers rely on “test datasets” that, in theory, are untouched by the model during learning. But when a system has effectively consumed the internet, finding genuinely novel material is no simple feat.
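The train/test separation described above can be sketched in miniature. In this toy example (all names and data are illustrative, not from any real system), a deliberately weak "model" memorises the majority label from the training split and is scored only on held-out examples it never saw:

```python
import random

# Toy labelled dataset: integers tagged even (0) or odd (1)
random.seed(0)
data = [(x, x % 2) for x in range(100)]
random.shuffle(data)

# The held-out test split is untouched during "training"
train, test = data[:80], data[80:]

# "Training": memorise the most common label in the training split
labels = [y for _, y in train]
majority = max(set(labels), key=labels.count)

# "Testing": score generalisation on examples the model never saw
accuracy = sum(1 for _, y in test if y == majority) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the split is exactly the one the article raises: if the test examples leak into training, the score measures recall, not understanding.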

Adding complexity is the looming threat of “model collapse,” where AI-generated content begins to dominate the web. If this synthetic material cycles back into training datasets, the quality of future models could deteriorate. To combat this, developers are turning to real-world interactions, harvesting fresh human input from tools like smart glasses, autonomous vehicles, and other sensor-rich devices. The hope is to keep AI’s understanding grounded in genuine, lived experience rather than an echo chamber of its own creations.

Why narrow success isn’t true intelligence

One of AI’s most dazzling feats—the chess dominance of Stockfish over Magnus Carlsen—illustrates a core problem in measuring machine intelligence. Stockfish is unmatched in its domain, yet utterly incapable of holding a conversation or recognising a human face. Its brilliance is narrow, not general. This mirrors a limitation in many AI benchmarks: they measure discrete skills without capturing adaptability or creativity.

To address this, Google engineer François Chollet introduced the Abstraction and Reasoning Corpus (ARC) in 2019. Rather than rewarding mastery of massive datasets, ARC presents visual puzzles that require inferring a transformation rule from only a handful of examples. Humans routinely score over 90% on ARC challenges. The best AI systems, such as GPT-4o and Anthropic’s Claude 3.5 Sonnet, still fall short, with scores ranging from 21% to 50%, often relying on brute-force generation of candidate solutions rather than elegant reasoning.
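The ARC setup can be sketched as few-shot rule induction. In this simplified, invented task (the grids, candidate rules, and hidden "mirror" rule below are illustrative, not drawn from the real corpus), a solver sees a few demonstration pairs and must find a transformation consistent with all of them:

```python
# Candidate grid transformations the toy solver can hypothesise
def identity(g):  return [row[:] for row in g]
def mirror(g):    return [row[::-1] for row in g]     # reflect left-to-right
def transpose(g): return [list(r) for r in zip(*g)]

CANDIDATES = [identity, mirror, transpose]

# Two demonstration pairs (input grid -> output grid); the hidden
# rule here is a left-right reflection
demos = [
    ([[1, 0], [2, 3]], [[0, 1], [3, 2]]),
    ([[5, 5, 0]], [[0, 5, 5]]),
]

# Keep only the rules consistent with every demonstration
consistent = [f for f in CANDIDATES
              if all(f(inp) == out for inp, out in demos)]

rule = consistent[0]
print(rule.__name__)        # the induced rule
print(rule([[7, 8, 9]]))    # apply it to a novel input
```

Real ARC tasks draw from a far richer, open-ended space of rules, which is precisely why enumerating candidates stops working and why current models resort to generating thousands of candidate programs rather than reasoning their way to one.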

The road toward measuring superintelligence

ARC may be the most credible benchmark yet for testing general reasoning, but it is not the endgame. Initiatives like Humanity’s Last Exam suggest that the community is still searching for diverse, robust ways to assess intelligence without giving away the answers. In some cases, the most incisive questions may never be made public, ensuring that future AIs cannot “study” for them in advance.

This secrecy hints at the deeper stakes: the moment we can reliably detect human-level reasoning, we also edge closer to the harder question—how to recognise, and perhaps contain, superintelligence. Testing for such an entity will require more than clever puzzles. It will demand an understanding of intelligence that transcends human experience, forcing us to define capabilities we have never encountered before. Until then, the exam papers remain unwritten, and the race to craft them continues.
