The article expresses strong skepticism about the OECD’s hasty decision to include an AI Literacy component in the 2029 Programme for International Student Assessment (PISA), which tests 15-year-olds globally. The authors argue that this rapid push for a global metric, to be implemented within just a few years, risks obscuring critical, complex questions about AI’s role in society. Their main concerns are threefold: the proposed framework is ill-defined; the effort sidelines teachers as critical mediators; and the entire exercise risks becoming a tool to promote and normalize the political economy of AI rather than genuinely assessing critical understanding.
AI Literacy as an Ill-Defined, Premature Concept
A primary critique is that the concept of “AI literacy” itself is preliminary and ill-defined in the draft PISA framework. The OECD’s plan outlines four competencies: engaging with AI, creating with AI, managing AI, and designing AI.
The authors agree with a growing research community that regards these definitions as hasty and vague. By framing AI literacy as a distinct, universally measurable capability, the OECD signals an intent to impose its own understanding onto global education systems. The concern is that this “infrastructuring AI literacy” prioritizes measurement and testing over the profound, evolving, and sometimes existential ethical and social questions surrounding AI.
Obscuring the Political Economy of AI
The article argues that the rush to measure AI literacy distracts from the crucial underlying political economy that is driving the technology. By focusing narrowly on student competencies, PISA’s approach risks obscuring essential questions about the relationships between business markets, states, and the popularization of AI.
Testing AI literacy, in this view, becomes less about critical education and more about legitimizing and implementing the business models and technological vision favored by powerful tech companies and international organizations. The assessment, therefore, may end up serving as a global marketing tool that forces schools to accept and align with a pre-packaged, commercialized future.
Marginalizing the Role of the Teacher
A key finding from the authors’ preliminary analysis of the draft AI Literacy Framework is the near-absence of any discussion of teachers and formal schooling. In the framework document, “AI” is mentioned 442 times and “learners/students” roughly 126 times, while “teachers” appear only 10 times and “schools” only 9 times.
This imbalance suggests that teachers are being written out of any major, critical role in mediating these frameworks. Where teachers are mentioned, they appear as little more than a “prop” for the technology, rather than as the essential, critical mediators students need in order to understand AI’s complex ethical and societal implications within a classroom context.