Shafika Isaac: At UNESCO, we’ve identified several high-stakes myths that we need to dismantle if we want to protect the future of learning, particularly equitable and inclusive learning for all. So, one of the big myths is what we call the teacher replacement myth: the persistent idea that AI will render teachers obsolete by becoming a universal tutor and teaching companion, and that we can in fact rely on AI to be responsive to learners’ needs.
UNESCO has found through its own research that the global education system needs 44 million teachers by 2030. This is often used as an argument for investing more in AI technologies at the expense of investing in teachers.
We believe that this is a fundamental category error. AI can manage data transfer, but it cannot manage human development. Education is fundamentally a social, human and cultural experience and not a technical download.
So, the myth of the AI tutor risks reducing teachers to mere facilitators or data managers, when their true role centres on the ethical, the social and emotional, and the decidedly human relationships between students and teachers that strike at the heart of the learning process.
So that’s the first of many myths. Another is what is referred to as the personalisation myth. We often hear that AI can personalise learning, but as we argue in our AI and the Futures of Education publication that we launched last year, what many call personalisation is actually standardised individualisation.
It is a student sitting alone with a screen, following a path chosen by an algorithm that decides what the student learns and directs that student’s pathway of learning. We have repeatedly argued that learning is less an individualised endeavour than a social one, one that needs to focus on cultivating critical thinking, creativity, critical agency, stronger values-based ethical foundations, and an ability to relate to others and the environment, not on keeping learners in a feedback loop that fixes so-called flaws, which is what many personalised learning tools offer. And the big, hyped-up discourse is around how AI’s personalised learning capability can address some of the fundamental learning challenges in our education systems.
There is a range of other myths. The one I also want to highlight is that the dominant discussion on AI measures success by how quickly a student can reach a correct answer. But in education, speed is often the enemy of depth. The myth is that frictionless learning is better.
In reality, students need to go through the cognitive struggle, the slow, difficult process of critical inquiry. We are now seeing that more and more AI tools are leading students towards cognitive offloading, getting the AI to do all the thinking and provide the answers for them, when in reality what is needed is a human teacher to support and assist the student’s learning as part of that cognitive struggle, which is where AI is also quite limited. And that is a major myth that we at UNESCO are consistent in busting.
So, the bottom line really is that we have to stop asking whether AI can replace human functions and start asking the deeper question: should it? Our goal isn’t to build smarter classrooms. It’s to design, develop and use technologies to build a more just, equitable and human-centred education system, which is also officially the role of UNESCO.
UN News: And what are some of the ways AI can improve learning outcomes and, as you said, the process itself? At the same time, what are the biggest risks?
Shafika Isaac: The question of the most promising ways that AI can improve learning outcomes is the question of the moment. But we have to be careful not to fall into what we at UNESCO often call the efficiency trap when posing it, because it raises the question of what we understand by the learning outcomes that need improving. Usually, when people talk about improved learning outcomes, they mean raising test scores through faster content mastery.
Some studies have actually shown improvements of up to 30 per cent when these very narrow metrics of learning outcomes and test scores are applied. But we at UNESCO believe that is only half the story. We believe instead that the opportunity AI presents lies not so much in improving narrowly defined learning outcomes as in redefining the very nature of the learning process itself.
We see three key areas of promise in this respect. The first is leveraging AI to move from acquiring knowledge to cultivating the agency of our students and our teachers. We are moving away from the industrial model of memorising facts and mastering content.
The real opportunity lies in using AI as what we in education call a Socratic agent, a critical agent, one that encourages students to struggle through problems and think critically rather than offering a shortcut to the answer or getting the AI to find it for them. But for this to happen, we need to cultivate the critical and ethical agency of students, and we explain in detail how to do so, with suggestions and ideas, in the AI competency framework for students that UNESCO launched in 2024 and has since operationalised in a number of countries across the world. So that’s the first opportunity we think AI presents.
The second is the linguistic and cognitive inclusion opportunity. We are exploring how locally designed large language models, or small language models trained on local data over which learners and teachers have control, can offer linguistic inclusion, especially in languages that are often marginalised. As UNESCO, we have often highlighted that there are something like 8,400 different languages across the world and that most of these algorithms are designed in dominant languages.
And so here is an opportunity to design language models in local and indigenous languages to ensure linguistic inclusion, and for AI to provide targeted scaffolding for neurodiverse learners, who are often left behind by these one-size-fits-all systems. That is a further opportunity. The third, alongside quite a number of others, is that we have seen some ideas and possibilities around supporting zero dropout.
We know that in today’s world something like 270-odd million children are out of school, and there are huge campaigns around preventing further dropout, or push-out, as some scholars call it. So, we see an opportunity for AI to act as a potential early warning system that can give teachers insight, provided it does not reinforce biases based on class, race, gender and geography. If it does not reinforce those biases, it can in fact help teachers identify patterns among children who are potentially at risk of dropping out of school.
And so it is about using this kind of data to keep the human in the loop and to ensure that we apply these algorithms and AI in ways that do not reinforce inequalities and biases. If we do so, they offer quite a number of opportunities under the right conditions.
The risks that concern us arise especially when we mistake efficiency and speed for excellence and good learning. If we aren’t careful, we risk three major educational setbacks. The first is the cognitive offloading crisis.
We see more and more examples of children and learners using AI to do all the hard work of thinking, the cognitive struggle so integral to learning that I mentioned before. And we risk entrenching that pattern, which is emerging at quite a rapid pace and is already quite widespread. We are effectively automating the fundamental human relationship with knowledge. So we risk a generation that is cognitively lazy and fluent in generating text, but incapable of deep, reflexive, critical thinking. And that’s a huge risk.
The second is deepening algorithmic bias and the erosion of the sovereignty that governments, member states, public sector organisations, education institutions, learners and teachers have over their own data, data over which we should be sustaining our sovereignty, our ownership and our control. Most AI models today are built on a very narrow slice of human culture, and if we depend on them for education, we risk new forms of systemic inequality.
Without locally owned, culturally relevant AI, we are essentially outsourcing the operating system of our children’s minds to the few technology vendors who own the AI systems. The risk of our children’s and students’ data being exposed, infringing on the privacy, safety and security of children, is therefore quite fundamental, as we have increasingly seen across the world. It is an issue that we as UNESCO need to be responsive to, so that we prevent that kind of risk from escalating in our education systems.
UN News: How can the teachers, the educators, be better prepared for this new environment?
Shafika Isaac: Supporting our teachers is quite a critical role that we have to play. In this rapidly emerging AI ecosystem in education, teaching is not about competing with a machine to deliver content. It is about the high-level human work of supporting, nurturing, mentoring, and providing ethical guidance and social scaffolding in the learning process.
But for that to happen, our teachers need to cultivate these competencies through continuous professional learning and development, and our professional development systems need to support them in building these competencies. We need our systems to move beyond a narrow focus on digital and AI literacy towards fostering the pedagogical agency of our teachers.
Teachers need to develop the ethical understanding and capability to make values-based decisions around the use of AI: the agency to know when not to use it, to recognise when an AI tool is potentially harmful, and to know when to use it in ways that can enhance their own teaching practice and critical agency, and the cognitive struggle and critical thinking of their learners. So, learning how to use AI effectively, in a more critical pedagogical sense, is an important competency.
UN News: Are there any specific skills that today’s or future students, children, or even older learners need in order to thrive in this changing environment?
Shafika Isaac: The issue of ethics and ethical agency is a crucial competency that we believe needs to be cultivated. We talk a lot about students needing the skills to understand how AI works and to navigate and apply AI tools, but they should also learn to be co-creators of AI tools: not just passive consumers, but part of designing and developing AI tools that can serve themselves, their peers, their communities and humanity, thereby reinforcing the cultivation of creativity, critical thinking and agency.
So, in short, we believe we need to develop young people as a generation that has not just the technical skills but the social, emotional and civic capabilities to build a different world, a better world, a world that is more humane. With the multiple crises and vulnerabilities that our learners and teachers are facing across the world, these values and competencies become absolutely critical and central to cultivate in our education systems today.
UN News: What do you think, in the best-case scenario, the learning world looks like in the future?
Shafika Isaac: We need to be moving towards a global architecture based on some non-negotiable pillars: public interest AI over commercial imperatives in education; a shift from harm mitigation to safety by design, with our post-2030 framework demanding safe and ethical AI by design; and a strengthened multilateral social contract. The AI divide is the new digital divide, and a renewed multilateralism means that global solidarity is critical to ensuring AI does not become a tool for technological fragmentation.

In our latest publication on AI and the future of education, published in 2025, we call for a global commons for AI in education: a shared space where infrastructure, open-source models and pedagogical research are pooled as a common resource for all nations and all communities, ensuring in particular that we promote inclusive, equitable, socially and environmentally just education systems. That is going to be a crucial battleground over the coming years, and we think the defence of education as a global public good, by UNESCO, the UN system and our partners, is going to be a crucial cause to promote and cultivate together with our partners in the development space.