Rethinking Schooling in the Age of AI: Equity, Ethics, and the Four Pillars of Transformation

Veronica Mobilio, Giulia Guglielmini

Frontiers of Digital Education, 2026, 3(1): 7. DOI: 10.1007/s44366-026-0081-3

REVIEW ARTICLE

Abstract

The integration of artificial intelligence (AI) into education marks a critical transition, not only through the adoption of new tools but also by challenging the epistemological foundations of teaching and learning. AI reshapes how knowledge is produced, mediated, and evaluated, raising pressing questions about equity, agency, and accountability in increasingly data-driven environments. Its emergence compels educators and policymakers to reconsider long-standing assumptions about what counts as learning, how it is measured, and who benefits from technological change. This paper examines how AI is transforming key structures and practices across the broader education landscape, with a particular focus on school education as a strategic and emblematic site for early intervention and pedagogical innovation. While many of the transformations discussed in the paper are relevant across educational levels, schools represent a crucial point where students first experience the cognitive, social, and ethical dimensions of AI, and where systems can act early to promote inclusion, reflection, and readiness. Drawing on major European frameworks, this paper analyzes how AI is reshaping four interdependent pillars of education: curricular content, teaching paradigms, assessment systems, and governance structures. International case-based insights illustrate diverse implementation strategies while also revealing persistent challenges, such as digital inequalities, gaps in teacher preparation, and limited availability of robust mechanisms for algorithmic accountability. Adopting a conceptual, policy-informed approach, this paper synthesizes scholarly literature, European regulatory frameworks, and implementation evidence to propose a systemic view of educational transformation. Rather than framing AI as a mere driver of automation, the paper argues for a transformative approach rooted in equity, human agency, and democratic values. In practical terms, it distills policy-relevant guidance on ethics-by-design, human-in-the-loop safeguards, and capacity building for teachers and school leaders to enable responsible, system-level implementation. The conclusions highlight that, when supported by coherent policy infrastructure and teacher empowerment, school education systems can align technological innovation with inclusive, ethical, and future-oriented learning, ensuring that AI contributes to social justice rather than reinforcing existing inequalities.

Keywords

AI in education / school transformation / educational equity / pedagogical innovation / educational governance

1 Introduction

The integration of artificial intelligence (AI) into education marks a paradigmatic shift in digital transformation. Unlike earlier waves of technological change, AI does not simply support existing pedagogical models; it reshapes how knowledge is produced, mediated, and evaluated (OECD, 2021) while raising important questions about learning, agency, and fairness in algorithmically governed environments (European Commission, 2022; Miao et al., 2021).

AI’s influence permeates multiple layers of education, from intelligent tutoring systems and adaptive content delivery to algorithmic assessment and predictive analytics (Holmes et al., 2019; Luckin, 2018). While these developments promise greater personalization and operational efficiency, they also raise concerns about the erosion of human oversight, the displacement of professional judgment, and the evolving role of teachers in AI-mediated classrooms (OECD, 2023; Williamson & Eynon, 2020). Importantly, these changes intersect with broader policy agendas and structural inequalities, challenging assumptions about what constitutes valuable knowledge and how learning is defined, measured, and rewarded within education systems (Selwyn, 2019; Williamson, 2017).

A growing body of research underscores the ambivalent effects of AI integration in education. On one hand, AI holds the promise of innovation; on the other hand, it may reinforce outdated logics of efficiency, control, and standardization, often at the expense of more relational and reflective forms of learning. Critics warn against deterministic narratives that portray AI as neutral or inevitable, thereby marginalizing educators and narrowing the scope of educational aims (Selwyn, 2021; Williamson & Eynon, 2020). Recent analyses of educational technology (EdTech) realism further emphasize the need to resist techno-solutionist framings and reclaim the pedagogical and ethical agency of teachers in AI-mediated classrooms (Biesta, 2021; Selwyn, 2019; Williamson, 2017).

Concerns have also been raised about the unintended consequences of algorithmic systems for student engagement and well-being. Excessive reliance on automation has been linked to reduced motivation, increased cognitive overload, and diminished teacher–student interaction (Knox, 2023; Kucirkova & Cremin, 2020). Automated feedback, in particular, may weaken pedagogical relationships when it replaces rather than complements human dialogue (Williamson, 2019). As Claxton (2015) noted, such dynamics risk undervaluing essential learner dispositions—such as curiosity, resilience, and critical thinking—that are difficult to quantify but vital for democratic participation and lifelong learning.

These insights call for a cautious and evidence-informed approach to AI adoption in schools, where the developmental, social, and ethical dimensions of learning remain central. As highlighted in UNESCO’s AI and Education: Guidance for Policymakers (Miao et al., 2021) and the European Commission’s Ethical Guidelines on the Use of AI and Data in Teaching and Learning for Educators (European Commission, 2022), AI systems may replicate bias, compromise privacy, and function through opaque mechanisms—risks that are particularly acute for children and adolescents.

Schools are strategic and emblematic sites of educational transformation. They are shaped by technological change and serve as the first institutional settings where students begin to navigate the realities of digitally mediated learning. By engaging early with the ethical, cognitive, and social dimensions of AI, students are better equipped to participate critically and confidently in a complex, data-driven world. As such, schools represent a crucial entry point for understanding how AI is reshaping education and how educational communities are beginning to respond.

Recent European policy frameworks reflect a growing awareness of both the risks and opportunities of AI in school education. The Digital Education Action Plan 2021–2027 (DEAP) (European Commission, 2020a) outlines a comprehensive strategy for building inclusive and high-quality digital education ecosystems, with particular attention given to AI literacy and teacher training. The STEM Education Strategic Plan (European Commission, 2025) stresses the importance of developing AI-related competencies early on, particularly among underrepresented groups. The European Union (2024) issued the first comprehensive legal framework on AI, which classifies many educational applications as “high risk”, requiring human oversight, transparency, and accountability.

This paper draws on a qualitative analysis of major international policy initiatives and their implementation practices to examine the transformative role of AI in school education. It investigates how AI is reshaping four core pillars of educational systems—curricular content, teaching paradigms, assessment models, and governance structures—each of which interacts dynamically with the others. Adopting a critical and equity-oriented lens, this paper integrates case-based insights with European policy frameworks to highlight the ethical and institutional dimensions of change. In doing so, it seeks to inform current debates on how to ensure the responsible and inclusive integration of AI into education.

2 Methodological Approach

Building on the policy and conceptual context outlined above, this article adopts a qualitative and interpretive design. It is a conceptual and analytical contribution rather than a systematic review or empirical case study. The paper offers a critical, policy-informed synthesis of how AI is reshaping schooling, organized around four interdependent pillars—curriculum, pedagogy, assessment, and governance—examined through the lenses of equity, agency, and accountability.

The knowledge base integrates three families of sources: (i) scholarly literature published between 2015 and 2025 on AI in education, digital pedagogy, assessment, algorithmic governance, and AI ethics/critical data studies, capturing a decade in which AI policy frameworks and educational applications have rapidly evolved; (ii) European and international policy frameworks and selected ministerial guidance; and (iii) implementation insights from European initiatives and school-level programs, which are used as brief illustrative cases to surface mechanisms, enabling conditions, and trade-offs.

The analytical strategy combines conceptual synthesis and comparative interpretation: first, by identifying recurring ideas and tensions across the three source families; second, by framing them within the four pillars to highlight ethical and systemic challenges; and third, by drawing out cross-pillar interdependencies (for example, how governance choices shape curriculum and assessment, and how assessment logics feed back into pedagogy). Sources were purposively selected for influence, recency, and relevance to European schooling, prioritizing peer-reviewed syntheses and official policy or standards documents. No formal quality appraisal was undertaken, which we acknowledge as a limitation consistent with the conceptual purpose of the article.

The paper does not seek empirical generalizability but analytical coherence and policy relevance. Its scope is primarily European, and transferability beyond the EU requires contextualization. Given the fast-moving regulatory and technological landscape, the analysis emphasizes principles and system-level implications rather than tool-specific recommendations.

Analytically, the four pillars are anchored in debates on platformization and datafication in education, teacher agency, and algorithmic accountability, which are mobilized as interpretive lenses rather than as the basis for formal theory building (Selwyn, 2012). The following sections apply this framework to examine how AI is transforming each pillar and the interdependencies among them.

3 Rethinking Educational Purpose and Core Competencies

The integration of AI into school education accelerates ongoing shifts in the purposes and content of learning. Education systems have long faced pressure to move beyond the industrial-era model—centered on standardization and largely passive knowledge transmission—to respond to the skills demands of societies that are increasingly shaped by automation and technological interdependence. AI amplifies this urgency by making visible the need for competencies that are ethical, interdisciplinary, and reflective in nature.

Beyond basic digital literacy or tool proficiency, students must develop the ability to understand how algorithmic systems work, how they influence social and cognitive processes, and how to engage with them critically and responsibly. In this sense, AI literacy should be conceived not only as technical know-how but as a sociotechnical competence that includes understanding how data are collected and used, how algorithms are trained, and how their outputs shape opportunities and influence decisions (European Commission, 2022; Miao et al., 2021).

International policy frameworks support this view. The OECD (2021) highlights “epistemic performance” and “metacognitive flexibility” as essential learner outcomes in AI-mediated contexts. UNESCO (Miao et al., 2021) calls for educators to foster learners’ civic and ethical capacities alongside digital skills, while the European Commission (2022) encourages schools to embed transparency and ethical reflection into curricula through a whole-school approach.

These shifts raise both philosophical and practical questions about what should be taught and how. Traditional curriculum design, anchored in rigid disciplinary boundaries, often struggles to accommodate the integrative demands of AI-related education. Understanding algorithmic bias, for example, requires technical knowledge as well as insights from ethics, history, and the social sciences. As such, preparing students for AI-mediated realities calls for pedagogical practices that cut across disciplines, engage with real-world problems, and empower learners as active co-constructors of knowledge (European Commission, 2022). The goal is not simply to reinforce STEM subjects but to reconfigure them by embedding critical inquiry and ethical reasoning into broader learning processes (OECD, 2021). Informatics education, in particular, deserves greater attention. Often conflated with computer science, informatics provides a conceptual foundation for understanding information systems, abstraction, modeling, and the logic underpinning algorithmic processes—core elements for meaningful AI literacy (Miao et al., 2021).

European and international initiatives have begun to frame AI education not just as skills acquisition but as systemic curricular transformation. The Informatics for All initiative (Caspersen et al., 2018) calls for the reorganization of informatics as a foundational discipline, while the OECD (2023) advocates for whole-system strategies that integrate ethical reflection, teacher preparation, and inclusive pedagogy.

Countries such as Finland and Estonia have already begun integrating informatics and AI awareness into general education, starting at the primary level (European Commission, 2022; OECD, 2021). These cases suggest that early and inclusive exposure to informatics can foster both technical fluency and critical engagement with the affordances and risks of AI. As such, curricular reforms should prioritize epistemological, reflective, and dialogic competencies that enable students to understand how AI systems are trained, how they mediate information flows, and how bias or opacity can emerge.

Importantly, these reforms must be inclusive and equity sensitive. Without intentional efforts, AI education risks reproducing existing gender, socioeconomic, and racial inequalities. The STEM Education Strategic Plan (European Commission, 2025) stresses the importance of widening participation—especially among girls and underrepresented groups—by embedding AI-related learning across the curriculum rather than limiting it to specialized or elective tracks. Inclusive access to AI education is a prerequisite for preparing all learners to navigate and shape digital futures. Rather than simply adding AI as a new curricular topic, the challenge is to rethink educational aims and content in light of AI’s pervasive influence, developing learners’ ethical discernment, civic awareness, and capacity to critically engage with the systems that increasingly govern their lives.

4 Adaptive Learning Systems and the Role of Teachers

One of the most visible applications of AI in school education is the development of adaptive learning systems. These systems promise to revolutionize the classroom by offering personalized learning experiences tailored to each student’s pace, prior knowledge, and preferred mode of engagement. By leveraging data analytics and machine learning, AI tools can provide real-time feedback, diagnose learning gaps, and recommend individualized learning paths (Holmes et al., 2019; Luckin & Holmes, 2016). In this way, they offer the possibility of moving beyond one-size-fits-all instruction and toward more inclusive and responsive pedagogical models.
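
To make these mechanics concrete, the sketch below implements Bayesian knowledge tracing, a classic student-modelling technique on which many adaptive systems build: a per-skill mastery estimate is updated after each answer, and the weakest skill is recommended next. It is a minimal illustration rather than a description of any specific platform discussed here; the parameter values and skill names are assumptions chosen for readability.

```python
# Minimal Bayesian knowledge tracing (BKT) sketch: estimate per-skill mastery
# from right/wrong answers and recommend the weakest skill next.
# Parameter values are illustrative assumptions, not calibrated estimates.

P_TRANSIT = 0.15  # chance the skill is learned after a practice opportunity
P_SLIP = 0.10     # chance of answering wrongly despite mastery
P_GUESS = 0.20    # chance of answering correctly without mastery

def bkt_update(p_mastery: float, correct: bool) -> float:
    """Posterior mastery after one observed answer, plus a learning step."""
    if correct:
        evidence = p_mastery * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_mastery) * P_GUESS)
    else:
        evidence = p_mastery * P_SLIP
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - P_GUESS))
    # Learning transition: the student may acquire the skill between items.
    return posterior + (1 - posterior) * P_TRANSIT

def recommend_next(mastery: dict) -> str:
    """Target the skill with the lowest estimated mastery."""
    return min(mastery, key=mastery.get)

# Toy session: two hypothetical skills, three observed answers.
mastery = {"fractions": 0.3, "ratios": 0.3}
for skill, correct in [("fractions", True), ("fractions", True), ("ratios", False)]:
    mastery[skill] = bkt_update(mastery[skill], correct)

print(mastery)                  # fractions rises toward ~0.93; ratios stays low
print(recommend_next(mastery))  # -> "ratios"
```

Even this toy version makes visible why teacher mediation matters: the system's "diagnosis" is only a probability estimate whose parameters encode assumptions about slipping and guessing, and those assumptions are exactly what a professional must be able to question.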

Several countries have begun integrating such technologies into formal education systems. In Estonia, for instance, platforms like Opiq are used to deliver digital textbooks that adjust to learners’ progress and offer teachers actionable insights. In Finland, Kubo Robotics has been introduced to support early computational thinking through playful, adaptive interaction. In the UK, Century Tech provides an AI-powered platform that monitors students’ performance across subjects, suggesting targeted exercises and materials while enabling teachers to track progress through detailed dashboards. These cases demonstrate the growing relevance of AI tools in everyday schooling and their capacity to support more differentiated instruction (European Commission, 2022; Holmes et al., 2019; Luckin, 2018).

While adaptive systems expand the possibilities for individualized learning, they also raise critical questions about the evolving role of teachers. Too often, personalization is framed narrowly in terms of efficiency and automation, sidelining the relational and interpretive dimensions of teaching (Selwyn, 2019; Williamson, 2017). Personalization in education is not solely about matching students to the right content; it is about creating meaningful learning experiences, scaffolding understanding, and fostering motivation (Holmes et al., 2019; Luckin, 2018). Teachers remain central to these processes, not only as content facilitators but as ethical and pedagogical mediators who contextualize technology use and make professional judgments that no algorithm can replicate (Miao et al., 2021; OECD, 2023). Recent OECD policy work highlights that AI adoption often redistributes teachers’ workload rather than reducing it, introducing new professional tasks such as interpreting dashboards, validating algorithmic recommendations, and curating data-informed feedback—demands that require targeted training and supportive governance structures (Borgonovi et al., 2025; Roy et al., 2024).

Far from rendering the teacher obsolete, AI systems—when appropriately designed and implemented—can enhance teachers’ capacity to respond to student needs (Holmes et al., 2019), freeing up their time for formative assessment and differentiated support. As Selwyn (2021) argued, the value of AI in classrooms lies less in automation and more in how it supports teachers’ professional judgment and relational work.

UNESCO (Miao et al., 2021) emphasizes this dual function, calling for teacher agency to be preserved and strengthened, not diminished, in AI-mediated classrooms. Similarly, the European Commission (2022) warns against systems that “black box” learning processes and displace educator responsibility.

Experimentation with adaptive learning platforms has demonstrated that AI-based tools can support both personalized learning and inclusive pedagogical approaches, provided that teacher mediation remains central. One example is the Rhapsode pilot, implemented in Italian lower secondary schools in 2024 by Fondazione per la Scuola, a research and training organization working on quality and inclusive education at the primary and secondary levels, in collaboration with Area9 Lyceum, an international provider of adaptive learning solutions grounded in cognitive modelling. The platform delivers personalized learning trajectories by adjusting content in real time based on student performance and self-assessed confidence levels. It integrates metacognitive feedback, competence mapping, and dashboards for teacher monitoring. The teachers involved in the project reported that the platform effectively supported differentiated instruction and classroom management, helping them respond to students’ diverse learning needs. While the pilot reported notable improvements in mathematics performance, its small scale, short duration, and non-public dataset limit its generalizability. It is therefore presented here as an illustrative example rather than evaluative evidence, highlighting how the pedagogical potential of adaptive systems emerges only when teachers play an active mediating role—interpreting data, contextualizing content, and sustaining student motivation.

This experience aligns with other international initiatives, such as AQA’s Stride program in the UK (Bimpeh, 2024), which has demonstrated high validity and reliability in assessing students’ competence through adaptive diagnostics. When designed and implemented with pedagogical intentionality, such systems can meaningfully support formative assessment, strengthen metacognitive skills, and promote greater equity in learning. Similarly, India’s DIKSHA platform (Digital Infrastructure for Knowledge Sharing) illustrates how large-scale teacher professional development, content localization across multiple languages, and low-bandwidth design support digital education on a national scale and enable adaptive, data-rich ecosystems in resource-constrained settings (Kar, 2023).

As adaptive systems become more embedded in educational practice, a central challenge lies in safeguarding teachers’ professional autonomy in the face of increasingly data-driven and automated decision-making processes. Adaptive systems typically rely on preset learning goals, data-driven recommendations, and automated feedback loops, which may limit opportunities for pedagogical innovation or critical reflection. Moreover, when students’ progress is mediated primarily through interaction with algorithms, the motivational and emotional dimensions of learning risk being overlooked. Research increasingly shows that sustained engagement in learning depends not only on personalization but also on the quality of the student–teacher relationship, emotional safety, and a sense of belonging in the classroom (Claxton, 2015; Kucirkova & Cremin, 2020).

This is particularly true in diverse or complex classrooms, where students may bring varied linguistic backgrounds, cognitive profiles, and socioemotional and learning needs. In such contexts, the nuanced judgment of teachers is essential for interpreting data outputs, addressing equity issues, and ensuring that adaptive technologies do not reinforce existing biases or perpetuate narrow definitions of “success”. As AI systems become more prevalent, it is critical that teachers are positioned as co-designers of pedagogical strategies that integrate digital tools meaningfully and inclusively.

Furthermore, the introduction of AI in education calls for substantial investments in teacher training and professional development. Teachers must be supported not only in learning how to use AI tools but also in understanding their underlying logic, limitations, and ethical implications. As the DEAP (European Commission, 2020a) notes, building capacity among educators is key to ensuring effective and responsible AI adoption. This includes fostering digital pedagogical skills, data literacy, and a critical mindset toward algorithmic systems.

Ultimately, the promise of adaptive learning systems depends on how they are integrated into broader pedagogical ecosystems. When AI is treated as a complement to professional expertise, it can support more equitable and responsive educational environments. This, however, demands clear ethical standards, strong institutional backing, and—above all—a renewed recognition of teachers as relational professionals who guide students through AI-mediated learning.

5 Rethinking Curricular Content and Pedagogical Models

While previous sections have examined the core competencies required in AI-mediated societies and the evolving role of teachers, this section considers the deeper structural and cultural shifts needed in curriculum design and pedagogy. In particular, it explores how AI challenges long-standing assumptions about how knowledge is organized, delivered, and validated in school systems and how educational institutions might respond.

The integration of AI into education brings with it a logic of datafication, modularity, and non-linear progression that contrasts sharply with the age-graded and subject-segmented structures that still dominate most curricula (OECD, 2021). Rather than simply adding new content onto existing frameworks, schools may need to rethink their architectures more holistically, by adopting approaches that are more flexible and student-centered (Fullan et al., 2020; Miao et al., 2021).

This reconfiguration implies at least three interdependent shifts. First, curricular content must become more adaptable, integrative, and oriented toward real-world problem-solving. Rather than treating emerging domains as add-ons, curricula should embed them as cross-cutting lenses through which existing disciplines are reinterpreted. This aligns with international frameworks that advocate for curricular hybridity, where disciplinary knowledge is connected to real-world sociotechnical challenges (Caspersen et al., 2018; European Commission, 2022).

Second, pedagogical models must shift from passive knowledge transmission to dynamic knowledge construction. AI can support this transition by enabling personalized pathways and scaffolding, but only if it is used within participatory and inquiry-based pedagogies that prioritize student agency, collaboration, and metacognition (Holmes et al., 2019; Knox, 2023; Luckin & Holmes, 2016). Approaches such as project-based learning and design thinking offer promising formats, particularly when supported by digital ecosystems that enable students to explore content alongside their peers and reflect on their own learning.

Third, any reform of curriculum and pedagogy must also contend with institutional constraints and cultural assumptions. Despite growing enthusiasm for innovation, schools remain embedded in systems that privilege standardization, efficiency, and measurable outcomes. These pressures can limit the uptake of pedagogical experimentation and reinforce normative models of success (Williamson & Eynon, 2020). Moreover, technologies themselves are not neutral; if designed without attention to context and equity, they risk reproducing deficit-based assumptions, particularly for students from marginalized backgrounds (Eubanks, 2018; Kafai et al., 2018). Culturally responsive pedagogy and participatory design are therefore essential to ensure that AI-enabled learning environments affirm diverse identities and epistemologies.

In summary, rethinking curricula and pedagogy in the age of AI is not simply a matter of technical integration or content updating. It is a deeply cultural and institutional undertaking that requires education systems to become more dialogic, inclusive, and future-oriented. This transformation must be guided by pedagogical values, not technological affordances, and must place equity, participation, and epistemic justice as core design principles. As Biesta (2021) noted, the purpose of education lies in cultivating human agency and responsibility, not merely in producing efficient learners. In this sense, algorithmic accountability should be treated as a pedagogical value, ensuring that AI systems remain open to ethical scrutiny and serve the broader aims of education. These structural and cultural shifts inevitably intersect with how learning is assessed and governed, shaping the broader ecosystem within which curricula and pedagogy evolve.

6 Assessment Systems and Governance Structures

The rise of AI in education is transforming not only how students learn and teachers teach, but also how learning is assessed and how educational systems are governed. Among the most impacted areas are assessment systems and governance structures—two domains increasingly interlinked through data-driven and algorithmically mediated processes. While these developments offer opportunities for innovation, they also raise fundamental questions about transparency, fairness, and democratic accountability.

In the field of assessment, AI enables new forms of data collection and analysis. These developments build on earlier traditions in educational data mining and learning analytics, which laid the groundwork for using large-scale learner data to model performance and support data-informed interventions (Baker & Inventado, 2014). Adaptive platforms can monitor students’ performance in real time, generate predictive analytics to identify learning gaps, forecast outcomes, and propose tailored remedial strategies. Automated feedback systems can deliver immediate formative input, supporting more responsive and individualized learning trajectories. These capabilities have the potential to strengthen the formative function of assessment and scale up personalized education (European Commission, 2022; Miao et al., 2021).

However, this increasing reliance on AI in assessment introduces critical risks. Many algorithmic systems operate as “black boxes”, with limited transparency about how data are interpreted, how judgments are made, and on what grounds decisions are based. Such opacity can compromise the legitimacy and contestability of assessment outcomes, reinforcing what Andrejevic (2019) called the “automation of surveillance”, where algorithmic monitoring substitutes for pedagogical judgment. Moreover, when these tools are trained on historical data that reflect systemic inequalities, they risk reproducing, or even amplifying, existing biases (Claxton, 2015; Renta-Davids et al., 2025). Additionally, an overemphasis on quantifiable indicators can narrow the scope of what is valued in education by privileging measurable outputs over complex, dispositional, and relational competencies that are less amenable to algorithmic capture. Recent policy analysis suggests that some education systems are responding to these challenges by revaluing oral and dialogic forms of assessment, which are less susceptible to misuse of generative AI tools and better support higher-order skills, including critical reasoning, argumentation, and persuasion (Borgonovi et al., 2025).
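
One practical response to the bias risk described above is a routine disparity audit of assessment or risk-flagging tools. The sketch below is a hypothetical example in plain Python: it compares a model’s false-positive and false-negative rates across student groups, the kind of check an education authority might run before and during deployment. The data, group labels, and workflow are invented for illustration.

```python
# Minimal fairness audit sketch: compare a risk model's error rates across
# student groups. All records here are hypothetical.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_at_risk, actually_at_risk)."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            c["fn"] += 0 if predicted else 1   # missed a student who needed support
        else:
            c["neg"] += 1
            c["fp"] += 1 if predicted else 0   # flagged a student who did not
    return {
        g: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }

# Hypothetical audit log: (group, model flagged at risk, confirmed outcome).
log = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]
for group, rates in error_rates_by_group(log).items():
    print(group, rates)  # a large gap between groups signals disparate impact
```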

These concerns are particularly pressing in high-stakes educational contexts, such as grading, student tracking, or access to selective pathways. Acknowledging these risks, the European Union (2024) classifies many educational applications of AI, particularly those related to assessment and student profiling, as “high risk”. It mandates compliance with strict requirements, including human oversight, transparency, documentation, and auditing. It also prohibits certain applications, such as emotion recognition, in educational settings, underscoring the importance of safeguarding human dignity and fundamental rights. These legal provisions mark a turning point, embedding ethical principles into the regulation of AI in education.
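
In design terms, a human-in-the-loop safeguard of the kind the regulation envisages can be approximated as a confidence-based escalation rule with an audit trail: confident automated results are released, uncertain ones are routed to a teacher, and every decision is logged so it can later be explained and contested. The following sketch is illustrative only; the threshold, field names, and workflow are assumptions, not requirements drawn from the regulation.

```python
# Minimal human-in-the-loop sketch for automated scoring: low-confidence
# results go to a teacher instead of being released automatically, and
# every decision is logged for later audit. Threshold and fields are
# illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

CONFIDENCE_THRESHOLD = 0.85  # below this, a human must decide

@dataclass
class ScoringDecision:
    student_id: str
    auto_score: float
    confidence: float
    released: bool
    reviewer: Optional[str] = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log = []  # documentation trail supporting transparency and contestability

def route_score(student_id: str, auto_score: float, confidence: float) -> ScoringDecision:
    """Release confident scores; escalate uncertain ones to teacher review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        decision = ScoringDecision(student_id, auto_score, confidence, released=True)
    else:
        # Placeholder for the review workflow: a teacher would inspect the
        # work, possibly override the score, and sign off.
        decision = ScoringDecision(student_id, auto_score, confidence,
                                   released=False, reviewer="pending_teacher_review")
    audit_log.append(decision)
    return decision

print(route_score("s001", 0.78, 0.93))  # released automatically
print(route_score("s002", 0.55, 0.60))  # escalated to a teacher
```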

Alongside assessment, governance is undergoing a parallel transformation. Whereas traditional models of educational governance are typically centralized and hierarchical, AI-enabled systems support forms of decision-making that are more dynamic and data-driven.

Algorithms and digital platforms are increasingly shaping decisions across multiple levels: school leadership, instructional planning, regional strategy, and national policy (European Commission, 2025; OECD, 2021). This ongoing datafication and platformization of governance mechanisms raises new questions about the public accountability and transparency of educational decision-making (Williamson, 2019).

While these systems may enhance responsiveness and support evidence-informed policy, they also introduce significant challenges. First, they shift authority from teachers and public institutions to private technology providers, whose algorithms mediate what is taught, how learning is measured, and when interventions are triggered. This raises pressing concerns about educational sovereignty: Who designs and governs these systems, and in whose interests? Without clear public oversight and participatory governance, there is a risk that educational priorities will become subordinated to commercial interests, undermining the public mission of schooling. As Couldry and Mejias (2019) argued, data extraction and platform dependence can transform education into a site of “data colonialism”, where informational asymmetries reproduce systemic inequalities.

For instance, in some school systems, predictive analytics platforms—originally introduced to support early warning interventions—have been used to generate automated risk profiles for students, influencing their access to advanced coursework or support services, and often without transparent criteria or adequate human oversight (Borgonovi et al., 2025; Bowers, 2021; Williamson & Eynon, 2020; Zeide, 2017). Similarly, EdTech procurement processes have at times prioritized proprietary vendor solutions without involving educators or local authorities, resulting in tools that poorly align with pedagogical needs or contextual realities (OECD, 2021; Williamson & Hogan, 2020). Democratic governance of AI in education requires not only regulatory safeguards but also institutional capacities to assess, negotiate, and, when necessary, co-develop alternatives to market-driven solutions (Boeskens & Meyer, 2025; Borgonovi et al., 2025).

To mitigate these risks, multilevel and participatory governance models are essential. Such models should embed algorithmic accountability, ensure stakeholder involvement, and build public trust. Key actions include developing shared evaluation frameworks, ensuring algorithmic transparency, incorporating regular impact assessments for high-risk AI systems, and enabling teachers, students, and families to understand and contest automated decisions. Building such governance structures also requires dedicated public infrastructure. National observatories or ethics councils for AI in education can support the independent evaluation of tools and practices, facilitate interdisciplinary dialogue, and strengthen institutional legitimacy. Examples from Finland and the Netherlands, where independent agencies vet digital tools for ethical use, illustrate how such mechanisms can reinforce public trust and accountability.

Importantly, governance strategies must be sensitive to local diversity and promote capacity building at all levels. School leaders play a pivotal role in this process. Positioned at the intersection of policy, pedagogy, and organizational management, they are uniquely equipped to mediate between national strategies and local implementation. Empowering school leaders with the knowledge and authority to critically assess AI tools, negotiate their adoption, and foster collaborative governance practices is essential to ensure that AI integration aligns with educational values and institutional missions (Harris & Jones, 2020; OECD, 2023).

Schools need support in adopting AI tools and shaping their design and implementation. This calls for co-design approaches in which teachers, technologists, researchers, and policymakers collaboratively define the goals and evaluation criteria of AI systems. Without such inclusive processes, AI may reinforce inequalities and weaken both professional agency and institutional trust (Luckin, 2018; Popenici & Kerr, 2017). Ultimately, integrating AI into assessment and governance is not just a technical or procedural shift; it is a normative and institutional transformation. As decisions become increasingly data-driven, we must ask: Who decides what counts as learning? How is it measured? And in whose interests? AI can support educational equity and effectiveness, but only if deployed within governance systems that are transparent, participatory, and aligned with democratic values (Eubanks, 2018).

Taken together, the four pillars reveal strong interdependencies. Governance choices around procurement and auditing shape what counts as valid evidence in assessment; assessment logics, in turn, influence classroom practice and the feasible scope of curriculum design. Tensions often emerge between transparency and performance, personalization and curriculum coherence, public accountability and vendor lock-in. These are not merely operational trade-offs but structural interdependencies that reflect deeper epistemic and institutional dynamics within education systems (Williamson, 2019). Recognizing and managing them coherently is a prerequisite for equitable and sustainable AI integration, setting the stage for the systemic challenges discussed in the following section.

7 Addressing Systemic Challenges

While AI holds considerable promise for enhancing personalization, equity, and efficiency in education, its responsible and effective integration is hindered by deep-seated systemic barriers. These challenges are not merely technological; they are structural, institutional, and sociopolitical. If left unaddressed, they risk amplifying existing educational inequalities rather than reducing them (Selwyn, 2019).

A primary challenge is digital inequality (OECD, 2020). Despite recent progress, many education systems still suffer from inadequate broadband connectivity, outdated devices, and limited access to digital tools, especially in rural areas and low-income communities (Borgonovi et al., 2025; Miao et al., 2021). For example, while Southern European countries such as Spain and Greece have expanded digital infrastructure, substantial regional disparities persist in school connectivity and access to updated resources. As the DEAP (European Commission, 2020a) highlights, the digital divide risks becoming an AI divide. When students lack consistent access to digital environments, AI-driven technologies cannot deliver on their promises of personalization or inclusion.

Closely related is the gap in teacher preparation and professional development. The use of AI in classrooms demands not just technical familiarity but also a pedagogical and ethical understanding of algorithmic systems. However, many teachers report insufficient training in these areas. According to the OECD (2021), fewer than 40% of teachers in member countries have received training on the use of digital tools for personalized learning. Similarly, evidence from the European Commission Open Public Consultation conducted in preparation for the DEAP suggests that only a small share of teachers responding to the consultation felt well prepared to integrate advanced technologies, such as AI, into their pedagogical practice. The consultation also highlighted that national strategies in countries such as Greece and Italy have largely focused on general digital literacy, with limited attention to AI-specific issues such as algorithmic bias and ethical risks. In contrast, countries like Estonia have launched ambitious capacity-building programs that equip educators to critically understand, use, and co-design AI tools (European Commission, 2020a & 2020b). Without this depth of preparation, teachers may be ill-equipped to interpret algorithmic outputs or challenge problematic recommendations, potentially reinforcing automation biases (Borgonovi et al., 2025).

Another systemic barrier lies in the absence of robust legal, ethical, and technical frameworks to guarantee algorithmic transparency and contestability. Most AI tools used in education do not offer users—students, teachers, and families—accessible explanations of how decisions are made, on what basis, or with what level of uncertainty. This threatens the principles of due process and institutional trust. The European Union (2024) has taken a major step in this regard by mandating transparency and human oversight for high-risk systems, including those used in educational assessment and decision-making. However, implementation remains uneven, as many education authorities lack the institutional capacity and technical expertise to verify compliance, monitor risks, and ensure accountability.

These challenges interact in compounding ways. Students in under-resourced schools are more likely to lack access to AI tools, be supported by teachers with limited training, and be subject to automated decisions without adequate safeguards (OECD, 2021). The cumulative effect risks deepening structural inequalities, undermining efforts toward inclusive education and violating the principles of fairness and nondiscrimination enshrined in international human rights frameworks (Miao et al., 2021).

Addressing these challenges requires coordinated action at multiple levels. Governments must invest in equitable infrastructure, make teacher training on AI mandatory and meaningful, and ensure enforceable rights to explanation and contestation. At the institutional level, schools should build the internal capacity to critically assess the pedagogical value and ethical soundness of AI tools. Public procurement processes, for example, can embed requirements for algorithmic explainability, bias mitigation, and accessibility. As previously described, countries such as Finland and the Netherlands offer promising examples, embedding ethical criteria in procurement guidelines or relying on independent observatories and ethics committees to guide implementation.

Beyond individual barriers, these challenges are inherently systemic. The four pillars discussed throughout this paper—curriculum, pedagogy, assessment, and governance—are deeply interdependent and must evolve coherently if AI is to serve educational equity and innovation. Decisions made in one domain inevitably affect the others; governance frameworks influence what can be taught and how data are managed, assessment logics shape classroom practices and curricular priorities, and pedagogical reforms depend on institutional flexibility and teacher agency. Misalignment among pillars—for instance, the introduction of advanced assessment tools without corresponding curricular adaptation or professional development—can generate resistance, superficial adoption, or unintended inequities. Recognizing these interconnections is therefore essential for designing sustainable AI-in-education strategies that are pedagogically meaningful, ethically sound, and institutionally coherent. System-level coordination—rather than isolated innovation—remains the key condition for aligning technological change with democratic and inclusive educational values.

Crucially, systems must adopt a proactive—not reactive—stance. The window for shaping AI integration in education is narrow. Without strategic action, tomorrow’s classrooms risk exacerbating the very inequities that AI promises to overcome.

While this paper focuses primarily on Europe, it is important to recognize that systemic challenges to AI integration in education manifest differently across global contexts. In many parts of the Global South, infrastructure fragility, linguistic diversity, and concerns around data colonialism create complex dynamics that shape the design and use of AI tools. Similar challenges take distinct forms elsewhere but often intersect with issues of equity and access. Experiences from initiatives such as India’s DIKSHA platform and Africa’s AI4D program offer valuable perspectives on context-sensitive and community-driven AI adoption, illustrating alternative approaches to digital innovation rooted in local needs and capacities. These global perspectives underscore that addressing systemic challenges is not only a matter of technical capacity but of shared ethical and political commitment—a theme that ultimately points to a broader rethinking of what education is for.

8 Conclusions

AI represents a defining force in the future of education—not because it automates but because it invites us to rethink what education is for, how it operates, and whom it serves. Far from being a neutral technical upgrade, AI challenges educators and policymakers to confront enduring questions about equity, agency, knowledge, and care in learning environments increasingly mediated by algorithms. Its promise lies not in optimization or efficiency but in its potential to reorient education around democratic, human-centered, and ethically grounded values. Throughout this paper, we have examined how AI is reshaping four interdependent pillars of education: the redefinition of curricular content, the transformation of teaching paradigms, the emergence of new assessment systems, and the evolution of governance structures. These dimensions must evolve in concert, as coherence among them is essential to sustain meaningful innovation. When treated in isolation, technological reforms risk producing fragmentation, superficial adoption, or new forms of inequality.

Schools are at the heart of this transformation. They are not passive recipients of digital innovation but active sites where technologies are interpreted, negotiated, and embedded into everyday practices (Selwyn, 2012 & 2019; Williamson & Eynon, 2020). It is within schools that students experience the opportunities and tensions of AI, and where their understanding of fairness, responsibility, and participation begins to take shape (Miao et al., 2021). Equipping and trusting schools—and the professionals within them—to lead this process is therefore necessary (OECD, 2021).

Seizing the transformative potential of AI requires strategic investment, teacher empowerment, and policy coherence. Addressing the structural barriers outlined in the previous section is not optional; it is foundational to any meaningful AI integration strategy. This entails the adoption of clear and enforceable policy frameworks, including embedded ethics-by-design principles in tool development, systematic human-in-the-loop safeguards to preserve professional and pedagogical judgment, and ex-ante impact assessments for high-risk AI systems in education.

Ultimately, shaping the future of AI in education is a societal project. It calls for an education agenda that is not only digitally competent but also socially just and ethically committed. What is at stake is the future of education itself—and whether that future will uphold the principles of equity, dignity, and democratic engagement that education is meant to sustain.

References

[1] Andrejevic, M. (2019). Automated media. New York: Routledge.
[2] Baker, R. S., Inventado, P. S. (2014). Educational data mining and learning analytics. In: Larusson, J. A., White, B., eds. Learning analytics. New York: Springer, 61–75.
[3] Biesta, G. (2021). World-centred education: A view for the present. New York: Routledge.
[4] Bimpeh, Y. (2024). Investigating the validity and reliability of Stride’s diagnostic tests. London: AQA.
[5] Boeskens, L., Meyer, K. (2025). Policies for the digital transformation of school education: Evidence from the policy survey on school education in the digital age. OECD Education Working Papers, No. 328.
[6] Borgonovi, F., Bastagli, F., Ochojska, M., Piumatti, G. (2025). AI adoption in the education system: International insights and policy considerations for Italy. OECD Artificial Intelligence Papers, No. 52.
[7] Bowers, A. (2021). Pushing the frontiers with artificial intelligence, blockchain and robots. OECD digital education outlook 2021. Paris: OECD Publishing.
[8] Caspersen, M. E., Gal-Ezer, J., McGettrick, A., Nardelli, E. (2018). Informatics for all: The strategy. New York: ACM.
[9] Claxton, G. (2015). Educating Ruby: What our children really need to learn. Carmarthen: Crown House Publishing.
[10] Couldry, N., Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford: Stanford University Press.
[11] Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.
[12] European Commission. (2020a, September 30). Digital education action plan 2021–2027: Resetting education and training for the digital age. Available from European Union website.
[13] European Commission. (2020b, September 30). Commission staff working document: Accompanying the document digital education action plan 2021–2027. Available from European Union website.
[14] European Commission. (2022, October 25). Ethical guidelines on the use of AI and data in teaching and learning for educators. Available from European Union website.
[15] European Commission. (2025, March 5). A STEM education strategic plan: Skills for competitiveness and innovation. Available from European Union website.
[16] European Union. (2024, July 12). Regulation (EU) 2024/1689 of the European Parliament and of the Council. Available from European Union website.
[17] Fullan, M., Quinn, J., Drummy, M., Gardner, M. (2020). Education reimagined: The future of learning. A collaborative position paper between New Pedagogies for Deep Learning and Microsoft Education.
[18] Harris, A., Jones, M. (2020). COVID 19—School leadership in disruptive times. School Leadership & Management, 40(4): 243–247.
[19] Holmes, W., Bialik, M., Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Boston: Center for Curriculum Redesign.
[20] Kar, S. (2023). Digital infrastructure for knowledge sharing—DIKSHA: A review. Journal of Data Science, Informetrics, and Citation Studies, 2(2): 143–145.
[21] Kafai, Y. B., Fields, D. A., Searle, K. A. (2018). Understanding media literacy and DIY creativity in youth digital productions. In: Mihailidis, P., Hobbs, R., eds. The international encyclopedia of media literacy. New York: Wiley.
[22] Knox, J. (2023). AI and education in China: Imagining the future, excavating the past. London: Routledge.
[23] Kucirkova, N., Cremin, T. (2020). Children reading for pleasure in the digital age: Mapping reader engagement. London: SAGE Publications.
[24] Luckin, R. (2018). Machine learning and human intelligence: The future of education for the 21st century. London: UCL IOE Press.
[25] Luckin, R., Holmes, W. (2016). Intelligence unleashed: An argument for AI in education. London: Pearson Education.
[26] Miao, F., Holmes, W., Huang, R., Zhang, H. (2021). AI and education: Guidance for policymakers. Paris: UNESCO.
[27] OECD. (2020). Education at a glance 2020: OECD indicators. Paris: OECD Publishing.
[28] OECD. (2021). AI and the future of skills, Volume 1: Capabilities and assessments. Paris: OECD Publishing.
[29] OECD. (2023). AI and the future of skills, Volume 2: Methods for evaluating AI capabilities. Paris: OECD Publishing.
[30] Popenici, S. A. D., Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1): 1–13.
[31] Renta-Davids, A., Camarero-Figuerola, M., Camacho, M. (2025). Navigating the challenges and opportunities of artificial intelligence in educational leadership: A scoping review. Review of Education, 13(2): e70101.
[32] Roy, P., Poet, H., Staunton, R., Aston, K., Thomas, D. (2024). ChatGPT in lesson preparation: A teacher choices trial. Berkshire: National Foundation for Educational Research.
[33] Selwyn, N. (2012). Making sense of young people, education and digital technology: The role of sociological theory. Oxford Review of Education, 38(1): 81–96.
[34] Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Cambridge: Polity Press.
[35] Selwyn, N. (2021). Education and technology: Key issues and debates. 3rd ed. London: Bloomsbury Academic.
[36] Williamson, B. (2017). Big Data in education: The digital future of learning, policy and practice. London: SAGE Publications.
[37] Williamson, B. (2019). Datafication of education: A critical approach to emerging analytics technologies and practices. In: Beetham, H., Sharpe, R., eds. Rethinking pedagogy for a digital age. London: Routledge.
[38] Williamson, B., Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3): 223–235.
[39] Williamson, B., Hogan, A. (2020). Commercialisation and privatisation in/of education in the context of COVID-19. Paris: UNESCO.
[40] Zeide, E. (2017). The structural consequences of Big Data-driven education. Big Data, 5(2): 164–172.

RIGHTS & PERMISSIONS

Higher Education Press
