1 Introduction
Education systems across the world face sustained reform pressures driven by digitalisation, demographic change, labour-market volatility, and growing expectations of equity and accountability (Liu, 2021; OECD, 2015). Over the past two decades, governance reforms have sought to balance central steering with local autonomy, combining national standards with school-level flexibility in curriculum, pedagogy, and organisation (OECD, 2015). These arrangements, often described as hybrid governance models, were developed in a context where digital technologies primarily supported existing educational processes rather than reshaping their underlying logic.
However, the rapid diffusion of AI in education marks a qualitative shift in this landscape. Unlike earlier digital tools, AI systems intervene in core pedagogical domains (including curriculum design, assessment, feedback, student profiling, and instructional decision-making) and organisational functions (such as administrative decision support, performance monitoring, resource allocation, and early-warning analytics). In doing so, AI challenges the established assumptions about where authority resides, how accountability is exercised, and who governs educational practice.
Unlike previous waves of educational technology, AI does not merely enhance administrative efficiency or digitise existing instructional practices. AI systems are increasingly being designed and piloted to perform functions that were traditionally reserved for human judgment and school authority, such as generating curricular content, recommending pedagogical pathways, supporting assessment, and influencing learning trajectories in real time, as documented in selected policy initiatives, pilot programmes, and educational platforms. As a result, some decision-making processes embedded in algorithms are designed and governed outside formal education systems, yet materialise directly within classrooms and learning environments. This blurring of boundaries between policy design, technological development, and pedagogical enactment exposes a structural mismatch between established governance frameworks and the realities of AI-mediated education.
In this paper, a hybrid governance model is understood as an institutional arrangement in which core principles, safeguards, and system-level conditions, such as curriculum aims, data protection, accountability mechanisms, and ethical standards, are defined centrally, while pedagogical implementation, tool selection, and contextual adaptation are exercised at decentralised levels, particularly by school leaders and teachers. Importantly, this definition treats governance as the structuring of decision rights, oversight mechanisms, and accountability relationships, rather than as the direct exercise of agency by any single actor. While such models have historically balanced central steering with local autonomy, the growing influence of AI introduces a systemic and transversal governance actor that cuts across this division, challenging existing assumptions about authority, responsibility, and accountability in education systems.
A growing body of scholarship has examined AI in education from pedagogical, ethical, and technological perspectives, highlighting both its potential to enhance learning and the risks it poses in terms of bias, privacy, and inequality (Aberšek et al., 2023; Holmes et al., 2019; Luckin et al., 2016). Critical contributions have drawn attention to digitalisation, platformisation, and datafication (Filgueiras, 2024), warning that AI-enabled personalisation may lead to the homogenisation of learning processes, the narrowing of epistemic horizons, and the erosion of students’ intellectual autonomy through continuous monitoring and algorithmic profiling. While these analyses provide essential normative and socio-technical insights, they often remain weakly connected to questions of education governance, particularly regarding how authority, responsibility, and accountability should be redistributed when AI systems operate as semi-autonomous actors within education systems. As a result, existing governance models struggle to account for how AI centralises control through AI infrastructures and digital platforms while decentralising pedagogical action at the level of schools and classrooms. This tension constitutes the core governance problem addressed in this paper. This study addresses the gap by advancing a governance-focused analytical framework that conceptualises AI as a systemic and transversal actor: not an autonomous decision-maker, but a socio-technical system whose embedded rules, optimisation logics, and data dependencies shape decisions across multiple governance levels. On this basis, the study examines how hybrid governance arrangements must be reconfigured to safeguard professional agency, student autonomy, and public educational values in AI-mediated environments.
Within this framework, two analytically distinct but interrelated elements of AI integration are central: AI content and AI tutors. These categories are used analytically to distinguish governance effects associated with different AI functions, rather than to suggest discrete or autonomous systems in practice. First, AI content refers to educational materials—texts, tasks, feedback, explanations, and assessments—generated or dynamically adapted by AI systems, building on earlier work on cognitive modelling and AI-supported knowledge representation in education (Aberšek, 2024; Drožđek & Pesek, 2024). Unlike traditionally curated curricular resources, AI content is continuously produced, personalised, and updated through algorithmic processes, often without direct school validation. This raises governance concerns related to curricular coherence, epistemic authority, quality assurance, and the risk of homogenised knowledge production shaped by dominant data patterns. Second, AI tutors denote AI systems that interact directly with students to guide, scaffold, and adapt learning processes, aligning with research on hybrid intelligence and human–AI collaboration in learning and teaching (Cukurova, 2025; Luckin et al., 2022). Acting as quasi-pedagogical agents, AI tutors influence pacing, sequencing, feedback, and pedagogical pathways, thereby mediating the relationship between students and teachers. While both elements may enhance personalisation and access, they also intensify datafication and surveillance, potentially constraining student autonomy and professional agency if left outside clear governance arrangements.
Figure 1 presents a stylised hybrid model of education governance prior to the large-scale introduction of AI. The horizontal axis represents the degree of centralisation (centralisation–decentralisation), while the vertical axis captures the mode of control (regulation–autonomy). Core elements in the curriculum domain, such as curriculum frameworks and system-level standards, are located in the centralisation–regulation quadrant, reflecting national steering and accountability functions. Domains such as school organisation, school leadership, teachers, and pedagogy are positioned closer to decentralised but still regulated spaces, indicating local discretion exercised within nationally defined rules. Figure 1 serves as an analytical baseline, illustrating how authority and responsibility were conventionally distributed in education systems before AI introduced new cross-cutting forms of influence.
Distinguishing analytically between AI content and AI tutors allows for a more precise examination of how different AI functions redistribute authority and responsibility across levels of education governance, as visualised in Figure 2. Figure 2 extends the baseline governance model by introducing AI as a systemic and transversal actor that cuts across the centralisation–decentralisation and regulation–autonomy dimensions. Foundational AI infrastructures (e.g., AI models), AI content, and AI tutors are positioned closer to the centralisation–regulation quadrant due to their concentration among a limited number of large technology firms and platform providers that control model development, computational resources, and data infrastructures, and whose operations are shaped by legal, technical, and data standards. At the same time, their effects materialise locally in classrooms, reshaping pedagogy and student experience. Figure 2 visualises this structural mismatch between centralised development and decentralised use, clarifying why traditional hybrid governance arrangements are insufficient in AI-mediated education systems.
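For readers who wish to reproduce the stylised mapping in Figures 1 and 2, the sketch below, written in Python with matplotlib, plots the two governance axes and positions the domains discussed above. The coordinates are illustrative assumptions chosen to convey the quadrant logic, not empirical measurements from the paper.

```python
# Illustrative sketch of the governance mapping in Figures 1 and 2.
# All coordinates are hypothetical placements for illustration only.
import matplotlib.pyplot as plt

# (x, y): x = centralisation (-1) ... decentralisation (+1)
#         y = regulation (-1) ... autonomy (+1)
baseline_domains = {
    "Curriculum frameworks / standards": (-0.7, -0.7),  # centralised, regulated
    "School organisation": (0.5, -0.3),                 # decentralised, regulated
    "School leadership": (0.6, -0.2),
    "Teachers and pedagogy": (0.7, -0.1),
}
ai_elements = {  # Figure 2: transversal AI elements (assumed positions)
    "Foundational AI infrastructures": (-0.8, -0.5),
    "AI content": (-0.5, -0.4),
    "AI tutors": (-0.4, -0.3),
}

fig, ax = plt.subplots(figsize=(7, 7))
ax.axhline(0, color="grey", linewidth=0.8)
ax.axvline(0, color="grey", linewidth=0.8)
for label, (x, y) in baseline_domains.items():
    ax.scatter(x, y, color="tab:blue")
    ax.annotate(label, (x, y), textcoords="offset points", xytext=(5, 5))
for label, (x, y) in ai_elements.items():
    ax.scatter(x, y, color="tab:red", marker="s")
    ax.annotate(label, (x, y), textcoords="offset points", xytext=(5, -10))
    # Arrows suggest the transversal reach of AI: developed centrally,
    # enacted locally in the decentralisation-autonomy quadrant.
    ax.annotate("", xy=(0.7, 0.5), xytext=(x, y),
                arrowprops=dict(arrowstyle="->", color="tab:red", alpha=0.4))
ax.set_xlabel("Centralisation  <->  Decentralisation")
ax.set_ylabel("Regulation  <->  Autonomy")
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.set_title("Stylised hybrid governance map (illustrative)")
plt.tight_layout()
plt.show()
```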
2 Education Governance in the Context of Digital and AI Transformation
2.1 Classical Models of Education Governance
Education governance has traditionally been conceptualised through the distribution of authority, responsibility, and accountability across different levels of the education system. Classical models are commonly described along a spectrum ranging from centralised to decentralised governance. In centralised systems, national authorities retain primary control over curriculum standards, assessment regimes, teacher regulation, and funding mechanisms, aiming to ensure coherence, equity, and uniform minimum standards. In decentralised arrangements, greater decision-making power is delegated to regional authorities, municipalities, or individual schools, enabling local responsiveness in school organisation, school leadership, teaching, and pedagogy.
Most contemporary education systems operate through hybrid governance models that combine elements of both approaches. Central authorities typically define overarching policy objectives, legal frameworks, curriculum standards, and accountability mechanisms, while local actors—such as school leaders and teachers—exercise discretion over pedagogical implementation, organisational practices, and day-to-day decision-making (OECD, 2015). Within these models, professional autonomy is balanced against system-level steering through regulated forms of decentralisation.
Crucially, classical governance models were developed in institutional contexts where technologies functioned primarily as administrative supports or instructional aids, rather than as active decision-making actors. Authority was assumed to reside within identifiable institutional actors, and accountability mechanisms were designed accordingly. This baseline assumption provides an essential point of reference for understanding why the integration of AI—capable of generating content, shaping pedagogical pathways, and mediating learning processes—poses a fundamental challenge to established governance arrangements.
2.2 Digitalisation, Platformisation, and Datafication of Education
Before the widespread adoption of AI, digitalisation had already begun to reshape education governance through the expansion of digital platforms, data infrastructures, and performance-monitoring systems. Research on the platformisation of education highlights how digital technologies increasingly mediate teaching, learning, and management by collecting, processing, and analysing large volumes of educational data. These developments have been associated with new forms of datafication, whereby educational processes, learner behaviour, and professional practice are translated into quantifiable indicators used for monitoring, comparison, and intervention.
Critical scholarship has shown that platform-based governance tends to reconfigure power relations within education systems. Digital platforms often centralise control over data standards, analytics, and system architectures, while simultaneously decentralising responsibility for implementation and outcomes to school leaders and teachers. This dynamic has been linked to the intensification of performativity, continuous monitoring, and audit cultures, as well as to the growing influence of private actors in shaping educational priorities and practices. From a governance perspective, platformisation thus represents an intermediate stage in which digital technologies begin to strain classical hybrid models without fully displacing their institutional logic.
Concerns related to surveillance and loss of autonomy have been central to this strand of scholarship. AI systems enable continuous monitoring of students and teachers, raising questions about privacy, professional discretion, and the narrowing of pedagogical choice. As Filgueiras (2024) argued, datafication may foster homogenised learning processes and standardised behavioural expectations, as algorithmic systems privilege dominant patterns embedded in data over contextual and plural forms of knowledge. However, while this work provides important insights into the socio-technical consequences of digitalisation, it largely treats governance as an implicit backdrop rather than as an explicit object of analysis. As a result, the implications of these developments for the redistribution of authority, responsibility, and accountability across education systems remain insufficiently theorised.
2.3 AI as a Systemic and Transversal Actor
The integration of AI marks a qualitative shift from earlier phases of digitalisation and platformisation, as AI systems increasingly function as systemic governance-relevant components rather than as neutral or supportive technologies. In this paper, referring to AI as an “actor” does not imply intentional agency or independent decision-making capacity, but denotes the fact that algorithmic rules, optimised objectives, and data-driven feedback loops embedded in AI systems exert durable and consequential influence on educational decision-making. Unlike conventional digital tools, AI systems may move beyond mediation by participating in educational processes through generated content, adaptive recommendations, and automated feedback, as observed in emerging classroom-facing applications reported in recent policy documents and sector analyses. This active role challenges the foundational governance assumption that authority and responsibility are exercised exclusively by identifiable human and institutional actors. Illustratively, recent international policy frameworks—such as the OECD guidance on AI in education and the European Union AI Act—explicitly recognise AI systems used in education as objects of governance requiring differentiated oversight, risk classification, and accountability arrangements, underscoring that these issues are no longer merely hypothetical but are already being addressed at policy level.
From a governance perspective, AI introduces a systemic and transversal logic that cuts across the established divisions between centralisation–regulation and decentralisation–autonomy. The term “transversal” is used here in a precise sense to describe how AI systems simultaneously affect decision-making at multiple levels of the education system—policy design, technological development, and pedagogical enactment—without being fully governed at any single level. On the one hand, the development and operation of foundational AI infrastructures—such as large language models, data platforms, and algorithmic optimisation systems—are highly centralised, often controlled by a small number of organisations with the technical capacity, data access, and infrastructural resources required to develop and deploy such systems. These infrastructures depend on access to large-scale data resources, advanced computational capacities, and regulatory environments that are typically shaped at national or supranational levels. On the other hand, the effects of these systems materialise locally, within classrooms and schools, where AI-mediated tools influence everyday pedagogical decisions, learner experiences, and professional practices.
This structural mismatch between centralised development and decentralised enactment places new strain on traditional hybrid governance arrangements. Decisions embedded in AI systems—such as how content is generated, how learning pathways are prioritised, or how performance is evaluated—are often made outside the formal education system, yet they directly shape teaching and learning. As a result, authority becomes distributed across human and algorithmic actors, accountability chains become blurred, and existing regulatory mechanisms struggle to address the opacity, adaptivity, and scalability of AI-driven decision-making.
Conceptualising AI as a systemic and transversal actor helps to clarify why governance challenges associated with AI cannot be reduced to issues of implementation, ethics, or professional development alone. Instead, AI reshapes the architecture of education governance itself by redistributing decision-making power, redefining professional roles, and altering the conditions under which autonomy and accountability are exercised. This perspective provides the analytical foundation for examining how hybrid governance models must be reconfigured to respond to AI-mediated education systems.
2.4 Risks of Homogenisation, Surveillance, Accountability, and Erosion of Agency
A central concern in critical scholarship on AI in education is the risk that AI systems may, under certain conditions, contribute to the homogenisation of learning processes and the erosion of student and teacher agency (Filgueiras, 2024; Selwyn, 2019; Williamson, 2017). While AI-enabled personalisation is often presented as a means of tailoring education to individual needs, algorithmic adaptation frequently relies on pattern recognition and optimisation logics that privilege dominant data trends. As a result, learning pathways may converge toward standardised trajectories, narrowing epistemic diversity and limiting opportunities for divergent, exploratory, or critical forms of learning.
These dynamics are closely linked to intensified surveillance (Williamson & Hogan, 2020; Zuboff, 2019). AI systems typically depend on continuous data collection regarding learner behaviour, performance, and interaction patterns to generate predictions and recommendations. From a governance perspective, this expands monitoring beyond traditional assessment into ongoing forms of behavioural and cognitive tracking. Such practices raise significant concerns regarding privacy, proportionality, and the normalisation of surveillance in educational settings, particularly when foundational AI infrastructures are controlled by external stakeholders rather than public educational authorities.
The implications for student and teacher agency are substantial (Selwyn, 2016). For students, pervasive algorithmic guidance may reduce opportunities to exercise judgment, make mistakes, and develop metacognitive awareness, thereby undermining intellectual autonomy. For teachers, AI-mediated recommendations and analytics can subtly reframe professional judgment, shifting authority from pedagogical expertise toward algorithmic outputs. When combined with accountability pressures and performance metrics, this dynamic risks transforming AI from a supportive tool into a mechanism of indirect control.
Crucially, these risks cannot be understood solely as ethical or pedagogical issues. They are governance problems that stem from how decision-making power, oversight, and responsibility are allocated within AI-mediated education systems. Without explicit governance frameworks that define limits on surveillance, protect professional discretion, and preserve spaces for human judgment, AI integration may unintentionally reinforce standardisation, dependency, and asymmetrical power relations. Recognising these risks is therefore essential for designing hybrid governance models capable of balancing innovation with autonomy, diversity, and democracy.
2.5 Governance Gap and the Need for Reconfigured Hybrid Governance Models
While classical governance models provide a useful baseline for understanding the distribution of authority and accountability, and while critical scholarship has extensively documented the risks associated with digitalisation, platformisation, and datafication, these strands of research remain insufficiently integrated. In particular, existing frameworks struggle to account for AI systems that operate simultaneously as centrally developed infrastructures and locally enacted pedagogical agents.
This gap manifests in three ways. First, regulatory and policy instruments are often oriented toward either system-level oversight or school-level autonomy, without addressing how algorithmic decision-making redistributes power across these levels. Second, accountability mechanisms remain largely designed for human actors and institutions, leaving limited capacity to govern opaque, adaptive, and scalable AI systems. Third, professional autonomy and student agency are frequently treated as implementation concerns rather than as core governance principles that require explicit institutional protection.
Addressing these challenges requires reconfigured hybrid governance models rather than their abandonment. Such reconfiguration entails clarifying which AI-related decisions must be centralised—such as data governance standards, transparency requirements, and accountability frameworks—and which should remain decentralised, including pedagogical judgment, contextual adaptation, and instructional choices. It also requires recognising AI as a systemic and transversal actor whose influence must be anticipated, regulated, and monitored across education systems.
By articulating this governance gap and proposing a conceptual lens through which it can be addressed, this paper positions hybrid governance not as a static compromise between centralisation and decentralisation, but as a dynamic and adaptive framework capable of responding to the structural challenges introduced by AI-mediated education systems. While this analysis adopts a system-level perspective, the governance challenges associated with AI are likely to manifest differently across educational sectors and governance traditions—for example between compulsory education and higher education, or between highly centralised and more decentralised systems—depending on school capacity, regulatory maturity, and resource availability. This synthesis provides the foundation for the subsequent discussion of governance mechanisms, professional capacities, and policy implications in the sections that follow.
3 Discussion: Rethinking Education Purpose and Governance in the Age of AI
Building on the conceptual framework and literature reviewed in the previous section, this discussion shifts from analytical mapping to interpretative synthesis. It examines how the governance gap identified above manifests in concrete tensions between regulation and autonomy, innovation and standardisation, and efficiency and democratic educational values. Rather than treating AI integration as a technical or managerial challenge, the discussion foregrounds governance as a normative question: how education systems can retain human-centred purposes, professional agency, and student autonomy while responding to the structural transformations introduced by AI. To sharpen analytical focus, the discussion is structured around three interrelated governance themes: (1) the redistribution of authority and decision-making in AI-mediated education; (2) accountability, transparency, and human oversight; and (3) the preservation of educational purpose, professional agency, and student autonomy. These themes are central to the reconfigured hybrid governance models in AI-mediated education systems.
3.1 Redistribution of Authority and Decision-Making in AI-Mediated Education
The introduction of AI systems into education has the potential to redistribute authority across multiple actors and levels of governance, a shift illustrated in Figure 2 by the transversal positioning of foundational AI infrastructures (e.g., AI models), AI content, and AI tutors across traditional centralisation–decentralisation and regulation–autonomy domains. Decisions that were previously made within ministries, schools, or classrooms are increasingly embedded in algorithmic systems developed by external stakeholders, often outside traditional public accountability structures. This redistribution challenges established assumptions about who governs educational processes and on what basis decisions are legitimised.
In hybrid governance systems, authority has historically been shared between central bodies responsible for regulation and local actors responsible for implementation. Figure 2 makes visible how AI reconfigures this balance by inserting algorithmic decision-making between political intent and pedagogical practice. As a result, governance must address not only vertical relations between levels of the education system, but also horizontal relations involving technology providers, foundational AI infrastructures, and the algorithmic systems that shape educational outcomes.
3.2 Accountability, Transparency, and Human Oversight
The redistribution of authority introduced by AI raises acute questions of accountability. Existing accountability mechanisms in education are largely designed to evaluate human actors and school performance, relying on transparency, responsibility, and traceability of decision-making. AI systems, by contrast, often operate through opaque models, adaptive learning processes, and continuous updates, complicating attribution of responsibility.
From a governance perspective, AI-mediated education requires explicit mechanisms for accountability, transparency, and human oversight, as also emphasised in international policy guidance for education systems (OECD, 2023; UNESCO, 2021). This includes clarity about how algorithmic recommendations are generated, how data are collected and used, and who bears responsibility for errors, bias, or unintended consequences. Without such mechanisms, hybrid governance systems risk delegating authority to systems that cannot be meaningfully scrutinised or contested within existing institutional structures.
3.3 Preservation of Educational Purpose, Professional Agency, and Student Autonomy
Beyond institutional design, AI integration raises fundamental questions about the purpose of education and the values that governance arrangements are meant to protect. Education systems are not solely instruments of efficiency or optimisation; they are normative institutions tasked with fostering intellectual autonomy, critical judgment, and democratic participation.
Reconfigured hybrid governance models must therefore ensure that AI systems support rather than undermine professional agency and student autonomy, a concern echoed in recent work on learner empowerment and future-ready education systems (Gašević et al., 2023; OECD, 2024). This requires safeguarding spaces for human judgment, pedagogical discretion, and ethical reflection, even as AI systems provide new forms of support and personalisation. Governance arrangements that prioritise efficiency or performance metrics without protecting these values risk narrowing educational aims and reinforcing standardisation. These three themes highlight that the challenge of AI in education governance is not whether to adopt AI, but how to govern it in ways that align technological innovation with public educational purposes.
4 Policy Recommendations: Operationalising Reconfigured Hybrid Governance in AI-Mediated Education
4.1 Clarifying Authority and Regulating AI as a Systemic and Transversal Actor
First, mandate pre-deployment audits and public registries. AI systems intended for classroom use should undergo independent audits addressing safety, bias, data protection, and pedagogical alignment. Approved systems and versions should be listed in publicly accessible registries, including clear procedures for incident reporting and rollback.
Second, adopt model procurement frameworks with exit rights. Public procurement should require interoperability, data portability, open technical standards, and explicit exit clauses to prevent institutional lock-in to specific platforms.
Third, regulate foundational AI infrastructures as high-impact systems. Large-scale models, educational platforms, and generative systems that shape curriculum delivery and assessment should be subject to enhanced regulatory oversight comparable to other high-risk public-sector technologies.
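As one possible way to make the registry recommendation concrete, the sketch below shows a hypothetical Python data model for a public registry entry covering an approved classroom AI system, recording audit status, version, and rollback and incident-reporting fields. All names and the enumeration of audit areas are illustrative assumptions, not an existing standard or schema.

```python
# Hypothetical data model for a public registry of approved classroom AI
# systems, as proposed in Section 4.1. Field names are assumptions only.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class AuditArea(Enum):
    SAFETY = "safety"
    BIAS = "bias"
    DATA_PROTECTION = "data_protection"
    PEDAGOGICAL_ALIGNMENT = "pedagogical_alignment"


@dataclass
class RegistryEntry:
    system_name: str
    provider: str
    approved_version: str
    audit_date: date
    audits_passed: set[AuditArea]        # independent pre-deployment audits
    incident_contact: str                # where schools report incidents
    rollback_version: str | None = None  # last known-good version, if any
    exit_clause: bool = True             # procurement exit rights (Section 4.1)
    notes: list[str] = field(default_factory=list)

    def is_fully_audited(self) -> bool:
        """An entry is deployable only if every audit area has been passed."""
        return self.audits_passed == set(AuditArea)


# Example usage with fictional values:
entry = RegistryEntry(
    system_name="ExampleTutor",
    provider="ExampleVendor",
    approved_version="2.3.1",
    audit_date=date(2025, 1, 15),
    audits_passed={AuditArea.SAFETY, AuditArea.BIAS, AuditArea.DATA_PROTECTION},
    incident_contact="incidents@education-registry.example",
    rollback_version="2.2.0",
)
print(entry.is_fully_audited())  # False: pedagogical alignment audit missing
```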
4.2 Redesigning Accountability, Transparency, and Human Oversight Mechanisms
First, enforce transparency-by-design requirements. Classroom-facing AI systems should provide clear documentation, including model cards, risk disclosures, evaluation protocols, and—where feasible—information on training data provenance and known limitations.
Second, institutionalise human-in-the-loop governance. School leaders and teachers must retain final decision-making authority over pedagogical and assessment-related uses of AI, with explicit rights to override or reject algorithmic recommendations.
Third, establish rapid evidence and monitoring loops. Given the pace of AI system updates, traditional evaluation cycles are insufficient. Policymakers should support embedded, rapid-cycle evaluations, using shared rubrics and light reporting to inform ongoing approval and oversight decisions.
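To illustrate how the second and third mechanisms might interlock in practice, the following sketch wraps an algorithmic recommendation in a teacher decision gate: the human decision is always final, and every acceptance or override is logged as a time-stamped event that a rapid-cycle evaluation could aggregate. All names (Recommendation, TeacherDecision, OversightLog) are assumptions introduced for illustration; no existing platform API is implied, and this is one possible realisation rather than a prescribed implementation.

```python
# Hypothetical sketch of human-in-the-loop governance (Section 4.2).
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Recommendation:
    system: str        # which AI system produced the recommendation
    student_id: str
    action: str        # e.g. "assign remedial module M3"
    rationale: str     # transparency-by-design: disclosed reasoning and limits


@dataclass
class TeacherDecision:
    accepted: bool
    final_action: str  # the teacher's decision always prevails
    reason: str


class OversightLog:
    """Append-only log of decisions, feeding rapid-cycle evaluation."""

    def __init__(self):
        self.events: list[dict] = []

    def record(self, rec: Recommendation, dec: TeacherDecision) -> None:
        self.events.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": rec.system,
            "recommended": rec.action,
            "enacted": dec.final_action,
            "overridden": not dec.accepted,
            "reason": dec.reason,
        })

    def override_rate(self) -> float:
        """A simple monitoring signal: how often teachers reject the AI."""
        if not self.events:
            return 0.0
        return sum(e["overridden"] for e in self.events) / len(self.events)


# Example: the teacher overrides a pacing recommendation.
log = OversightLog()
rec = Recommendation("ExampleTutor", "s-042", "skip unit 4",
                     "mastery predicted from prior responses")
dec = TeacherDecision(False, "review unit 4 in class",
                      "prediction conflicts with classroom observation")
log.record(rec, dec)
print(f"Override rate: {log.override_rate():.0%}")  # 100%
```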
4.3 Safeguarding Educational Purpose, Professional Agency, and Student Autonomy
First, guarantee teacher decision rights and protect professional judgment. Governance frameworks should codify teachers’ authority to adapt, contextualise, or decline AI-supported instructional practices without punitive accountability consequences.
Second, make AI-related professional development a structural requirement. Continuous, practice-oriented professional learning on AI should be embedded within teachers’ career pathways, with centrally defined quality standards and locally adaptable delivery models.
Third, embed AI and data literacy as core curricular elements. Students should develop not only technical familiarity with AI systems but also critical understanding of algorithmic decision-making, data ethics, and the societal implications of automation.
Fourth, prevent homogenisation and excessive surveillance. Policies should mandate data minimisation, prohibit non-educational profiling, ensure opt-out rights with non-inferior learning alternatives, and promote pedagogical diversity in AI-supported learning environments.
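The fourth recommendation can also be expressed as machine-readable policy. The sketch below shows one hypothetical configuration, here a Python dictionary with a small validation check, encoding data minimisation, a profiling prohibition, and opt-out rights; the keys, categories, and retention threshold are illustrative assumptions rather than a standardised schema.

```python
# Hypothetical policy configuration for the Section 4.3 safeguards.
# Keys, categories, and thresholds are illustrative assumptions only.
POLICY = {
    "data_minimisation": {
        # Only data strictly needed for the stated pedagogical purpose.
        "allowed_categories": ["task_responses", "progress_markers"],
        "prohibited_categories": ["location", "biometrics", "social_graph",
                                  "off_platform_behaviour"],
        "retention_days": 180,
    },
    "profiling": {
        "non_educational_profiling": False,  # prohibited outright
    },
    "opt_out": {
        "available": True,
        # Opt-out must come with a non-inferior learning alternative.
        "non_inferior_alternative_required": True,
    },
}


def validate_collection(requested_fields: list[str]) -> list[str]:
    """Return only the fields a compliant system may collect."""
    allowed = set(POLICY["data_minimisation"]["allowed_categories"])
    prohibited = set(POLICY["data_minimisation"]["prohibited_categories"])
    return [f for f in requested_fields if f in allowed and f not in prohibited]


# Example: a vendor request is pared down to the permitted minimum.
print(validate_collection(["task_responses", "location", "social_graph"]))
# -> ['task_responses']
```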
Together, these policy recommendations illustrate how reconfigured hybrid governance models can translate analytical insights into actionable institutional arrangements. By aligning authority, accountability, and agency with the realities of AI-mediated education, education systems can harness the benefits of AI while safeguarding public values and educational purposes.
5 Limitations and Directions for Future Research
This study has three limitations that should be acknowledged when interpreting its findings. First, the analysis is primarily conceptual and theoretical in nature. While this approach enables a systematic examination of governance dynamics and the development of an integrative analytical framework, it does not provide empirical validation through case studies, comparative data, or longitudinal evidence. As a result, the applicability of the proposed governance framework may vary across different national, institutional, and regulatory contexts.
Second, the discussion focuses on education governance at a systemic level and does not examine in detail how AI governance is negotiated and enacted within individual schools, classrooms, or teacher communities. Future research could complement this macro-level perspective with meso- and micro-level studies that explore how governance principles are translated into everyday practices, professional routines, and decision-making processes.
Third, the rapid pace of AI development presents an inherent limitation. Governance arrangements analysed in this study may be affected by emerging AI capabilities, regulatory responses, and market dynamics that evolve beyond the temporal scope of the current analysis. Ongoing research will therefore be necessary to assess how reconfigured hybrid governance models adapt over time in response to technological change.
Building on these limitations, four directions for future research can be identified. First, comparative studies across education systems could examine how different governance traditions shape responses to AI integration and how regulatory approaches influence outcomes related to educational equity, autonomy, and quality. Second, longitudinal research could investigate how governance arrangements evolve as AI systems become more deeply embedded in educational infrastructures. Third, empirical research focusing on specific AI applications, such as generative content tools or AI tutoring systems, could provide finer-grained insights into the governance challenges associated with distinct technological functions. Fourth, future research should further explore the normative dimensions of AI governance in education, including questions of democratic participation, public accountability, and the role of students and teachers in shaping the governance of AI-enabled systems. Such work would strengthen the empirical and normative foundations of reconfigured hybrid governance models and support the development of education policies that are both technologically responsive and democratically grounded.
6 Conclusions
This paper has argued that the rapid diffusion of AI in education represents not merely a technological innovation, but a structural challenge to established models of education governance. By conceptualising AI as a systemic and transversal actor, the study has shown how AI disrupts traditional distributions of authority, blurs accountability relationships, and reshapes the conditions under which professional agency and student autonomy are exercised. Existing hybrid governance models, designed for earlier phases of digitalisation, are therefore increasingly misaligned with the realities of AI-mediated education systems.
Through a structured review of governance theory, digitalisation and platformisation scholarship, and critical analyses of AI-related risks, the paper identified a persistent governance gap. While substantial attention has been paid to ethical, pedagogical, and technical dimensions of AI in education, questions of governance—particularly concerning who decides, who is accountable, and how public educational values are protected—remain insufficiently theorised. Addressing this gap requires moving beyond fragmented responses toward an integrated governance framework capable of engaging with AI’s cross-cutting effects across system levels.
The reconfigured hybrid governance model proposed in this study offers such a framework. Rather than abandoning the balance between central steering and local autonomy, it emphasises the need to recalibrate this balance in light of AI’s centralised development and decentralised enactment. By distinguishing foundational AI infrastructures (e.g., AI models), AI content, and AI tutors, the model clarifies how different AI functions redistribute authority and responsibility, and why governance responses must be differentiated rather than uniform.
Importantly, the analysis underscores that effective AI governance in education is not solely a matter of risk mitigation or efficiency enhancement. It is fundamentally a normative project concerned with safeguarding educational purpose, democratic accountability, and human agency. The policy recommendations outlined in this study demonstrate how governance principles can be operationalised through concrete institutional arrangements that align innovation with public values.
As education systems continue to experiment with and selectively adopt AI technologies, governance choices made today will shape not only how AI is used, but what education is for. By foregrounding governance as a central analytical and policy concern, this paper contributes to ongoing debates on how education systems can remain adaptive and innovative while preserving their democratic and human-centred foundations in the age of AI.