A Systematic Review of the Dark Side of AI in Education: A Critical Social Theory Perspective

Kofi Koranteng Adu, Yaw Owusu-Agyeman

Frontiers of Digital Education, 2026, Vol. 3, Issue 2: 10

DOI: 10.1007/s44366-026-0084-0
REVIEW ARTICLE

Abstract

Because AI is widely viewed as a threat to academic integrity, several questions have arisen about its dark side. While AI offers many opportunities for teaching and learning, questions have also arisen regarding its impact on the future of education. This paper offers a critical perspective on the dark side of AI in education by drawing on critical social theory. It examines how AI can deepen educational inequality, entrench power relations, and erode cultural identity. The findings illustrate that differential levels of economic development, digital infrastructure, cultural norms, and technological capacity significantly influence the integration and impact of AI across global education systems. To counter these risks, this paper calls on AI developers to adopt inclusive design from the outset so that these tools are accessible, adaptable, and affordable for the economies of the Global South. The paper further highlights the urgency of incorporating African voices, values, ethics, and worldviews into global AI governance conversations. Africa and other developing countries must not merely be sites of AI deployment but key actors in shaping the educational futures these technologies enable. Their expertise is essential to confronting colonial legacies in technological innovation and ensuring that AI advances democratic empowerment rather than digital dependency.


Keywords

AI governance / dark side of AI / Africa / critical theory

Cite this article

Kofi Koranteng Adu, Yaw Owusu-Agyeman. A Systematic Review of the Dark Side of AI in Education: A Critical Social Theory Perspective. Frontiers of Digital Education, 2026, 3(2): 10. DOI: 10.1007/s44366-026-0084-0


1 Introduction

There is no doubt that AI has had a significant impact on almost every facet of human life and has become a key technology in contemporary society. From natural language processing to image and pattern recognition, AI techniques have permeated today's infrastructure and many economies worldwide (Lindgren, 2023). Within academic circles, generative AI (GenAI) has been used for speech writing, grading, translation, facial recognition, and algorithmic modeling (Zawacki-Richter et al., 2019). This has been possible because GenAI systems are designed to generate content (e.g., text, images, audio, simulations, video, and code) from the data on which they are trained (Xie et al., 2023). The advent of AI technologies in higher education was anticipated to herald a paradigm shift, promising unparalleled opportunities for personalized learning and knowledge acquisition (Fowler, 2023). However, despite all the excitement and hype, concerns have been raised about GenAI posing a threat to academic integrity (Stokel-Walker, 2022).

Informed by critical social theory, the paper argues that there is a need for critical social studies of AI by researchers across various disciplines, particularly in the humanities and social sciences. This need arises because, in contrast to predominantly technology-driven sectors where critique often serves to legitimize systems and accelerate the adoption of new tools, critical AI scholarship in the humanities and social sciences engages more deeply and broadly with questions of power, ideology, ethics, inequality, and social consequences (Lindgren, 2023). For example, there are concerns about AI use, including issues related to plagiarism, classroom activities, authentication, academic integrity (Crawford et al., 2023), and the changing roles of teachers and students (Sidorkin, 2025). Other narratives have revealed serious concerns about the legal and ethical implications of using AI, including anxieties about mistakenly labeling plagiarized work as original work, students completing assignments quickly and nearly effortlessly, and the creation of an unfair learning environment that may demoralize students who uphold academic integrity (Daniel et al., 2025; Mahande et al., 2026). Likewise, AI-generated answers observed in national and international examinations undermine the integrity and authenticity of the grades that students earn (Xie et al., 2023). One such incident was reported in Ghana in 2023, when the results of candidates from 235 schools were withheld for allegedly using AI-generated answers during the 2023 West African Senior School Certificate Examination (Ghana Education News Editorial Team, 2023).

While AI promises customized learning experiences, its potential to perpetuate biases and disadvantage specific demographic cohorts is worrying (Lazarus et al., 2024). A central concern is whether detection tools such as Turnitin, Unicheck, and Noplag can reliably identify text produced by viral chatbots and thereby guarantee academic integrity (Eke, 2023). As the capabilities of technology increase, the threat of human obsolescence looms within academic circles, raising concerns about whose interests GenAI technology actually serves. Various scholars and organizations have expressed their reservations about this looming threat, and in response, the European Union proposed the first-ever comprehensive AI regulations in 2021. The regulations sought to decrease the risks and threats to humans while acknowledging the prospects of an increasingly "smart" world. A balance between human intuition and machine precision is needed, which requires careful calibration to maximize the full potential of AI. Using critical social theory as a theoretical framework, this paper takes a panoramic view of the dark side of AI in education in the Global South. Such research is timely given the heightened concerns about the unintended consequences of AI in education.

This systematic review makes an important contribution to the literature on AI in education by presenting a critical theoretical framework, the "dark side of AI," grounded in Marcuse's (1964) philosophy, to examine how AI reinforces existing disparities in education in Africa. The paper further proposes what we refer to as a context-sensitive approach to AI governance and design systems, one that integrates African values, ethics, and epistemologies and departs from the narrative of technological determinism toward inclusive and culturally responsive innovation. The next section addresses critical social theory and Marcuse's (1964) critical theory of technology and AI, which together provide the theoretical framework. It then examines critical social theory and AI in a broader sense.

2 Critical Social Theory

The early Frankfurt School addressed the twentieth century's epochal crisis of modernity via its critique of technology (Feenberg, 2017). Indeed, the original Frankfurt School's view of technology was based on its critique of modernity and future possibilities (Delanty & Harris, 2021), because it centered on instrumental rationality and related technocratic forms of power. The Frankfurt School's critique of technology has garnered significant attention in modern scholarly studies (Delanty & Harris, 2021). Using Marxist theory, Frankfurt School scholars viewed technology not merely as machinery but also as a tool that complements labor. Moreover, they recognized that technology has its own complexities and is not reducible to capitalism (Delanty & Harris, 2021). Other scholars from the first generation of the Frankfurt School, such as Adorno and Horkheimer (Santos, 2010), criticized technology and helped coin the term "critical social theory" (Fernandes, 2021). It is instructive to note that while critical social theory traces its origin to the mid-twentieth century, it cannot be confined to the historical context in which it was developed; it remains relevant due to its capacity for adaptation and self-criticism. This paper offers a critical perspective on the dark side of AI in education by drawing on Marcuse's critical theory of technology. It examines how AI can deepen educational inequality, entrench unequal power relations, and threaten cultural identity.

2.1 Marcuse’s Critical Theory of Technology and AI

We adopt Marcuse's (1964) critical theory of technology, a framework from the first generation of the Frankfurt School, as the theoretical underpinning of this paper. Marcuse's (1964) work, especially in One-dimensional man: Studies in the ideology of advanced industrial society, critiques the way technology in capitalist society tends to normalize inequality, reinforce existing power structures, and suppress critical consciousness. He emphasized that technological advancement is not neutral; it often embodies the values, priorities, and power interests of dominant groups. To explore how the dark side of AI could affect teaching and learning in higher education, we examine different dimensions of Marcuse's work in terms of technological inequality and marginalization, dehumanization and academic integrity, power and ideology, and the need for liberation through critical awareness.

First, regarding technological inequality and marginalization, the paper argues that AI exacerbates educational inequality and excludes African cultural identity from most AI tools, a concern central to Marcuse's (1964) argument that technology can serve as a tool of domination when it is not developed inclusively.

Second, regarding dehumanization and academic integrity, many African higher education institutions are struggling to adopt and use AI. Marcuse warned that technological systems may prioritize efficiency over human values. This paper's concerns about AI undermining academic integrity and sidelining educators echo this tension between instrumental rationality and humanistic education. A recent study revealed that the challenges of AI use in African educational systems include the absence of a regulatory framework, the limited representation of African contexts, and difficulties in building inclusive partnerships that depend on a strong AI community and co-creation (Kiemde & Kora, 2022).

Third, power and ideology influence the African higher education environment and shape how teaching and learning are transformed. Just as Marcuse criticized the ideological use of technology to entrench capitalist control (Feenberg, 2023), this paper argues that AI design and deployment reflect ideological agendas that often ignore Global South perspectives and the needs of learners and academics.

Fourth, the need for liberation through critical awareness has become even more important in recent years due to the continuous marginalization of individuals who cannot afford digital learning tools, internet connectivity, or, in some cases, the electricity to power electronic devices. Marcuse advocated critical consciousness as a path to liberation (Thompson, 2025). This paper argues that inclusive design, local agency, and ethical AI policy are important in educational settings because they mirror the emancipatory goals highlighted by Marcuse, such as the creation of a new sensibility and an aesthetic dimension. By explicitly applying Marcuse's critical theory of technology in the findings section, we examine how AI serves not only as a technological tool but also as a sociopolitical element shaped by the dynamics of education, power, and culture in Africa. Marcuse's work has been critiqued on the basis that "radical subjectivity is an achievement of self-knowing rather than a form of subjectivity that is emergent from the social dialectic of technological civilization" (Thompson, 2025).

Technologies are influenced by economic, political, and discursive factors just as much as they are by practical technological advancement (Jobin & Katzenbach, 2023). These factors, among others, suggest that the challenges individuals face in the adoption and use of technology are systemic rather than merely personal. AI has been defined as the ability of machines to learn from experience, adapt to new inputs, or undertake tasks often performed by human minds, such as speech recognition, visual perception, language translation, and decision-making (Guenduez & Mettler, 2023; Pereira et al., 2023). However, this narrative has a dark side that needs to be explored. While various studies have highlighted the importance of examining the critical success factors of the implementation and use of AI (Leander & Burriss, 2020; Merhi, 2023), few studies have examined the challenges associated with the application of AI and how the development and use of AI tools could deepen inequality and affect other facets of human activity, especially in Global South economies. AI systems fall short of expectations when they are not successfully implemented; if these systems fail, the resources and effort invested are wasted, and institutions cannot realize the benefits mentioned above. Examining the key elements influencing the effective deployment of AI systems (Merhi, 2023) and the issues that arise from their development and application has therefore become crucial.

Although AI has the potential to improve research, pedagogy, administration, student experience, and student support, a critical approach is required due to concerns about academic integrity, job displacement, unconscious biases, environmental sustainability, commercialization, and regulatory shortcomings (Rudolph et al., 2024). It has been argued that the emergence of fully developed AI may signal the extinction of humanity (Cellan-Jones, 2014), since, owing to their slow biological evolution, humans would be unable to compete with complex AI tools (Lazarus et al., 2024). In addition, developing an atmosphere of responsible innovation and informed AI use, and promoting critical AI literacy among teachers and students, have become important (Rudolph et al., 2024).

3 Methodology

The paper adopted the preferred reporting items for systematic reviews and meta-analyses (PRISMA) approach to ensure transparency, rigor, and replicability throughout the screening process. Accordingly, a comprehensive search strategy was carried out in 4 major scholarly databases: Scopus, Web of Science, IEEE Xplore, and Springer Nature Link. We also included reputable institutional sources, focusing on literature addressing AI, critical social theory, governance, and implications for education, particularly in the Global South. In reviewing the relevant literature, we adopted a structured approach, focusing on the current state of research on "the dark side of AI in the Global South." All these databases were searched using the keyword "the dark side of AI in Africa." The titles and abstracts of retrieved articles were screened. During the identification stage, 108 records were retrieved from electronic databases. An additional 18 records were identified through handsearching, citation chaining, and manual exploration of organizational reports and grey literature from UNESCO and Oxford Insights. Following the removal of 11 duplicates, 115 unique records (articles and book chapters) remained for screening.

The screening stage involved a title and abstract evaluation to assess relevance to the core research. This stage resulted in the exclusion of 47 records that were either unrelated to education, focused solely on technical development without social sciences relevance, or provided insufficient scholarly grounding.

A total of 68 full-text articles, book chapters, and conference proceedings advanced to the eligibility stage. These were assessed in detail according to predefined inclusion criteria: (1) direct engagement with AI adoption or governance; (2) theoretical or empirical contributions to issues of equity, power, or inclusion; and (3) contextual relevance to the educational future of the Global South. Twenty articles, book chapters, and conference proceedings were excluded at this stage due to methodological weaknesses, lack of primary analyses, or tangential relevance to the review's conceptual framework.

Finally, 48 articles, 2 technical reports, and 4 book chapters met the inclusion criteria and were retained for the qualitative synthesis, informing the thematic analyses and conceptual insights presented in this systematic review; the oldest publication dates from 2000.
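For transparency, the record flow described above can be tallied with a short script. This is an illustrative sketch using only the counts stated in this review; it is not part of the study's methodology, and the variable names are ours:

```python
# Illustrative tally of the PRISMA record flow reported in this review.
# All counts are taken from the Methodology section above.

database_records = 108   # retrieved from the 4 electronic databases
other_sources = 18       # handsearching, citation chaining, grey literature
duplicates = 11

identified = database_records + other_sources     # records identified
screened = identified - duplicates                # unique records screened

excluded_title_abstract = 47                      # removed at screening
eligible = screened - excluded_title_abstract     # full texts assessed

excluded_eligibility = 20                         # removed at eligibility
included = eligible - excluded_eligibility        # retained for synthesis

print(identified, screened, eligible, included)   # 126 115 68 48
```

Reporting the arithmetic at each stage in this way mirrors the PRISMA flow diagram (Figure 1) and makes the screening process easy for readers to audit.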

The included studies represent a diverse intersection of global and regional scholarship, integrating critical perspectives on social structures and economic issues in education, technological complexity, infrastructures, institutional structures and policy frameworks, power relations, cultural identities, educational inequalities, and AI and governance in education. This review not only fills a gap in the literature, but also provides actionable insights for policymakers and practitioners seeking to implement AI solutions tailored to Africa’s needs.

Figure 1 visually sketches the paper selection process. Collectively, the PRISMA-guided process strengthened the review’s methodological robustness by ensuring balanced representation, minimizing selection bias, and grounding the paper in a well-defined body of peer-reviewed, authoritative sources.

4 Findings

In this section, we present the findings of our paper on the unintended consequences of AI in education, which we also refer to as the dark side of AI, with a particular emphasis on the Global South. Critical social theory was used to examine the negative effects of AI, emphasizing the potential for these technologies to worsen rather than resolve existing issues if measures are not taken to address them. More specifically, this section examines nine issues that could be considered the dark side of AI in education in Africa from a critical social theory perspective: (1) AI and governance in education; (2) educational inequality; (3) infrastructure; (4) cultural identity; (5) power relations; (6) technological complexity; (7) social structures and economic issues in education; (8) ideology; and (9) institutional structures and policy frameworks.

4.1 AI and Governance in Education

The use of AI by many governments of the world has recently gained prominence, as it has been used to improve public administration, corporate governance, and citizen–government interaction and participation (Montoya & Rivas, 2019; Ojo et al., 2019; Toll et al., 2019). In public administration, AI has been used to forecast health needs, predict domestic violence, and identify cases of theft for crime prevention (König, 2023). At the same time, crime prevention agencies, intelligence agencies, and police departments around the world are embracing AI-powered iris scanners and license plate trackers to extend their surveillance powers (Dauvergne, 2021). Within the political realm, AI has been used to predict election results, allocate national resources, and accommodate citizen preferences (Engin & Treleaven, 2019; Margetts & Dorobantu, 2019). For example, during elections in the United States and Europe, political parties and analytics firms have used AI-driven models to predict voter turnout and electoral outcomes, enabling targeted campaign strategies and resource deployment. While these tools enhance predictive accuracy, they also raise concerns about electoral manipulation and algorithmic bias.

While providing an exhaustive list of the benefits of AI in government is beyond the scope of this paper, its social implications are significant and far-reaching, particularly on a continent where democratic institutions are weak, human rights abuses are rife, and corruption is endemic (Adu, 2018). African governments, including those of Ethiopia and Zimbabwe, have reportedly deployed AI as an instrument of surveillance and social control, raising significant concerns about civil liberties, governance, and democratic accountability (Gwagwa et al., 2020). Foreign AI companies have also been accused of using false African identities as marketing tools to raise capital and eventually cash out (Pilling & Coulton, 2019).

4.2 Educational Inequality

AI has the capacity to expedite the achievement of global education objectives through the removal of barriers to education. This capacity can be observed in automatic grading, personalized learning, plagiarism detection, and feedback provision (Owoc et al., 2021). Unsurprisingly, AI has become the epicenter of the higher education landscape, having proven difficult to dislodge (Ahmad et al., 2023). However, from the perspective of critical social theory, AI in education has a dark side that ought not to be ignored. Reported cases of plagiarism, automated generation of essays, and real-time assistance during examinations have sparked global conversations, debates, and controversies (Tahir & Tahir, 2023), prompting educators to call for a responsible AI ecosystem within which researchers and innovators can act dutifully. Responsible AI emphasizes the ethical, transparent, and accountable use of AI in a manner consistent with user expectations, organizational values, and societal laws and norms (OECD, 2019). While AI presents an enormous opportunity in the area of quality education, Sub-Saharan Africa has not been able to take full advantage of it in terms of readiness.

4.3 Infrastructure

In terms of infrastructure, the reality is that Africa's digital infrastructure and ecosystem are mostly managed by companies such as Facebook, Google, Uber, Netflix, and Huawei (Kiemde & Kora, 2022). This explains why mobile internet connectivity and data ecosystems in Africa are still at the nascent stages of the African data revolution (Garcia & Kelly, 2015). Beyond that, Africa was among the lowest-scoring regions in terms of the readiness and implementation of AI (Oxford Insights and the International Development Research Centre, 2020). There are still issues of internet connectivity and software compatibility, which often affect practical and interactional experience (Segbenya et al., 2022). It is therefore important to examine how international communities can help close the technology gaps in Africa by focusing on specific local needs and problems in AI policy development, instead of using a one-size-fits-all approach that has slowed development efforts in Africa (Arakpogun et al., 2021). Beyond Africa, structural inequality in the adoption and usage of AI is also evident in countries across Europe, Asia, and South America, shaped by their distinct socioeconomic, linguistic, and cultural contexts. For example, in Asia, countries such as India face persistent challenges regarding digital divides between urban and rural communities. In many rural Indian communities, individuals struggle to use AI due to infrastructural deficiencies, affordability barriers, and low AI literacy, thereby limiting equitable access to AI-driven learning tools despite national initiatives, such as the National education policy 2020 (Ministry of Human Resource Development (Government of India), 2020), and indigenous language model development (Gupta & Kaul, 2024).

Likewise, some Southeast Asian countries are characterized by urban–rural disparities in internet connectivity and AI readiness, with rural areas showing only 55% internet penetration compared to 90% in urban centers, which further exacerbates educational inequities (Hendrian, 2025). Although regional frameworks, such as UNESCO's AI competency framework for students, have been developed to promote responsible AI integration, fragmented governance and uneven infrastructure continue to hinder inclusive adoption (Chiu, 2025).

In South America, AI adoption in education remains slow and uneven, with many initiatives centered on urban areas and private-sector projects, leaving rural schools underserved (Dodick, 2025). Countries such as Brazil and Mexico continue to face internet connectivity challenges and resource constraints, further deepening socioeconomic stratification and limiting the transformative potential of AI in addressing issues in the education sector (Khan et al., 2024). One of the challenges African countries face is that GenAI systems trained predominantly on Western and English-language datasets tend to displace indigenous knowledge systems and cultural practices, as well as to reinforce epistemic injustice across countries in the Global South (Kshetri, 2024). These dynamics reflect global patterns of digital neocolonialism, in which technological infrastructures remain the preserve and privilege of dominant cultural narratives while local epistemologies are marginalized (Ofosu-Asare, 2025).

While these inequities persist, especially in the Global South, the expectation that AI will personalize learning and enhance educational resilience cannot be fulfilled. Structural inequalities continue to affect the adoption of AI through infrastructural deficits, linguistic exclusion, and cultural homogenization. To address these challenges, countries should employ context-sensitive strategies (Kuriachan et al., 2021) that integrate indigenous epistemologies, multilingual resources, and participatory governance frameworks to ensure that AI promotes educational equity and cultural inclusivity rather than preserving dependency.

4.4 Cultural Identity

The extent to which AI is developed, adopted, and regulated essentially depends on the existing sociocultural setting. The African Ubuntu philosophy symbolizes an ontology with a collectivist outlook that highlights cooperation, respect for human relations, transparency to group members, and social justice (van Norren, 2023). However, the development of an Ubuntu-informed approach to AI is still in its embryonic stage. The application of AI differs from region to region and should be addressed within the sociocultural milieus or contexts of specific geographical settings. For instance, AI can have different social impacts depending on geographical context, as in Africa, where access to digital tools is limited. Additionally, individuals' perceptions and understandings of how AI works in addressing disruptions and potential harm could be shaped by local sociocultural contexts (Hagerty & Rubinov, 2019). The implication of this narrative is that Africa should be factored into the conceptualization, design, development, and deployment of all AI applications. This is because data obtained from African countries are seldom represented in the training of AI models (Birhane, 2021; Mohamed et al., 2020). Thus, AI training models do not necessarily incorporate the African context, which makes it difficult for GenAI to be used efficiently. For example, many GenAI systems perform poorly when translating African languages or localized forms of English (e.g., Ghanaian, Nigerian, or Kenyan English), as these linguistic varieties are underrepresented in training data. In other words, they lack contextual relevance, particularly with respect to infrastructural and cultural factors (Oxford Insights and the International Development Research Centre, 2020).

Buolamwini and Gebru's (2018) study on automated facial analysis algorithms showed significantly more misclassifications of darker-skinned females than of lighter-skinned males. Quite apart from this, global conversations on the sociocultural context of AI appear to be dominated by the Global North, thereby ignoring the very issues affecting the Global South. There is evidence that AI lacks sufficient media coverage in Africa (Ouchchy et al., 2020). Profits are so concentrated among the developers of AI that cultural context has been ignored. In recent years, scholars have highlighted the relationship between AI, critical social theory, and identity (Leander & Burriss, 2020). The idea behind this emerging nexus is how identities are constructed by machines at the individual level through AI's creation of individual types (Leander & Burriss, 2020). According to Delanty and Harris (2021), one major limitation of critical social theory's own account of technology is that it operates with the notion of technology as Technik. How can AI-generated content impose external cultural models, thereby diminishing the educational value of local knowledge?

GenAI technologies are mostly trained on Western, especially Eurocentric, datasets and linguistic norms, embedding assumptions about pedagogy and cultural relevance that may not align with the diverse educational contexts of the Global South (Nyaaba et al., 2024). This results in digital neocolonialism, where Western epistemologies dominate educational narratives and marginalize indigenous knowledge traditions and systems. For example, a recent study showed that AI-generated learning materials mostly privilege Euro–American cultural references while neglecting local languages and worldviews, thereby creating an epistemic homogenization that alienates learners from their own realities (Ofosu-Asare, 2025). Moreover, linguistic imperialism is strengthened when GenAI systems prioritize English and other dominant Western languages, further eroding cultural diversity in educational content in the Global South (Albeihi & Rice, 2025). Unfortunately, these patterns reflect a broader Western bias in AI development, which, either overtly or unconsciously, systematically excludes global perspectives and deepens structural inequalities in education (Mergen et al., 2025). To address these challenges, it is important for AI developers to deliberately integrate indigenous epistemologies, multilingual resources, and participatory governance frameworks to ensure that AI fosters cultural inclusivity rather than perpetuating dependency and inequality.

4.5 Power Relations

Africa has thus far played a minimal role in the development of global AI algorithms and in formulating ethical criteria for their use (van Norren, 2023). Notwithstanding the limited role played by African researchers in the AI sector, the relationship between AI and power relations in African education is complex, rooted in a combination of colonial legacies, local knowledge systems, and the cultural contexts of African societies. Power relations in knowledge production are unequal: research funding for education in Africa, for example, is often controlled by international donor foundations and development agencies based in the Global North, and many African universities adopt curricula modeled on Euro–American standards to meet international accreditation, ranking, or donor requirements. Such unequal power relations in knowledge production have significant implications for education delivery (Crawford et al., 2021; Ndlovu-Gatsheni, 2021) across countries, institutions, and educational levels. Critical social theory focuses on how the world, and by extension AI, is a product of active social construction, which entails certain presumptions about reality (Lindgren, 2023). AI cannot be fully value free. There are no clear-cut universal laws that can explain how AI interacts with society; rather, society is shaped by power relations as well as a variety of historical and background factors. As a result, technology that benefits some people may be detrimental to others. Likewise, AI research has come a long way from its inception to its current use as a tool for carrying out a variety of information processing activities, such as those performed by humans (Guenduez & Mettler, 2023).

Power relations significantly shape the development, ownership, and use of AI through the gatekeeping roles exercised by researchers, educational institutions, technology developers, and research laboratories. A prior study highlighted a lack of transparency and extractive collection as major problems associated with AI and empowerment (D'Ignazio & Bhargav, 2015). These two elements have implications for education in Africa. The first problem, a lack of transparency, means that data on interactions between individuals and their environments are often gathered with little or no approval from the owners of the information (D'Ignazio & Bhargav, 2015); that is, participants in discussions are not aware that their actions are being recorded (Jandrić, 2019). These practices are closely related to ethics, which forms a critical component of AI. The second problem is extractive collection, in which third parties gather information that the individuals from whom it is collected never intended for observation or use by others. Consequently, this deprives the subject of the ability to participate in the data gathering process and the chance to communicate with the collector.

4.6 Technological Complexity

Technological complexity involves the processes in which sophisticated algorithmic approaches are used to analyze data and in which discussions are conducted in highly technical terminology. It typifies situations in which data gathered from individuals or groups are used to make decisions with consequences for the people providing the information. The outcome is that subjects are excluded from decisions that affect them (D’Ignazio & Bhargav, 2015; Jandrić, 2019). Popenici (2022) and Rudolph et al. (2024) have argued that AI serves as a marketing tool laden with exploitation and exaggeration.

4.7 Social Structures and Economic Issues in Education

Currently, AI systems and applications used in education not only show technical deficiencies but also reproduce structural inequalities that are deeply rooted in systemic and historical injustices (Ryan et al., 2025). Madaio et al. (2022) suggested that dominant fairness paradigms focused on algorithmic uniformity or bias mitigation are insufficient to reveal such structural inequalities and historical injustices because they discount the social and technical systems that control AI in education. The sociotechnical systems in which educational AI operates promote ideologies about what counts as knowledge and who counts as a successful learner, privileging dominant cultural norms over those of marginalized cultures. For instance, AI models trained on historically biased data enable exclusionary practices that strengthen racialized, class-based, and gendered hierarchies. Moreover, predictive analytics for student success often encode assumptions grounded in structural inequalities, creating feedback loops that further disadvantage marginalized learners. To address these issues, policymakers, software developers, and education providers in the Global South must move beyond technical fixes toward design systems and frameworks guided by critical social theory, ensuring that AI systems in educational settings promote inclusion and equity rather than entrench historical inequities (Madaio et al., 2022).

Critical social theory seeks to address issues of social, economic, and individual freedom. Applying critical social theory to AI raises fundamental questions about how to educate people on the negative effects of AI, such as distortion, exploitation, discrimination, and injustice (Lindgren, 2023). A recent study by Rudolph et al. (2024) highlighted the threats posed by AI, including (1) academic integrity challenges; (2) dilution of university teacher roles; (3) quality, accuracy, and ethical concerns; (4) technological colonialism and monoculturalism; (5) erosion of graduate attributes; (6) graduate employment; (7) privacy and surveillance; (8) biases; (9) sustainability; and (10) regulatory and policy challenges. Individuals in the Global South and those from low socio-economic backgrounds could be affected by the commercialization of AI tools, especially in research and educational environments. This is because the developers of educational AI tools have access to financial assets, the technical experts who support the continuous production of these tools, the cultural and social capital to create a robust network along the value chain, and the goodwill to sustain their businesses. Moreover, in relation to social structure, critical analysis of AI focuses on the link between the implementation and advancement of AI on the one hand and other elements of social structure—culture, discourse, race, economics, gender, and politics—on the other. Today, we are also subjected to the technocratic control (Feenberg, 2017) of experts who shape our thinking, our way of life, and our understanding of society through the technology they produce for our use. In many developing countries, the introduction of AI in education has not been smooth because of the seemingly weak connection between AI and these other elements of social structure.

4.8 Ideology

While critical social theory is rooted in an ideology, this paper argues that ideology in this sense is vague if it is not connected to the systems of control in the political space. This could also be linked to Bourdieu’s (2000) notion of power, which describes the spaces where power is exerted over capital or the positions from which the relative value of capital is affected. Therefore, power, and the people who control technologies such as AI, will influence the actions, beliefs, and conduct of individuals who seek to use AI. Notwithstanding the possibility of many alternative ideas and aims, some concepts related to power relations are taken for granted as the objectives and truth about AI. Hence, in AI ideology, we often find relentless technological optimism, the belief that technological progress is an autonomous force that can save us all, and the tendency to delegate key decisions to opaque algorithms. As Lindgren (2023) warns, “one of the dangers of ideology is that once dominant views and priorities have been established, they can become naturalized, and therefore, appear legitimate and unquestionable.” This observation shows how ideology could deepen inequities in the use of AI, especially in the Global South.

To address the ideological dimensions of AI use, students must be able to critically read a technologically mediated reality shaped by algorithmic opacity and structural ambiguity, enabling them to engage with AI systems while recognizing the power relations and values embedded within them (Bearman & Ajjawi, 2023). The role of teachers will be to instruct students to look for and assess the information produced during an AI interaction and to consider how it may be applied to enhance their academic work (Bearman & Ajjawi, 2023).

4.9 Institutional Structures and Policy Frameworks

Feenberg (2017) argued that critical social theory’s critique of determinism has political consequences for government leaders, who face significant obstacles and uncertainties in the deployment of AI (Guenduez & Mettler, 2023). Today, policymakers across the world are realizing the importance of guidelines and policy frameworks, given the proliferation of AI in sectors such as energy, transportation, manufacturing, finance, leisure, health and well-being, education, and tourism (Guenduez & Mettler, 2023). Therefore, several national AI policies are being developed, setting out goals and visions for the state-level implementation of AI along with its limitations. Indeed, governments have a significant impact on how institutions, corporations, and societies adopt and use AI. The roles of government in the promotion of AI are fourfold: regulator, leader, enabler, and user (Guenduez & Mettler, 2023). Despite the significant need for policy guidance, there has been relatively little policy-related AI research to date. The potential and problems government leaders see in AI, as well as the paths and actions they plan to take, have not been sufficiently studied (Guenduez & Mettler, 2023).

However, understanding the structural conditions that emerge from critical theoretical analysis can assist individuals in improving the society in which they live (Lindgren, 2023). Critical social theory thus enables individuals to understand the social implications of the development and application of AI. As frequently asserted, AI has the capacity to expedite the achievement of global education objectives through the removal of barriers to education, the automation of administrative procedures, and the optimization of techniques for enhancing learning outcomes (Nemorin et al., 2023). These benefits could contribute to the reduction of the digital skills divide and make education more egalitarian and easily accessible (Nemorin et al., 2023).

5 Discussion and Conclusions

This paper examined the dark side of AI in education from a critical social theory perspective. An analysis of the reviewed literature revealed nine important elements, which we refer to as the dark side of AI, that adversely affect the adoption and use of AI in Africa. The findings also illustrate how differences in social, economic, cultural, and technological advancement across countries affect AI use. The paper calls for AI developers to adopt an inclusive design from the start of the AI application to ensure that AI tools are accessible, adaptable, and affordable to the economies of the Global South. Countries in the Global South, particularly those in Africa, must address governance challenges and deficiencies in the ability of institutions to create the foundational elements, such as infrastructure, trained personnel, and data, necessary for the advancement of AI (Arakpogun et al., 2021). While recent research has placed a spotlight on AI in the Global North, little is known about the dark side of AI in Africa. Countries in the Global South face not only digital barriers to inclusion but also issues of educational inequality, infrastructure, cultural identity, and power relations in the deployment of AI. These issues need to be robustly considered and mitigated, and this paper makes a case for why and how African voices, ethics, interests, expectations, and fears should become part of the growing global discourse on AI.

However, the paper has some limitations. It adopted a critical social theory approach, drawing primarily on existing literature, policy documents, and theoretical insights rather than large-scale empirical data. While this approach allows for deep critical analysis of educational inequalities and power relations in AI and education, it limits the ability to generalize findings across all educational contexts. The paper focuses largely on African and Global South perspectives, particularly in relation to data representation, AI governance, and knowledge production. Although this focus is intentional and theoretically justified, it may limit the applicability of the findings to regions with more advanced AI infrastructures or different regulatory environments. The scope of the review was largely based on the existing literature and may not have taken full cognizance of other emerging literature on AI in Africa.

6 Implications for Policy and Practice

Education providers worldwide are increasingly recognizing the necessity of developing guidelines and frameworks to manage the rapid growth of AI across the education sector (UNESCO, 2021). This recognition is important for ensuring that AI is implemented effectively and ethically. Despite the urgency of policy guidance on AI, there is a notable lack of research focusing on AI policy implications. Understanding the perspectives and planned actions of governments and education providers at all levels regarding AI is essential for creating effective educational policies. The potential advantages of AI in education could help bridge the AI literacy gap, making learning more equitable and accessible for all students, especially in the Global South. This highlights the importance of inclusive policies that consider diverse educational needs.

Along these lines, we consider critical social theory important in helping individuals comprehend the broader social implications of AI’s development and application. This understanding is necessary to foster a more equitable educational landscape. By applying critical theoretical analysis, individuals can better grasp the structural conditions that influence society, which can lead to improvements in educational policy and practice. This paper also emphasizes the need for policymakers at the state and institutional levels to enhance their understanding and readiness regarding the integration of AI in education. Without addressing existing educational inequalities, the potential benefits of AI use will remain limited. In summary, the implications for practice and policy in the context of AI in education underscore the importance of addressing inequalities, developing comprehensive guidelines, and fostering an inclusive approach to ensure that AI benefits all learners.

References

[1] Adu, K. K. (2018). The paradox of the right to information law in Africa. Government Information Quarterly, 35(4): 669–674.

[2] Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M. K., Irshad, M., Arraño-Muñoz, M., Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10(1): 311.

[3] Albeihi, H. H. M., Rice, M. F. (2025). Generative AI and language diversity: Implications for teachers and learners. Arab World English Journal, 16(1): 43–54.

[4] Arakpogun, E. O., Elsahn, Z., Olan, F., Elsahn, F. (2021). Artificial intelligence in Africa: Challenges and opportunities. In: Hamdan, A., Hassanien, A. E., Razzaque, A., & Alareeni, B., eds. The fourth industrial revolution: Implementation of artificial intelligence for growing business success. Cham: Springer.

[5] Bearman, M., Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5): 1160–1173.

[6] Birhane, A. (2021). Algorithmic injustice: A relational ethics approach. Patterns, 2(2): 100205.

[7] Bourdieu, P. (2000). Pascalian meditations. Redwood City: Stanford University Press, 102–106.

[8] Buolamwini, J., Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In: Proceedings of the 1st Conference on Fairness, Accountability and Transparency. New York: PMLR, 77–91.

[9] Cellan-Jones, R. (December 2, 2014). Stephen Hawking warns artificial intelligence could end mankind. Available from BBC NEWS website.

[10] Chiu, T. K. F. (2025). AI literacy and competency: Definitions, frameworks, development and future research directions. Interactive Learning Environments, 33(5): 3225–3229.

[11] Crawford, G., Mai-Bornu, Z., Landström, K. (2021). Decolonising knowledge production on Africa: Why it’s still necessary and what can be done. Journal of the British Academy, 9(s1): 21–46.

[12] Crawford, J., Cowling, M., Allen, K.-A. (2023). Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). Journal of University Teaching and Learning Practice, 20(3): 2.

[13] Daniel, K., Msambwa, M. M., Wen, Z. (2025). Can generative AI revolutionise academic skills development in higher education? A systematic literature review. European Journal of Education, 60(1): e70036.

[14] Dauvergne, P. (2021). The globalization of artificial intelligence: Consequences for the politics of environmentalism. Globalizations, 18(2): 285–299.

[15] Delanty, G., Harris, N. (2021). Critical theory and the question of technology: The Frankfurt School revisited. Thesis Eleven, 166(1): 88–108.

[16] D’Ignazio, C., Bhargav, R. (2015). Approaches to building Big Data literacy. In: Proceedings of the Bloomberg Data for Good Exchange Conference.

[17] Dodick, D. (2025). Localizing AIED: Moving beyond North–South narratives to serve contextual needs. AI & Society, 40(4): 2971–2981.

[18] Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity. Journal of Responsible Technology, 13: 100060.

[19] Engin, Z., Treleaven, P. (2019). Algorithmic government: Automating public services and supporting civil servants in using data science technologies. The Computer Journal, 62(3): 448–460.

[20] Feenberg, A. (2017). Critical theory of technology and STS. Thesis Eleven, 138(1): 3–12.

[21] Feenberg, A. (2023). Marcuse’s critique of technology today. Philosophy & Social Criticism, 49(6): 672–685.

[22] Fernandes, J. G. (2021). Artificial intelligence in telemedicine. In: Lidströmer, N., & Ashrafian, H., eds. Artificial intelligence in medicine. Cham: Springer, 1219–1227.

[23] Fowler, D. S. (2023). AI in higher education: Academic integrity, harmony of insights, and recommendations. Journal of Ethics in Higher Education, (3): 127–143.

[24] Garcia, J. M., Kelly, T. (November 1, 2015). The economics and policy implications of infrastructure sharing and mutualisation in Africa. Available from World Bank website.

[25] Ghana Education News Editorial Team. (December 20, 2023). How WAEC caught 2023 WASSCE candidates who used AI to answer exam questions? Available from Graphic Online website.

[26] Gwagwa, A., Kraemer-Mbula, E., Rizk, N., Rutenberg, I., de Beer, J. (2020). Artificial intelligence (AI) deployments in Africa: Benefits, challenges and policy dimensions. The African Journal of Information and Communication, 26: 1–28.

[27] Guenduez, A. A., Mettler, T. (2023). Strategically constructed narratives on artificial intelligence: What stories are told in governmental artificial intelligence policies. Government Information Quarterly, 40(1): 101719.

[28] Gupta, M., Kaul, S. (2024). AI in inclusive education: A systematic review of opportunities and challenges in the Indian context. MIER Journal of Educational Studies Trends and Practices, 14(2): 429–461.

[29] Hagerty, A., Rubinov, I. (2019). Global AI ethics: A review of the social impacts and ethical implications of artificial intelligence. arXiv Preprint, arXiv:1907.07892.

[30] Hendrian, S. (2025). Digital inequality in the age of AI: Inequality of access and technology literacy between urban and rural communities in Southeast Asia. International Journal of Social Research, 3(3): 111–122.

[31] Jandrić, P. (2019). The postdigital challenge of critical media literacy. The International Journal of Critical Media Literacy, 1(1): 26–37.

[32] Jobin, A., Katzenbach, C. (2023). The becoming of AI: A critical perspective on the contingent formation of AI. In: Lindgren, S., ed. Handbook of critical studies of artificial intelligence. Cheltenham: Edward Elgar Publishing, 43–55.

[33] Khan, M. S., Umer, H., Faruqe, F. (2024). Artificial intelligence for low income countries. Humanities and Social Sciences Communications, 11(1): 1422.

[34] Kiemde, S. M. A., Kora, A. D. (2022). Towards an ethics of AI in Africa: Rule of education. AI and Ethics, 2(1): 35–40.

[35] König, P. D. (2023). Citizen conceptions of democracy and support for artificial intelligence in government and politics. European Journal of Political Research, 62(4): 1280–1300.

[36] Kshetri, N. (2024). Linguistic challenges in generative artificial intelligence: Implications for low-resource languages in the developing world. Journal of Global Information Technology Management, 27(2): 95–99.

[37] Kuriachan, B., Yadam, G., Dinesh, L. (2021). AI enabled context sensitive information retrieval system. In: Gunjan, V. K., & Zurada, J. M., eds. Modern approaches in machine learning and cognitive science: A walkthrough. Cham: Springer, 203–214.

[38] Lazarus, M. D., Truong, M., Douglas, P., Selwyn, N. (2024). Artificial intelligence and clinical anatomical education: Promises and perils. Anatomical Sciences Education, 17(2): 249–262.

[39] Leander, K. M., Burriss, S. K. (2020). Critical literacy for a posthuman world: When people read, and become, with machines. British Journal of Educational Technology, 51(4): 1262–1276.

[40] Lindgren, S. (2023). Introducing critical studies of artificial intelligence. In: Lindgren, S., ed. Handbook of critical studies of artificial intelligence. Cheltenham: Edward Elgar Publishing, 1–19.

[41] Madaio, M., Blodgett, S. L., Mayfield, E., Dixon-Román, E. (2022). Beyond “fairness”: Structural (in)justice lenses on AI for education. In: Holmes, W., & Porayska-Pomsta, K., eds. The ethics of artificial intelligence in education. New York: Routledge, 203–239.

[42] Mahande, R. D., Fakhri, M. M., Suwahyu, I., Sulaiman, D. R. A. (2026). Unveiling the impact of ChatGPT: Investigating self-efficacy, anxiety and motivation on student performance in blended learning environments. Journal of Applied Research in Higher Education, 18(1): 282–297.

[43] Marcuse, H. (1964). One-dimensional man: Studies in the ideology of advanced industrial society. Boston: Beacon Press.

[44] Margetts, H., Dorobantu, C. (2019). Rethink government with AI. Nature, 568(7751): 163–165.

[45] Mergen, A., Çetin-Kılıç, N., Özbilgin, M. F. (2025). Artificial intelligence and bias towards marginalised groups: Theoretical roots and challenges. In: Vassilopoulou, J., & Kyriakidou, O., eds. AI and diversity in a datafied world of work: Will the future of work be inclusive? Leeds: Emerald Publishing Limited, 17–38.

[46] Merhi, M. I. (2023). An evaluation of the critical success factors impacting artificial intelligence implementation. International Journal of Information Management, 69: 102545.

[47] Ministry of Human Resource Development (Government of India). (July 30, 2020). National education policy 2020. Available from Ministry of Education of India website.

[48] Mohamed, S., Png, M.-T., Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4): 659–684.

[49] Montoya, L., Rivas, P. (2019). Government AI readiness meta-analysis for Latin America and the Caribbean. In: Proceedings of the 2019 IEEE International Symposium on Technology and Society. Medford: IEEE, 1–8.

[50] Ndlovu-Gatsheni, S. J. (2021). The cognitive empire, politics of knowledge and African intellectual productions: Reflections on struggles for epistemic freedom and resurgence of decolonisation in the twenty-first century. Third World Quarterly, 42(5): 882–901.

[51] Nemorin, S., Vlachidis, A., Ayerakwa, H. M., Andriotis, P. (2023). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1): 38–51.

[52] Nyaaba, M., Kyeremeh, P., Majialuwe, E. K., Owusu-Fordjour, C., Asebiga, E., A-Ingkonge, B. (2024). Generative AI in academic research: Awareness, gender usage patterns, and views among pre-service teachers. Journal of AI Research, 8(1): 45–60.

[53] Ofosu-Asare, Y. (2025). Cognitive imperialism in artificial intelligence: Counteracting bias with indigenous epistemologies. AI & Society, 40(4): 3045–3061.

[54] Ojo, A., Mellouli, S., Ahmadi Zeleti, F. A. (2019). A realist perspective on AI-era public management. In: Proceedings of the 20th Annual International Conference on Digital Government Research. Dubai: ACM, 159–170.

[55] OECD. (May 22, 2019). OECD AI principles. Available from OECD website.

[56] Ouchchy, L., Coin, A., Dubljević, V. (2020). AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media. AI & Society, 35(4): 927–936.

[57] Owoc, M. L., Sawicka, A., Weichbroth, P. (2021). Artificial intelligence technologies in education: Benefits, challenges and strategies of implementation. In: Proceedings of the 7th IFIP International Workshop on Artificial Intelligence for Knowledge Management. Macao: Springer, 37–58.

[58] Oxford Insights, the International Development Research Centre. (September 28, 2020). Government AI readiness index 2020. Available from Oxford Insights website.

[59] Pereira, V., Hadjielias, E., Christofi, M., Vrontis, D. (2023). A systematic literature review on the impact of artificial intelligence on workplace outcomes: A multi-process perspective. Human Resource Management Review, 33(1): 100857.

[60] Pilling, F., Coulton, P. (2019). Forget the singularity, it’s mundane artificial intelligence that should be our immediate concern. The Design Journal, 22(s1): 1135–1146.

[61] Popenici, S. (2022). Artificial intelligence and learning futures: Critical narratives of technology and imagination in higher education. New York: Routledge.

[62] Rudolph, J., Ismail, M. F. B. M., Popenici, S. (2024). Higher education’s generative artificial intelligence paradox: The meaning of chatbot mania. Journal of University Teaching and Learning Practice, 21(6): 1–35.

[63] Ryan, M., de Roo, N., Wang, H., Blok, V., Atik, C. (2025). AI through the looking glass: An empirical study of structural social and ethical challenges in AI. AI & Society, 40(5): 3891–3907.

[64] Santos, B. D. S. (2008). Another knowledge is possible: Beyond Northern epistemologies. New York: Verso Books.

[65] Segbenya, M., Bervell, B., Minadzi, V. M., Somuah, B. A. (2022). Modelling the perspectives of distance education students towards online learning during COVID-19 pandemic. Smart Learning Environments, 9(1): 13.

[66] Sidorkin, A. M. (2025). Artificial intelligence: Why is it our problem. Educational Philosophy and Theory, 57(14): 1215–1220.

[67] Stokel-Walker, C. (December 9, 2022). AI bot ChatGPT writes smart essays—Should professors worry? Available from Nature website.

[68] Tahir, A., Tahir, A. (2023). AI-driven advancements in ESL learner autonomy: Investigating student attitudes towards virtual assistant usability. Linguistic Forum—A Journal of Linguistics, 5(2): 50–56.

[69] Thompson, M. J. (2025). Critical theory and radical psychoanalysis: Rethinking the Marcuse–Fromm debate. Theory, Culture & Society, 42(3): 59–74.

[70] Toll, D., Lindgren, I., Melin, U., Madsen, C. Ø. (2019). Artificial intelligence in Swedish policies: Values, benefits, considerations and risks. In: Proceedings of the 18th IFIP WG 8.5 International Conference on Electronic Government. San Benedetto del Tronto: Springer, 301–310.

[71] UNESCO. (2021). Recommendation on the ethics of artificial intelligence. Paris: UNESCO.

[72] van Norren, D. E. (2023). The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective. Journal of Information, Communication and Ethics in Society, 21(1): 112–128.

[73] Xie, Y., Wu, S. E., Chakravarty, S. (2023). AI meets AI: AI and academic integrity—A survey on mitigating AI-assisted cheating in computing education. In: Proceedings of the 24th Annual Conference on Information Technology Education. Marietta: ACM, 79–83.

[74] Zawacki-Richter, O., Marín, V. I., Bond, M., Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators. International Journal of Educational Technology in Higher Education, 16(1): 1–27.

RIGHTS & PERMISSIONS

Higher Education Press
